Inequality and macroeconomic models

By Stéphane Auray and Aurélien Eyquem

“All models are wrong, some are useful.” This quote from George Box has often been used to justify the simplistic assumptions made in macroeconomic models. One of these has long been criticised: the fact that the behaviour of households, although differing (heterogeneous) in their individual characteristics (age, profession, gender, income, wealth, state of health, labour market status), can be approximated at the macroeconomic level by that of a so-called “representative” agent. This assumption of a representative agent means considering that the heterogeneity of agents and the resulting inequalities are of little importance for aggregate fluctuations.



Economists are not blind – they are well aware that households, companies and banks are not all identical. Many studies have looked at the effects of household heterogeneity on aggregate savings and, consequently, on macroeconomic fluctuations[1]. Other studies propose so-called “overlapping generations” models in which age plays an important role[2].

Most often, households in these models move from one state to another (from employment to unemployment, from one level of skills and therefore of income to another, from one age to another), and the transition probabilities are known. In the absence of insurance mechanisms (unemployment, redistribution, health), the risk of such a transition translates into an expected income or health risk, which leads agents to save in order to insure themselves. Differences in savings and consumption behaviour are in turn likely to produce differences in labour supply behaviour. Finally, changes in the macroeconomic environment (the unemployment rate, interest rates, wages, taxes and contributions, public spending, insurance schemes) potentially affect these individual transition probabilities and the resulting microeconomic behaviour. Aggregate risks therefore affect each household differently, depending on its characteristics, generating general equilibrium and redistributive effects. This relatively old line of work has, however, come up against two obstacles.
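To make the notion of known transition probabilities concrete, the sketch below (in Python, with purely illustrative numbers that are not taken from any of the models cited) sets up a minimal two-state employment/unemployment process: absent unemployment insurance, a currently employed household faces a quantifiable income risk next period, which is precisely what generates precautionary saving in this class of models.

```python
import numpy as np

# Illustrative two-state income process (0 = employed, 1 = unemployed).
# All numbers are hypothetical, chosen purely for exposition.
P = np.array([[0.95, 0.05],    # P[i, j] = probability of moving from state i to state j
              [0.40, 0.60]])
income = np.array([1.0, 0.3])  # income in each state, with no unemployment insurance

# Stationary distribution: long-run share of time spent in each state
eigvals, eigvecs = np.linalg.eig(P.T)
stationary = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
stationary /= stationary.sum()

# Income risk faced next period by a household employed today
expected_income = P[0] @ income
income_variance = P[0] @ (income - expected_income) ** 2

print(f"stationary distribution (employed, unemployed): {stationary}")
print(f"expected income next period if employed today:  {expected_income:.3f}")
print(f"variance of income next period:                 {income_variance:.4f}")
```

Adding an insurance scheme (for instance replacing 0.3 with a higher replacement income) mechanically reduces this variance, and with it the precautionary saving motive.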

The first is technical: tracking the evolution of the distribution of agents over time is mathematically complex. It is of course possible to reduce the extent of the heterogeneity by limiting ourselves to two agents (or two types of agent): those with access to the financial markets and those who are forced to consume their income in each period[3], working people and pensioners, etc. But while these simplified models make it possible to understand and validate broad intuitions, they remain limited, particularly from an empirical point of view. They do not, for example, allow us to carry out a realistic study of changes in inequality across the entire distribution of income or wealth.

The second obstacle is more profound: several of these studies concluded that models with heterogeneous agents, although much more complex to handle, did not perform significantly better than representative agent models in terms of aggregate macroeconomic validation (Krusell and Smith, 1998). Admittedly, these studies did not aim to analyse changes in inequality or their macroeconomic impact, but rather the contribution of agent heterogeneity to aggregate dynamics. In fact, the subject of inequality has long been considered almost or entirely orthogonal to macroeconomic analysis (at least where fluctuations are concerned) and to fall more within the remit of labour economics, microeconomics or collective choice theory. As a result, heterogeneous agent models have long suffered from the image of adding unnecessary complexity to the macroeconomic analysis of fluctuations.

In recent years, these models have undergone an exceptional revival, to the point where they seem to be becoming the standard for macroeconomic analysis. The first obstacle has been overcome by an exponential increase in the computing power used to solve and simulate these models, combined with the development of powerful mathematical tools that make them easier to solve (Achdou et al., 2022). The second obstacle has been overcome by the three-pronged movement we describe below: the growing body of work (particularly empirical work) demonstrating the importance of income and wealth inequalities for issues typically addressed by macroeconomics – over and above their intrinsic interest; the development of tools for measuring inequalities that make it possible to reconcile them with macroeconomic analysis; and the refinement of the assumptions made in models with heterogeneous agents.

First, numerous empirical studies show that precautionary saving plays a major role in macroeconomic fluctuations (Gourinchas and Parker, 2001). But precautionary saving and the sensitivity of saving (and household spending) to income are not identical across households. Empirical work suggests that the aggregate marginal propensity to consume (MPC) lies between 15% and 25% (Jappelli and Pistaferri, 2010), and that the MPC of a large proportion of the population is higher than the MPC obtained in representative agent models. In those models, as for households at the top of the wealth distribution, the MPC is approximately equal to the real interest rate, and therefore much lower than the empirical estimates (see Kaplan and Violante, 2022). It is therefore critical to understand the origin of a high aggregate MPC on solid microeconomic foundations, particularly if we wish to carry out a realistic study of the impact of macroeconomic policies (monetary, fiscal, etc.) that rely on multiplier effects linked to the distribution of MPCs.
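As a back-of-the-envelope illustration of why the representative agent benchmark delivers such a low MPC (a stylised textbook calculation, not one taken from the papers cited above):

```latex
% For an unconstrained, infinitely lived (permanent-income) household facing a
% real interest rate r, a one-off windfall \Delta W raises consumption in every
% period only by its annuity value:
\Delta C \;=\; \frac{r}{1+r}\,\Delta W \;\approx\; r\,\Delta W .
% With r of a few percent a year, the implied MPC is an order of magnitude
% below the 15-25% range estimated in the data.
```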

In recent years, an abundant and increasingly sophisticated empirical literature has dealt with issues relating to income inequality. Following the seminal article by Atkinson (1970) and more recent developments[4], we now have long data series measuring income inequality before and after tax, along with wealth inequality, across the entire household distribution for a large number of countries. In addition, what are known as Distributional National Accounts make it possible to compare in great detail the predictions of macroeconomic models with heterogeneous agents against microeconomic data that are fully consistent with the framework of macroeconomic analysis.

Finally, the heterogeneous agent models themselves have evolved. The “first generation” models generally considered a single asset (physical capital, in other words, company shares) and prevented agents from taking on debt, which led them to save for precautionary reasons. These assumptions could not explain why MPCs were high, and they failed to correctly replicate the observed distribution of income and, above all, of wealth. In reality, households have access to several assets (liquid savings, housing, equities), and the composition of their wealth differs greatly depending on the level of wealth: households generally start saving in liquid form, then invest their savings in property by taking out bank loans, and finally diversify their savings by buying shares (only those with the greatest wealth, above the 60th percentile of the wealth distribution) (Auray, Eyquem, Goupille-Lebret and Garbinti, 2023). In doing so, a large proportion of the population ends up in debt in order to build up property wealth that is not very liquid. Although they have high incomes, many of these households consume almost all their income, which reduces their capacity for self-insurance through savings. This increases their MPC (and therefore the aggregate MPC) in line with empirical observations (Kaplan, Violante and Weidner, 2014).

Macroeconomists can now fully integrate the analysis of inequalities in income, wealth and health into models based on more realistic microeconomic behaviour. They can re-examine the consensus reached on the conduct of monetary[5] or fiscal[6] policies and examine their redistributive effects. They are also in a position to quantify the aggregate and redistributive effects of trade or environmental policies, which are or will be at the heart of their political acceptability – giving rise to new horizons for less wrong, more useful models.

[1] See in particular Bewley (1977), Campbell and Mankiw (1991), Aiyagari (1994), Krusell and Smith (1998), Castaneda, Diaz-Gimenez and Rios-Rull (1998).

[2] See the work of Allais (1947) and Samuelson (1958), and among others De Nardi (2004).

[3] See Campbell and Mankiw (1989); Bilbiie and Straub (2004); Gali, Lopez-Salido and Valles (2007).

[4] See (2001, 2003), Piketty and Saez (2003, 2006), Atkinson, Piketty and Saez (2011), Piketty, Saez and Zucman (2018) and Alvaredo et al. (2020).

[5] Kaplan, Moll and Violante (2018); Auclert (2019); Le Grand, Martin-Baillon and Ragot (2023).

[6] Heathcote (2005); Le Grand and Ragot (2022); Bayer, Born and Luetticke (2020).   




How do rising interest rates impact French economic growth? An overview of macroeconometric models

By Elliot Aurissergues

The year 2022 was marked by a sharp inflationary surge in the United States and the euro zone. At the end of October, the inflation rate hit 7.7% over one year in the US, 10.6% in the euro zone and 7.1% in France, i.e. between 5 and 8 points above the inflation targets of the US Federal Reserve (Fed) and the European Central Bank (ECB). In response, the two central banks significantly tightened monetary policy. The Fed raised its key interest rate from 0% in March 2022 to 4% in November 2022. While the ECB’s key rate hike has been more measured for the moment, long-term rates on public debt in European countries have risen sharply, gaining between 250 and 300 basis points in one year in France and Germany, and even more in euro zone countries where the risk on public debt is perceived as higher. This increase is close to what is anticipated for short-term rates in 2023. The OFCE thus forecasts that the ECB’s key rate will reach 3% in the third quarter of 2023[1].



It is not easy to estimate the impact this tightening will have on economic activity. There is a very rich literature on the transmission of a monetary shock to the rest of the economy, using methods that, while conceptually similar or even equivalent, in practice lead to a wide variety of results. We are particularly interested here in the impact of a rate shock in macroeconometric models of the French economy. For this overview, we chose three models: the Mésange model co-developed by the French Treasury and the INSEE statistics agency (see Bardaji et al., 2017), the FR-BDF model of the Banque de France (see Lemoine et al., 2019, and Aldama and Ouvrard, 2020, for the set of analytical variants), and the OFCE’s e-mod model used in Heyer and Timbeau (2006).

What is a macroeconometric model?

Macroeconometric models are the oldest class of macroeconomic models. They combine accounting relationships (or equations) with estimated behavioural equations in order to make predictions about an economy’s response to shocks. The major macroeconomic variables (wages, prices, household consumption, investment, employment) are expressed in the form of error correction equations. In the long run, these converge towards a target determined by economic theory. Thus household consumption expenditure will converge in the long term on a certain fraction of household disposable income. In contrast, short-term behaviour is left much freer in order to achieve good forecasting performance. The interest rate enters essentially as a long-term factor: the impact of a rate shock is limited initially and becomes more important as the gap between the variables and their long-term targets closes.
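Schematically, an error correction equation for (log) household consumption c with (log) disposable income y takes the following form; this is a stylised illustration, not the exact specification of any of the three models discussed below.

```latex
\Delta c_t \;=\;
\underbrace{\alpha\,\Delta y_t + \dots}_{\text{short-run dynamics, left largely free}}
\;-\;
\underbrace{\gamma\,\bigl(c_{t-1} - \beta - y_{t-1}\bigr)}_{\text{gradual correction towards the long-run target}},
\qquad 0 < \gamma < 1 .
% The long-run target c = beta + y (a fixed share of disposable income) comes
% from theory; gamma governs how quickly the gap is closed, which is why
% variables that mainly enter the long-run relationships, such as the interest
% rate, affect the economy with a delay.
```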

The Mésange model

We consider the variant published in Bardaji et al. (2017). The results are summarised in Table 1. A monetary shock of 100 basis points (or 1 percentage point) results in a fall in GDP of 0.2% after one year, 0.8% after three years and 3% in the long run. This decline is due in particular to a sharp drop in investment: -2.7% after 3 years (-3.4% for the GFCF of non-financial companies) and -5.5% in the long term, but all components of aggregate demand are hit, including exports, which fall by 3.3% in the long run. Surprisingly, monetary tightening is reflected in higher prices in the Mésange model. Value-added market prices rise by 0.1% after one year, 0.8% after three years and more than 6% over a longer period! This price increase makes the economy less competitive, hence the fall in exports.

Two transmission channels are at work. The first is the direct negative impact of higher interest rates on business investment. In the Mésange model, the demand for capital, and therefore investment, depends in the long run on the cost of capital. The intuition is in line with standard microeconomic theory: companies choose the combination of capital and labour that maximises their profit. A rise in the cost of capital leads firms to substitute labour for capital and pushes down investment. The user cost of capital comprises the depreciation of capital, the long-term interest rate on government debt and a risk premium between government bonds and corporate loans, while the long-term elasticity of investment to this user cost is estimated at 0.44. Assuming a 10% capital depreciation rate, initial nominal rates at zero and no risk premium, a 1 percentage point increase in the interest rate translates in the long run into a decrease in investment of roughly 5%.

The second, much less intuitive channel plays a key role in this variant and explains in particular the response of prices and exports. An increase in the cost of capital means higher production costs for business. Firms pass these higher costs on in their selling prices, leading to higher inflation and lower competitiveness. Beaudry, Hou and Portier (2020) recently explored this positive impact of a rise in interest rates on prices via the cost-of-capital channel. Note that this effect is difficult to detect using more agnostic empirical methods (unrestricted VAR models, local projections). While these sometimes show positive effects of a rise in rates on prices, the effect is usually either insignificant or clearly negative over longer time horizons (see for example Miranda-Agrippino and Ricco, 2021).
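The order of magnitude quoted above for the first channel can be checked directly from the figures given in the text (a rough back-of-the-envelope calculation that ignores the model's full dynamics):

```python
# Long-run investment response to a 100 bp rate rise in a Mésange-type setting,
# using only the figures quoted above: 10% depreciation, initial rates at zero,
# no risk premium, elasticity of investment to the user cost of 0.44.
delta, r0, dr, elasticity = 0.10, 0.00, 0.01, 0.44

cost_before = delta + r0          # user cost of capital before the shock: 0.10
cost_after = delta + r0 + dr      # after a 100 bp rate rise: 0.11
pct_change_cost = cost_after / cost_before - 1.0   # +10%

investment_response = -elasticity * pct_change_cost
print(f"user cost: {cost_before:.2f} -> {cost_after:.2f} ({pct_change_cost:+.0%})")
print(f"long-run investment response: {investment_response:+.1%}")
# about -4.4%, i.e. the same order of magnitude as the roughly 5% decline quoted above
```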

The FR-BDF model

Compared to Mésange, one of the important features of the FR-BDF model is the way it treats agents’ expectations. This specificity explains why two interest rates intervene in the dynamics of the model. The short-term interest rate, determined by the European Central Bank, affects agents’ expectations, while the long-term interest rate on public bonds affects the long-term demand for production factors. The long-term elasticity of investment to the cost of capital is 0.5, which is slightly higher than in Mésange. The FR-BDF model does not incorporate systematic relationships between long and short rates. To generate the effect of a rate shock in the model, it is therefore necessary to add two distinct analytical variants, the first simulating the impact of a permanent rise in the short-term rate, the second the impact of a rise in the long-term rate. These two variants are available in Aldama and Ouvrard (2020). The effects of a rate shock are much weaker than in Mésange. After 3 years, real GDP decreases by 0.3%, against 0.9% in Mésange. This is due in particular to a much smaller reduction in GFCF (-1.9% compared to -3.4% after 3 years in Mésange). The effects on prices are more in line with the usual Keynesian intuition, with a 0.2% fall in the GDP deflator after 3 years. The resulting improvement in competitiveness leads to an increase in exports of 0.2% after 3 years (compared to a 0.2% decrease in Mésange). There are two main reasons for these differences. First, the transmission channel of the cost of capital to prices is neutralised in the FR-BDF model. While value-added prices are determined by the cost of production factors and a constant markup, as in Mésange, the cost of the capital factor that enters the price equation is not the user cost of capital but the marginal return to capital. Second, investment reacts much less strongly in the short term to the growth in value added in FR-BDF and is characterised by greater inertia. The negative investment shock therefore spreads more slowly.

The e-mod model

The impact of a rate shock in the version of the e-mod model developed by Heyer and Timbeau (2006) is closer to the results of FR BDF than to Mésange. However, the economic mechanism is different. The interest rate shock is transmitted via a fall in asset prices, particularly property prices, which leads to a reduction in consumption via a wealth effect. After 3 years, real GDP falls by 0.4%, a fall that is driven by the reduction in household spending (consumption and investment) (-0.6%) and, to a lesser extent, in business investment (-1.2%)[2]. As in FR-BDF, the rate shock negatively impacts prices. The GDP and household consumption deflators fall by 0.1%.

What does this overview tell us?

The main transmission channel of a rate shock in macroeconometric models involves the user cost of capital and business and household investment. The magnitude of this negative effect on investment depends on the long-run elasticity of the demand for capital to its user cost. These models estimate this elasticity econometrically. While criticisms can be made of the estimation methods, the value ultimately adopted (on the order of 0.5) seems plausible relative to other estimation methods (for example, a meta-study by Gechert et al., 2022, estimates it at 0.3) and implies moderate substitutability between production factors. It is also possible that the rate shock impacts household consumption via wealth effects, even if this channel remains controversial. In addition to these primary effects on aggregate demand, there are multiplier and accelerator effects that also vary between the models, adding to the uncertainty. We find the channel of production costs, which has a certain importance in the dynamics of the Mésange model, implausible. This leads us to retain in this paper the results of Aldama and Ouvrard (2020) and Heyer and Timbeau (2006).

The impact of monetary tightening on economic activity will depend not only on the response of the economy to a generic shock but also on the size of the current shock. In the October 2022 OFCE forecast, the one-year interest rate hike is projected to be 300 basis points, but this figure cannot be used as is. First, the rise is not coming as a complete surprise: interest rates fell to very low levels during the Covid-19 crisis, and normalisation was expected to start by 2022, albeit at a very gradual pace. Second, this is a rise in the nominal rate, whereas the relevant interest rate for the transmission channels of monetary policy as they appear in macroeconometric models is the real rate. This would not pose a problem if the rate hike were a pure monetary policy shock, i.e. if the central bankers had decided overnight to raise rates without any reason. But the rise we are experiencing is a response to an inflationary shock, a shock that affects real interest rates independently of any changes in the nominal rate. The solution adopted by the OFCE in its October 2022 forecasts[3] was to use the change in the real rate, calculated with measures of inflation expectations. This leads to a rate shock of around 2%.
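The step from the nominal to the real rate relies on the usual Fisher decomposition (a stylised reminder; the exact expectation measures used by the OFCE are described in the forecast itself):

```latex
% Ex-ante real rate and the relevant shock:
r_t \;=\; i_t - \pi^{e}_t
\qquad\Longrightarrow\qquad
\Delta r \;=\; \Delta i - \Delta \pi^{e} .
% Subtracting the rise in expected inflation (and discounting the part of the
% hike that was already anticipated) from the 300 bp nominal move is what
% brings the relevant shock down to the roughly 2% real-rate shock retained above.
```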

On the basis of the two variants that we have chosen, a rate shock of around 2% could, all else being equal, cause French GDP to fall between 0.6% and 0.8% by 2024/2025. The impact on prices would be negative but modest, between 0.3% and 0.4%. This estimate obviously remains very uncertain. As explained in the previous paragraph, calculating the magnitude of the shock itself requires making major assumptions. The models used are estimated with limited information and therefore have potentially broad confidence intervals.  More generally, the validity of this estimate of the effects of a rate shock is contingent on the validity of the models used.

Bibliography

Aldama P. and J.-F. Ouvrard, 2020, “Variantes analytiques du modèle de prévision et simulation de la Banque de France pour la France” [Analytical variants of the Banque de France forecasting and simulation model for France], Document de travail Banque de France, no. 750.

Bardaji J., B. Campagne, M. Khder, Q. Lafféter and O. Simon, 2017, “Le modèle macroéconométrique Mésange : réestimation et nouveautés” [The Mésange macroeconometric model: Re-estimation and innovations], Document de travail INSEE.

Beaudry P., S. Hou and F. Portier, 2020, “Monetary policy when the Phillips Curve is quite flat”, CEPR discussion paper.

Gechert S., T. Havranek, Z. Irsova and D. Kolcunova, 2022, “Measuring capital-labor substitution: The importance of method choices and publication bias”, Review of Economic Dynamics, no. 45, pp. 55-82.

Heyer E. and X. Timbeau, 2006, “Immobilier et politique monétaire” [Real estate and monetary policy], Revue de l’OFCE, no. 96, pp. 115-151.

Miranda-Agrippino S. and G. Ricco, 2021, “The transmission of monetary policy shocks”, American Economic Journal: Macroeconomics, vol. 13, no. 3, pp. 74-107.

OFCE, E. Heyer and X. Timbeau (dirs.), 2022, Perspectives 2022-2023 pour l’économie mondiale et la zone euro [2022-2023 Forecast for the Global Economy and the Euro Zone], Revue de l’OFCE, no. 178.


[1] See Table 2 in Appendix 1 of the OFCE forecast in the section Tour du monde de la situation conjoncturelle, [Overview of the economic situation], OFCE Forecasting and Analysis Department, under the direction of E. Heyer and X. Timbeau.

[2] These figures are obtained by dividing the results presented in Heyer and Timbeau (2006) by two, as the authors simulated an interest rate rise of 200 bps. As the e-mod model is not completely linear, the results are an approximation.

[3] See Box 2 in Perspectives 2022-2023 pour l’économie mondiale et la zone euro [2022-2023 Forecast for the Global Economy and the Euro Zone], E. Heyer and X. Timbeau (dirs.).




Missing deflation – unique to America?

By Paul Hubert and Mathilde Le Moigne

Was the way inflation unfolded after the 2007-2009 crisis atypical? According to Paul Krugman: “If inflation [note: in the United States] had responded to the Great Recession and aftermath in the same way it did in previous big slumps, we would be deep in deflation by now; we aren’t.” Indeed, after 2009, inflation in the United States remained surprisingly stable given actual economic developments. Has this phenomenon, which has been described as “missing deflation”, been observed in the euro zone?

Despite the deepest recession since the 1929 crisis, the inflation rate remained stable at around 1.5% on average between 2008 and 2011 in the United States, and 1% in the euro zone. Does this mean that the Phillips curve, which links inflation to real activity, has lost its empirical validity? In a note in 2016, Olivier Blanchard recalls on the contrary that the Phillips curve, in its simplest original version, remains a valid instrument for understanding the links between inflation and unemployment, despite this “missing disinflation”. Blanchard notes, however, that the link between the two variables has weakened because inflation is increasingly dependent on expectations of inflation, which are themselves anchored in the US Federal Reserve’s inflation target. In their 2015 article, Coibion and Gorodnichenko explain the missing deflation in the United States by the fact that inflation expectations tend to be influenced by the most visible price changes, such as changes in the price of a barrel of oil. Since 2015, we have seen a drop in inflation expectations concomitant with the decline in oil prices.
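For reference, the expectations-augmented Phillips curve at the heart of this debate can be written, in stylised form, as follows (an illustrative textbook version, not the exact specification estimated in the article discussed below):

```latex
\pi_t \;=\; \pi^{e}_t \;-\; \kappa\,\bigl(u_t - u^{*}\bigr) \;+\; \varepsilon_t .
% pi_t: inflation; pi^e_t: expected inflation; u_t - u*: unemployment gap.
% "Missing deflation": with u_t far above u*, the slack term predicts falling
% inflation, but if expectations remain anchored to the central bank's target
% (or track highly visible prices such as oil), actual inflation can stay
% roughly stable, as observed after 2009.
```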

The difficulty in accounting for recent changes in inflation using the Phillips curve led us, in a recent article, to evaluate its potential determinants and to consider whether the euro zone has also experienced a phenomenon of “missing deflation”. Based on a standard Phillips curve, we do not find the same results as Coibion and Gorodnichenko when considering the euro zone as a whole. In other words, real activity and inflation expectations give a good description of the way inflation is behaving.

This result seems to come, however, from an aggregation bias between national inflation behaviours within the euro zone. In particular, we find a notable divergence between the countries of northern Europe (Germany, France), which show a general tendency towards missing inflation, and the more peripheral countries (Spain, Italy, Greece), which exhibit periods of missing deflation. This divergence nevertheless appears from the beginning of our sample, that is to say, in the first years of the euro zone, and seems to have been absorbed from 2006 onwards, without any notable change during the 2008-2009 crisis.

In contrast to what happened in the United States, it seems that the euro zone did not experience missing deflation as a result of the 2008-2009 economic and financial crisis. On the contrary, it seems that divergences in inflation in Europe predate the crisis and tended to be absorbed by the crisis.

 




The secular stagnation equilibrium

By Gilles Le Garrec and Vincent Touzé

The economic state of slow growth and underemployment, coupled with low inflation or even deflation, has recently been widely discussed, in particular by Larry Summers, under the label of “secular stagnation”. The hypothesis of secular stagnation was expressed for the first time in 1938 in a speech by A. Hansen, which was finally published in 1939. Hansen was worried about insufficient investment and a declining population in the United States, following a long period of strong economic and demographic growth.

In a Note by the OFCE (no. 57 dated 26 January 2016 [in French]), we studied the characteristics and dynamics of a secular stagnation equilibrium.

A state of secular stagnation arises when an abundance of savings relative to the demand for credit pushes the “natural” real interest rate (the rate compatible with full employment) below zero. But if the real interest rate permanently remains above the natural rate, the result is a chronic shortage of aggregate demand and investment, with weakened growth potential.

To counter secular stagnation, the monetary authorities first reduced their policy rates, and then, having reached the zero lower bound (ZLB), they implemented non-conventional policies called quantitative easing. The central banks cannot really force interest rates to be very negative, otherwise private agents would have an interest in keeping their savings in the form of banknotes. Beyond quantitative easing, what other policies might potentially help pull the economy out of secular stagnation?

To answer this crucial question, the model developed by Eggertsson and Mehrotra in 2014 has the great merit of clarifying the mechanisms behind a fall into long-term stagnation, and it is helping macroeconomic analysis update its understanding of the multiplicity of equilibria and the persistence of the crisis. Their model is based on the consumption and savings behaviour of agents with finite lifespans in a context of a rationed credit market and nominal wage rigidity. As for monetary policy, the central bank sets the nominal interest rate according to a Taylor rule.
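In stylised form, the policy block of this class of models combines a Taylor rule with the zero lower bound (the notation below is illustrative, not the exact rule of Eggertsson and Mehrotra):

```latex
i_t \;=\; \max\!\Bigl(0,\; r^{*} + \pi^{*} + \phi_\pi\,\bigl(\pi_t - \pi^{*}\bigr)\Bigr),
\qquad \phi_\pi > 1 .
% When the natural rate r* turns sufficiently negative, the max(0, .) constraint
% binds: the nominal rate cannot fall far enough, the real rate remains above
% the natural rate, and the economy can settle into the stagnation equilibrium.
```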

According to this approach, secular stagnation was triggered by the 2008 economic and financial crisis. This crisis was linked to high household debt, which ultimately led to credit rationing. In this context, credit rationing leads to a fall in demand and excess savings. Consequently, the real interest rate falls. If credit tightens sharply, the real interest rate compatible with full employment becomes negative, which leaves conventional monetary policy toothless. In this case, the economy plunges into a lasting state of underemployment of labour, characterised by output that is below potential and by deflation.

In the model proposed by Eggertsson and Mehrotra, there is no capital accumulation. As a result, the underlying dynamic is characterised by adjustments without transition from one steady state to another (from full employment to secular stagnation in the event of a credit crunch, and in the other direction if the tightening of credit is limited).

To extend the analysis, we considered the accumulation of physical capital as a prerequisite to any productive activity (Le Garrec and Touzé, 2015). This highlights an asymmetry in the dynamics of secular stagnation. If the credit constraint is loosened, then capital converges on its pre-crisis level. However, exiting the crisis takes longer than entering it. This property suggests that economic policies used to fight against secular stagnation must be undertaken as soon as possible.

There are a number of lessons offered by this approach:

  • To avoid the ZLB, there is an urgent need to create inflation while avoiding speculative asset “bubbles”, which could require special regulation. The existence of a deflationary equilibrium thus raises the question of the appropriateness of monetary policy rules that are overly focused on inflation.
  • One should be wary of the deflationary effects of policies to boost potential output. The right policy mix is to support structural policies with a sufficiently accommodative monetary policy.
  • Cutting savings to raise the real interest rate (e.g. by facilitating debt) is an interesting possibility, but the negative impact on potential GDP should not be overlooked. There is a clear trade-off between exiting secular stagnation and depressing potential GDP. One interesting solution could be to finance infrastructure, education or R&D (higher productivity) through government borrowing (raising the real equilibrium interest rate). Indeed, an aggressive investment policy (public or private) funded so as to push up the natural interest rate can meet a dual objective: to support aggregate demand and to develop the productive potential.

 




Elections and the (first) derivative of unemployment: the turnaround strategy

By Guillaume Allègre

A ministerial adviser recently explained to me what he thinks is the strategy of the French President on macroeconomic management and unemployment, which could be called a turnaround strategy: “In relation to the presidential elections, the goal is to reduce unemployment in 2016-2017. The way people vote is based on the way unemployment has been changing just in the last year or even the last 6 months. Like for Jospin in 2002.” The belief that for unemployment and the economy in general what counts is the derivative, i.e. the recent evolution and not the actual level, has deep roots in the technocratic-political milieu: “it’s the derivative, stupid!” is the new “it’s the economy, stupid!” (the maxim of Bill Clinton’s election strategist in 1992).

This belief stems in part from an intuition confirmed by a well-known psychological experiment. Participants in the study were subjected to two painful experiences during which one of their hands was immersed in ice water. One version lasted 60 seconds and the other 90 seconds. In the second version, the first 60 seconds were the same as in the first, while the 30 added seconds were a bit less painful (the experimenter poured some warm water into the container). Later, the participants had to choose which of the two experiences to repeat: 80% chose the longer one. This seems irrational, because in the longer experience the total amount of pain is greater. To an objective observer, this is what should count (“the area under the curve, or the integral”). But the participants have a selective memory: they are more strongly influenced by the representative moments of the experience, and in particular here by the improvement at the end of the test. Daniel Kahneman, the 2002 Nobel Prize-winner in economics for his work on biases in judgment, which he has popularized in a book, distinguishes two representative moments of an unpleasant episode: the peak of suffering and the end [1].

Economists, especially in America, have developed econometric models of electoral forecasts to estimate the links between election results and the economy. The popularity of these models varies with their predictive power for the election: in 1992, half of the models predicted an easy re-election for George Bush; in 1996, the re-election of Clinton was reliably predicted; but in 2000, virtually all the models forecast a landslide victory for Al Gore… And the model with the closest forecast in that election (0.6%) was off by 5 points in the next one. Of course, thanks to the proliferation of predictions, it is always possible to find a model with a good record for the time being, such as Paul the Octopus.

Despite this motley record, these politico-econometric models have been imported into France. In their generic form, they attempt to explain the percentage of the vote going to a candidate or a party based on economic variables (GDP, unemployment, or levels or changes in income) and political variables (popularity of the President and the Prime Minister). The vast majority of models adopt as an economic variable changes in unemployment over a relatively short horizon, on average one year. The conclusion drawn from these empirical estimates is that French voters seem to have limited memories (Dubois, 2007).
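In stylised form, the vote functions estimated in this literature boil down to a regression of the following type (an illustrative specification, not that of any particular study):

```latex
V_t \;=\; \alpha \;+\; \beta\,\Delta u_{t,h} \;+\; \gamma\,POP_t \;+\; \varepsilon_t .
% V_t: vote share of the incumbent; \Delta u_{t,h}: change in unemployment over
% a window of h months before the election (around one year in most studies);
% POP_t: popularity of the executive. The short horizon h is what underlies the
% conclusion that voters have limited memories.
```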

But these studies are faced with a major problem: the low number of observations (nine presidential elections and thirteen legislative elections between 1958 and 2011). “We don’t vote often enough to suit the econometricians,” says Lafay (1995) [2]. In other words, the law of large numbers cannot be applied in this type of configuration. This is compounded by the fact that the number of variables that change in the context of these elections is almost as high as the number of elections (the existence of a government of multiple party “cohabitation”; legislative elections on their own or coupled to the presidential elections; the presence or absence of an incumbent in the presidential election; parliamentary elections held before the deadline; the presence or absence of a leftist candidate in the second round of the presidential elections; the importance of tactical voting when there are three candidates in the second round of legislative elections [triangulaires]; etc.).

There are other technical problems confronting the econometricians. In a comprehensive review of the literature analyzing 71 political-economic studies on voting in France between 1976 and 2006, Dubois describes the way these problems are handled – “if at all” – as “relatively frustrating”. Just as in the United States, the predictions meet with “varied success”. There is also the problem of what econometricians call “endogeneity”: the politico-economic models attempt to explain or predict the outcome of elections using economic variables (unemployment) and the popularity of the executive. However, there is little doubt that the popularity of the executive depends in part on unemployment levels and trends: given this, the lack of significance of changes in the longer-term economic variables may be explained by the fact that their impact is already included in the popularity of the executive. In short, these empirical studies are not sufficient to conclude that in economic terms, voters have short memories.

In the words of Kahneman, a machine for jumping to conclusions is at work: an intuition (voters’ memory is selective) that relies on psychological studies (whose subject is far removed from elections) and is confirmed by econometric studies (which are not robust and therefore merely reproduce the researchers’ a priori assumptions). The story told is consistent, and it seems to be supported by facts… Upon reflection, it may seem scary that this kind of rhetorical cocktail is influencing the actions of politicians. This is all the more frightening since, from an outside observer’s viewpoint and from the perspective of social welfare and hence the goals of public policy, what matters is obviously the level of unemployment over several years (its integral) and not the way it has changed in the last year (its first derivative)!

Many rules have been implemented at the European level, and now the national level too, to prevent the politicians heading up government from trying to win elections by pursuing policies that, while they may reduce short-term unemployment, also build up long-term deficits. From the Maastricht criteria (a government deficit of less than 3% of GDP) to the recent European multiannual financial framework, these rules are justified by the belief that politicians are encouraged to pursue a lax fiscal policy because the electoral process does not take into account future generations, who, by construction, don’t vote. But if governments begin to believe that it is short-term economic developments that count, then the incentives are reversed, especially if it is easier to reduce unemployment after having first increased it, which would lead to a trajectory of weak growth and of excessively high unemployment. [3] In this case, the solution cannot come from governance through new binding rules, which in any case have so far proved to be ineffective. It is necessary to rely on the fact that this kind of turnaround strategy can work in electoral terms only if the citizens fail to understand that they are being manipulated. Exposing the manipulation is then more efficient than implementing rules. Duly noted.


[1] Consequently, those who follow this theory today should also deal with unemployment at its peak, and not merely with the way it is changing at the end of their mandate.

[2] Lafay J.-D. 1995, “Note sur l’élection présidentielle de 1995 et les apports de l’analyse économétrique des comportements électoraux”, mimeograph, LAEP, University of Paris 1. Cited by Dubois.

[3] This post emphasizes that it was possible to achieve the same ratio of debt to GDP in 2032 by taking a path that would have reduced unemployment in the euro zone by 3 points in 2013.




High-impact economists

By Zakaria Babutsidze and Mark J. McCabe

This coming Monday, October 14, 2013, as many as three economists will join the elite group of winners of the Sveriges Riksbank Prize in Economic Sciences in Memory of Alfred Nobel. The Royal Swedish Academy of Sciences is responsible for selecting the Laureates in Economic Sciences from among the candidates recommended by the Economic Sciences Prize Committee. In early October, the Academy selects the laureates through a majority vote.

Presumably, the main criterion for awarding this prize is the impact that the winner(s) have had on society.[1] Clearly the assessment of such an impact is not an easy and straightforward matter. It involves approaching the problem from a variety of perspectives, some more objective than others. It is probably safe to assume that researchers whose work has had a large impact on society have also influenced the discipline of economics.

In this post we report some statistics in order to assess different economists’ impact on the discipline. To do this, we use data from 48 peer-reviewed journals in Economics and Finance. Each of these journals published at least five articles authored by one or more of the prize winners between 1969 and 2012. The data are collected from Thomson Reuters’ ISI Web of Science and contain all articles published in these 48 journals starting in 1956 and ending in 2012, and all citations to each of these articles up to (and including) 2012.

The impact of a researcher is often measured by the number of citations his or her work has generated, e.g. the average annual number of citations to each article, weighted by the number of authors. This measure allows us to compare (albeit imperfectly) articles published at different points in time. However, for the case at hand, we are interested in the long-run (or total) impact of the researcher. Therefore, our guiding indicator will be the total number of citations generated by the works of an economist, weighted by the number of authors.

[Note: In identifying the pool of researchers eligible for the 2013 Prize, we excluded all past winners and, following the Academy’s guidelines, any other scholars who are now deceased.]

To get a sense of the citation impact of individual papers, take a look at Table 1, which lists the top 10 most cited articles in economics not authored by any prior prize winners. Although this provides an incomplete picture of a researcher’s total career impact, the Academy normally cites influential papers in the press releases (and explanatory materials) announcing the winners.

[Table 1. The ten most cited articles in economics not authored by a prior prize winner]

Table 1 features 11 economists who are eligible for the prize. Of these 11, only one, Michael Jensen, has two papers in the top 10. The table also demonstrates the large gap between the citation counts of the papers ranked first and second.

In what follows we present a researcher or career-level analysis. We assess the impact in two different ways. One approach utilizes all of the papers authors have written in their careers up to 2012 (this is a set comprising more than 170,000 papers). Our other approach is to utilize only the highest-impact papers (the top 100 most cited papers ever written).[2]

Before presenting the list of the most cited economists, we first attempt to assess the power of the exercise. Namely, we ask: what is the chance that people with high impact, as measured by the number of citations, actually get awarded the prize? To answer this we take the top 25 most cited researchers according to each of the two criteria defined above (using all articles and the top 100 most cited articles) and see how many of those 25 have actually been awarded the prize. It turns out that in each case 13 out of 25 researchers have already won the prize.[3][4] These results suggest that the number of citations received by a researcher is a reasonable proxy for impact as defined by the Academy.

Next, the list of the top 10 economists who are eligible for the Nobel Prize this year is presented in Table 2. Panel A utilizes all articles in our dataset. Panel B of the table presents results using only the top 100 most cited articles. The columns titled Rank report the rank of the economist in the given list. The Total Rank columns refer to the rank of the economist in the list of high-impact economists that includes authors who have won the prize and those who are deceased. The Citations columns report the total number of citations associated with the relevant set of articles by the author, weighted by the number of authors (e.g. if an article authored by n authors received z citations, then each listed author is credited with z/n citations).
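The author-weighting described in this paragraph can be summarised in a few lines of code (toy data, for illustration only):

```python
# Author-weighted citation counts: an article with n authors and z citations
# credits each listed author with z / n citations. Data below are made up.
from collections import defaultdict

articles = [
    {"authors": ["A", "B"], "citations": 300},        # A and B get 150 each
    {"authors": ["A"], "citations": 120},              # A gets 120
    {"authors": ["B", "C", "D"], "citations": 90},     # 30 each
]

weighted = defaultdict(float)
for article in articles:
    share = article["citations"] / len(article["authors"])
    for author in article["authors"]:
        weighted[author] += share

# Rank authors by total author-weighted citations (the guiding indicator above)
for rank, (author, total) in enumerate(sorted(weighted.items(), key=lambda kv: -kv[1]), start=1):
    print(rank, author, total)
```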

 

[Table 2. The top 10 eligible economists by author-weighted citations – Panel A: all articles; Panel B: top 100 most cited articles]

As one can see from Table 2, eight economists appear in both lists. Five of these eight are also featured in Table 1. These eight people are outstanding researchers by our measures and will most likely be among the economists considered for the 2013 prize.

The exercise that we have reported here measures researchers’ impact on the discipline. However, the main guiding principle behind the Economic Sciences Prize is the impact on society. The two do not correlate perfectly. To see this, consider last year’s prize winners, Alvin Roth and Lloyd Shapley. They were awarded the prize “for the theory of stable allocations and the practice of market design”. Their work has generated significant social benefits. For example, Roth is a co-founder of the New England Program for Kidney Exchange, which enables organ transplantation where it otherwise could not be accomplished. However, if we apply our measures to Roth and Shapley, their performance is not outstanding. Neither of them has authored an article that enters the list of the 100 most cited articles in economics; therefore they do not figure in our rankings using this particular methodology. When we consider all articles, Roth ranks 99th, while Shapley ranks 979th.

 

Postscript: In the discussion above, our primary intention is not to predict Monday’s winners. Nevertheless, it seems that the Economic Sciences Prize Committee first selects a sub-discipline or a narrow research area to recognize, and only after this selects the candidates who have contributed most to advances in that area. Recall that we provided an analysis of total citations: we have not performed any breakdown by research area and have not modeled the Committee’s area selection process. In contrast to our work, area selection is an important component of the well-known efforts by the Intellectual Property and Science business of Thomson Reuters to predict winners of the Economic Sciences Prize. This year they predict that one of the following three areas is likely to be honored by the Academy: microeconometrics, time-series econometrics or regulation theory.[5] In each of these three areas they predict two or three winners. In the table below, without further comment, we provide the list of people they predict to win the Nobel Prize, along with their ranks in our high-impact economists list.

 

[Table 3. Thomson Reuters’ predicted laureates and their ranks in our high-impact economists list]

 

 


[1] In selecting a winner for the Economic Sciences Prize, the Swedish Academy follows the same principle that is used in awarding the five original Nobel Prizes, namely choosing those individuals, “…who have conferred the greatest benefit to mankind.”

[2] Book chapters and working papers are not included in our dataset.

[3] However, the identities of the 13 prize winners are somewhat different across the two procedures. When all articles are considered, the 13 winners among the top 25 most highly cited authors are (in decreasing order of importance): Becker, Lucas, Heckman, Stiglitz, Engle, Merton, Kahneman, Solow, Arrow, Granger, Akerlof, Krugman, Williamson. When the set of the top 100 articles is considered, the 13 winners are Engle, Becker, Heckman, Kahneman, Solow, Coase, Akerlof, Lucas, Arrow, Granger, Sharpe, Black and Scholes.

[4] Note that the lists also include a number of influential economists who died without winning the prize. These include Zvi Griliches, William Meckling, Charles Tiebout,  Amos Tversky and Halbert White.

[5] It is noteworthy that seven of the 10 papers listed in table 1 are in the general area of econometrics.




Austerity in Europe: a change of course?

By Marion Cochard and Danielle Schweisguth

On 29 May, the European Commission sent the members of the European Union its new economic policy recommendations. In these recommendations, the Commission calls for postponing the date for achieving the public deficit goals of four euro zone countries (Spain, France, Netherlands and Portugal), leaving them more time to hit the 3% target. Italy is no longer in the excessive deficit procedure. Only Belgium is called on to intensify its efforts. Should this new roadmap be interpreted as a shift towards an easing of austerity policy in Europe? Can we expect a return to growth in the Old Continent?

These are not trivial matters. An OFCE Note (no. 29, 18 July 2013) attempts to answer these questions by simulating three fiscal policy scenarios using the iAGS model. It appears from this study that postponing the public deficit targets in the four euro zone countries does not reflect a real change of course in Europe’s fiscal policy. The worst-case scenario, in which Spain and Portugal would have been subject to the same recipes as Greece, was, it is true, avoided. The Commission is implicitly agreeing to allow the automatic stabilizers to work when conditions deteriorate. However, for many countries, the recommendations with respect to budgetary efforts still go beyond what is required by the Treaties (an annual reduction in the structural deficit of 0.5 percent of GDP), the consequence being an increase of 0.3 percentage point in the euro zone unemployment rate between 2012 and 2017.

We believe, however, that a third way is possible. This would involve adopting a “fiscally serious” position in 2014 that does not call into question the sustainability of the public debt. The strategy would be to maintain a constant tax burden and to allow public spending to keep pace with potential growth. This amounts to maintaining a neutral fiscal stance between 2014 and 2017. In this scenario, the public deficit of the euro zone would improve by 2.4 GDP points between 2012 and 2017 and the trajectory of the public debt would be reversed starting in 2014. By 2030, the public balance would be in surplus (0.7% of GDP) and debt would be close to 60% of GDP. Above all, this scenario would lower the unemployment rate significantly by 2017. The European countries could perhaps learn from the wisdom of Jean de La Fontaine’s fable of the tortoise and the hare: “Rien ne sert de courir, il faut partir à point”, i.e. slow and steady wins the race.
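The debt arithmetic behind this scenario is the standard accumulation identity (a stylised reminder, not the full iAGS model):

```latex
b_t \;=\; \frac{1+i_t}{1+g_t}\, b_{t-1} \;-\; pb_t .
% b_t: debt-to-GDP ratio; pb_t: primary balance in % of GDP; i_t: apparent
% nominal interest rate on the debt; g_t: nominal GDP growth. With a constant
% tax burden and spending growing in line with potential output, pb_t improves
% as activity recovers, which is what turns the debt ratio around from 2014
% onwards in the scenario described above.
```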




France: why such zeal?

By Marion Cochard and Danielle Schweisguth

On 29 May, the European Commission sent the members of the European Union its new economic policy recommendations. As part of this, the Commission granted France an additional two years to reach the deficit reduction target of 3%. This target is now set for 2015, and to achieve this the European Commission is calling for fiscal impulses of -1.3 GDP points in 2013 and -0.8 point in 2014 (see “Austerity in Europe: a change of course?”). This would ease the structural effort needed, since the implementation of the previous commitments would have required impulses of -2.1 and -1.3 GDP points for 2013 and 2014, respectively.

Despite this, the French government has chosen not to relax its austerity policy and is keeping in place all the measures announced in the draft Finance Act (PLF) of autumn 2012. The continuing austerity measures go well beyond the Commission’s recommendations: a negative fiscal impulse of -1.8 GDP point, including a 1.4 percentage point increase in the tax burden for the year 2013 alone. Worse, the broad guidelines for the 2014 budget presented by the government to Parliament on 2 July 2013 point to a structural effort of 20 billion euros for 2014, i.e. one percentage point of GDP, whereas the Commission required only 0.8 point. The government is thus demanding an additional 0.6 GDP point fiscal cut, which it had already set out in the multi-year spending program in the 2013 Finance Act.

The table below helps to provide an overview of the effort and of its impact on the French economy. It shows the trends in growth, in unemployment and in the government deficit in 2013 and 2014, according to three budget strategies:

  1. One using the relaxation recommended by the Commission in May 2013;
  2. One based on the budget approved by the government for 2013 and, a priori, for 2014;
  3. One based on an alternative scenario that takes into account the negative 1.8 GDP point fiscal impulse for 2013 and calculates a fiscal impulse for 2014 that would be sufficient to meet the European Commission’s public deficit target of -3.6%.

[Table. Growth, unemployment and government deficit in 2013 and 2014 under the three budget scenarios]

According to our estimates using the iAGS model [1], the public deficit would be cut to 3.1% of GDP in 2014 in scenario (2), whereas the Commission requires only 3.6%. As a consequence of this excess of zeal, the cumulative growth for 2013 and 2014 if the approved budget is applied would be 0.7 percentage point lower than growth in the other two scenarios (0.8 point against 1.5 points). The corollary is an increase in unemployment in 2013 and 2014: the unemployment rate, around 9.9% in 2012, would thus rise to 11.1% in 2014, an increase of more than 350,000 unemployed for the period. In contrast, the more relaxed scenario from the European Commission would see a quasi-stabilization of unemployment in 2013, while the alternative scenario would make it possible to reverse the trend in unemployment in 2014.

While the failure of austerity policy in recent years seems to be gradually impinging on the position of the European Commission, the French government is persisting along its same old path. In the face of the social emergency that the country is facing and the paradigm shift that seems to be taking hold in most international institutions, the French government is choosing to stick to its 3% fetish.


[1] iAGS stands for the Independent Annual Growth Survey. This is a simplified model of the eleven main economies in the euro zone (Austria, Belgium, Finland, France, Germany, Greece, Ireland, Italy, Netherlands, Portugal and Spain). For more detail, see the working document Model for euro area medium term projections.




The tax credit to encourage competitiveness and jobs – what impact?

By Mathieu Plane

Following the submission to the Prime Minister of the Gallois Report on the pact for encouraging the competitiveness of French industry, the government decided to establish the tax credit to encourage competitiveness and jobs (“the CICE”). Based on the rising trade deficit observed over the course of the last decade, the sharp deterioration in business margins since the onset of the crisis and growing unemployment, the government intends to use the CICE to restore the competitiveness of French business and to boost employment. According to our assessment, which was drawn up using the e-mod.fr model as described in an article in the Revue de l’OFCE (issue 126-2012), within five years the CICE should help to create about 150,000 jobs, bringing the unemployment rate down by 0.6 point and generating additional growth of 0.1 GDP point by 2018.

The CICE, which is open to all companies that are assessed on their actual earnings and are subject to corporation tax or income tax, will amount to 6% of the total wage bill for wages below 2.5 times the minimum wage (SMIC), excluding employer contributions. It will come into force gradually, with a rate of 4% in 2013. The CICE’s impact on corporate cash flow will be felt with a lag of one year from the base year, meaning that the CICE will give rise to a tax credit on corporate profits from 2014. On the other hand, some companies could benefit in 2013 from an advance on the CICE expected for 2014. The CICE should represent about 10 billion euros for the 2013 fiscal year, 15 billion in 2014 and 20 billion from 2015. As for the financing of the CICE, half will come from additional savings on public spending (10 billion), the details of which have not been spelled out, and half from tax revenue, i.e. an increase in the standard and intermediate VAT rate from 1 January 2014 (6.4 billion) and stronger environmental taxation.
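As a rough illustration of the mechanics just described, the sketch below computes the credit for a single firm. The annual SMIC figure and the payroll are hypothetical, and real-world refinements of the scheme (advances, reporting rules, the treatment of part-time work, etc.) are ignored.

```python
# Hedged sketch of the CICE for one firm: 6% (4% for the 2013 fiscal year) of
# gross wages -- excluding employer contributions -- paid to employees earning
# less than 2.5 times the SMIC. Wages above the threshold are excluded entirely.
SMIC_ANNUAL = 17_000                  # hypothetical annual gross minimum wage (euros)
THRESHOLD = 2.5 * SMIC_ANNUAL         # eligibility ceiling per employee

def cice_credit(gross_wages, fiscal_year):
    """Tax credit accruing to the firm for the given fiscal year."""
    rate = 0.04 if fiscal_year == 2013 else 0.06
    eligible_wage_bill = sum(w for w in gross_wages if w < THRESHOLD)
    return rate * eligible_wage_bill

payroll = [18_000, 25_000, 40_000, 60_000]   # the 60,000 wage exceeds 2.5 x SMIC
print(cice_credit(payroll, 2013))   # 4% of 83,000 = 3,320 euros
print(cice_credit(payroll, 2015))   # 6% of 83,000 = 4,980 euros
```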

This reform is similar in part to a fiscal devaluation and in some respects bears similarities to the mechanisms of the “quasi-social VAT” (see Heyer, Plane, Timbeau [2012], “Economic impact of the quasi-social VAT” [in French]) that was set up by the Fillon government but eliminated with the change of the parliamentary majority as part of the second supplementary budget bill in July 2012.

According to our calculations using 2010 DADS data, the CICE would lower average labour costs by 2.6% in the market sector. The sectors where labour costs would be most affected by the measure are construction (-3.0%), industry (-2.8%) and market services (-2.4%). The ultimate sectoral impact of the measure depends both on the reduction in labour costs and on the weight of wages in value added in a given sector. Overall, the CICE would represent 1.8% of the value added of industrial enterprises, 1.9% of value added in construction and 1.3% in market services. In total, the CICE would represent 1.4% of the value added of market sector companies. According to our calculations, the total value of the CICE would be 20 billion euros: 4.4 billion in industry, 2.2 billion in construction and 13.4 billion in market services. Industry would therefore receive 22% of the total spending, i.e. more than its share of value added, which is only 17%. While this measure is intended to revive French industry, this sector would thus not be the primary beneficiary in absolute terms, although, along with construction, it is relatively well placed to benefit given its wage structure. Furthermore, industry can benefit from knock-on effects related to reductions in the prices of inputs generated by the lowering of production costs in other sectors.

The expected effects of the CICE on growth and employment differ in the short and long term (see graphic). By giving rights in 2014 based on the 2013 fiscal year, the CICE will have positive effects in 2013, especially as the tax hikes and public spending cuts will not take effect until 2014. The result will be a positive impact on growth in 2013 (0.2%), although it will take longer to affect employment (+23,000 in 2013) due to the time it takes employment to adjust to activity and the gradual ramping-up of the measure.

On the other hand, the impact of the CICE will be slightly recessive from 2014 to 2016, as the loss in household purchasing power linked to higher taxes and the cuts in public spending (household consumption and public demand will contribute -0.2 GDP point in 2014 and then -0.4 point in 2015 and 2016) will outweigh lower prices and the recovery of business margins. Beyond the first year, the positive impact on growth of the income transfer associated with the CICE will be slow to appear: the gains in market share generated by lower prices and higher business margins rely on a medium- to long-term supply-side mechanism, whereas the demand-side effects of the measure's financing are felt more quickly.

The implementation of the CICE will gradually generate gains in market share that will make a positive contribution to activity by improving the foreign trade balance (0.4 GDP point in 2015 and 2016), whether through increased exports or reduced imports. From 2017, the external balance will contribute somewhat less (0.3 GDP point), because the improved purchasing power of households will slow the reduction in imports. Despite higher margins and the improved profitability of capital, productive investment will fall off slightly, due both to the substitution effect between labour and capital and to the negative accelerator effect related to the fall in demand.

With the decline in the cost of labour relative to the cost of capital, the substitution of labour for capital will gradually boost employment to the detriment of investment, leading to more job-rich growth and lower productivity gains. This dynamic will result in steady gains in employment despite the slight fall-off in activity between 2014 and 2016. Owing to the rise in employment and the fall in unemployment, but also to possible wage compensation within companies in response to the greater fiscal pressure on households, real wages will rise and recover part of their lost purchasing power. This catch-up in purchasing power will help to generate growth, but will limit the impact on employment and productivity gains.




Revising the multipliers and revising the forecasts – From talk to action?

By Bruno Ducoudré

Following on the heels of the IMF and the European Commission (EC), the OECD has also recently made a downward revision in its forecast for GDP growth in the euro zone in 2012 (-0.4%, against -0.1% in April 2012) and in 2013 (0.1%, against 0.9% in April 2012). In its latest forecasting exercise, the OECD says it now shares with the other international institutions (the IMF [i] and EC [ii]) the idea that the multipliers are currently high in the euro zone [iii]: the simultaneous implementation of fiscal austerity throughout the euro zone while the economy is already in trouble, combined with a European Central Bank that has very little leeway to cut its key interest rate further, is increasing the impact of the ongoing fiscal consolidation on economic activity.

The revision of the positioning of the three institutions poses two questions:

  • – What are the main factors leading to the revision of the growth forecasts? Given the scale of the austerity measures being enacted in the euro zone, we can expect the revision of the fiscal impulses to be a major determinant of the revisions to the growth forecasts. Such revisions are, for example, the main factor explaining the OFCE’s revisions to its growth forecasts for France in 2012.
  • – Is this change in discourse concretely reflected in an upward revision of the multipliers used in the forecasting exercises? These institutions do not generally specify the size of the multipliers used in their forecasting. An analysis of the revisions to the forecasts for the euro zone in 2012 and 2013 can, however, tell us the extent to which the multipliers have been revised upwards.

The following graph shows that, between the forecast made in April of year N-1 and the latest available forecast for year N, the three institutions have revised their growth forecasts for the euro zone sharply downward: by 2.3 points on average for 2012 and 0.9 point on average for 2013.

At the same time, the fiscal impulses have also been revised: from -0.6 GDP point (OECD) to -0.8 GDP point (IMF) for 2012, and from 0.8 point (Commission) to +0.2 point (OECD) for 2013, revisions which explain part of the change in the growth forecasts for these two years.

Comparatively speaking, for 2012 the OFCE is the institution that revised its growth forecast the least, but the one that changed its forecast of the fiscal impulse the most (-1.7 GDP points forecast in October 2012, against -0.5 GDP point forecast in April 2011, a revision of -1.2 points). In contrast, for 2013 the revision of the growth forecast is similar across institutions, but the revisions of the impulses differ widely. These differences may thus arise in part from revisions of the multipliers.

 

The revisions of the growth forecasts, denoted Δg, can be broken down into several terms (a formal sketch follows the list):

  • – A revision of the fiscal impulse IB, denoted ΔIB;
  • – A revision of the multiplier k, denoted Δk, where k0 is the initial multiplier and k1 the revised multiplier;
  • – A revision of the spontaneous growth in the euro zone (excluding the impact of fiscal policy), of fiscal impulses outside the euro zone, etc., denoted Δe.
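
Writing forecast growth as spontaneous growth plus the multiplier applied to the fiscal impulse, g = k·IB + e (a bookkeeping convention we adopt here; the post does not spell out the exact decomposition it uses), the revision breaks down exactly as

    \Delta g = g_1 - g_0 = k_1 IB_1 - k_0 IB_0 + \Delta e = k_0 \Delta IB + IB_1 \Delta k + \Delta e

so that the three terms listed above correspond to k0·ΔIB (impulse revision), IB1·Δk (multiplier revision) and Δe (everything else).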

The revision of the OFCE forecast by -1.5 points for 2012 between April 2011 and October 2012 breaks down as follows: ‑1.3 points from the revision of the fiscal impulses and ‑0.3 point from the upward revision of the multiplier (see table). The other sources of revision together add 0.1 percentage point of growth for 2012 compared with the forecast made in April 2011. In contrast, the revision for 2013 is due mainly to the increase in the size of the multiplier.

For the international institutions, not all of these elements (the size of the multiplier, spontaneous growth, etc.) are known to us; only the fiscal impulses are. A number of polar cases can nevertheless be used to infer an interval for the multipliers used in the forecasts. If, in addition, the revisions of the growth forecasts stem mainly from revisions of the fiscal impulse and of the size of the multiplier, it can be assumed as a first approximation that Δe = 0. We can then calculate the implied multiplier in the case where the entire revision is attributed to the revision of the fiscal impulses, and in the case where the revision is split between the revision of the multiplier and the revision of the impulse.

Attributing the entirety of the revisions of the forecasts for 2012 to the revision of the impulses would imply very high initial multipliers, on the order of 2.5 for the IMF to 4.3 for the OECD (see table), which is not consistent with the IMF analysis (which puts the current multiplier at between 0.9 and 1.7). On the other hand, the order of magnitude of the multipliers inferred for the IMF (1.4) and the Commission (1.1) for 2013 seems closer to the consensus in the recent literature on the size of multipliers.

The hypothesis could also be made that, in the recent past, the Commission, the OECD and the IMF relied on multipliers derived from DSGE models, which are generally low, on the order of 0.5 [1]. Adopting this value for the first forecasting exercise (April 2011 for the year 2012 and April 2012 for 2013), we can calculate the implicit multiplier such that the entire revision breaks down between the revision of the impulse and the revision of the multiplier. This multiplier would then be between 2.8 (OECD) and 3.6 (EC) for the year 2012, and between 1.3 (OECD and IMF) and 2.8 (EC) for 2013.
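
As an illustration of the two polar cases, here is a minimal sketch under the accounting convention introduced above (with Δe = 0); the numerical inputs are placeholders chosen only to show the mechanics, not the institutions' actual forecast data.

    # Back-of-the-envelope inference of implied multipliers from forecast revisions.
    # Convention (our assumption): g = k * IB + e, so with delta_e = 0,
    #   delta_g = k0 * delta_ib + ib1 * delta_k.

    def multiplier_impulse_only(delta_g, delta_ib):
        # Polar case 1: the whole revision is attributed to the revised impulse
        # (delta_k = 0), hence k = delta_g / delta_ib.
        return delta_g / delta_ib

    def multiplier_with_initial_k(delta_g, delta_ib, ib1, k0=0.5):
        # Polar case 2: start from a low DSGE-style multiplier k0 and let the rest
        # of the revision load onto the multiplier: k1 = k0 + (delta_g - k0 * delta_ib) / ib1.
        return k0 + (delta_g - k0 * delta_ib) / ib1

    # Placeholder inputs (GDP points): a growth revision of -2.6 and an impulse revision of -0.6
    print(multiplier_impulse_only(-2.6, -0.6))              # about 4.3, the order quoted for the OECD in 2012
    print(multiplier_with_initial_k(-2.6, -0.6, ib1=-1.5))  # order of magnitude only; ib1 is hypothetical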

The revisions of the forecasts for 2012 therefore do not stem primarily from a joint revision of the fiscal impulses and the size of the multipliers: a significant proportion of the growth revisions also comes from a downward revision of spontaneous growth. If the final multiplier is assumed to be 1.3 (the mid-point of the range estimated by the IMF), the revision of spontaneous growth in the euro zone accounts for more than 50% of the revision of the forecast for the euro zone in 2012, which reflects the optimistic bias common to the Commission, the OECD and the IMF. In comparison, the revision of spontaneous growth accounts for less than 10% of the revision of the OFCE forecast for 2012.

On the other hand, the size of the multipliers inferred from the revisions of the forecasts for 2013 appears to accord with the range calculated by the IMF – on the order of 1.1 for the Commission, 1.3 for the OECD and 1.3 to 1.4 for the IMF. The revisions of the growth forecasts for 2013 can therefore be explained mainly by the revision of the fiscal impulses planned and the increase in the multipliers used. In this sense, the controversy over the size of the multipliers is indeed reflected in an increase in the size of the multipliers used in the forecasting of the major international institutions.


[1] See, for example, European Commission (2012): “Report on public finances in EMU”, European Economy no. 2012/4. More precisely, the multiplier from the European Commission’s QUEST model is about 1 in the first year for a permanent shock to public investment or civil servant pay, 0.5 for other public expenditure, and less than 0.4 for taxes and transfers.


[i] See, for example, page 41 of the World Economic Outlook of the IMF from October 2012: “The main finding … is that the multipliers used in generating growth forecasts have been systematically too low since the start of the Great Recession, by 0.4 to 1.2, depending on the forecast source and the specifics of the estimation approach. Informal evidence suggests that the multipliers implicitly used to generate these forecasts are about 0.5. So actual multipliers may be higher, in the range of 0.9 to 1.7.”

[ii] See, for example, page 115 of the European Commission’s Report on Public finances in EMU: “In addition, there is a growing understanding that fiscal multipliers are non-linear and become larger in crisis periods because of the increase in aggregate uncertainty about aggregate demand and credit conditions, which therefore cannot be insured by any economic agent, of the presence of slack in the economy, of the larger share of consumers that are liquidity constrained, and of the more accommodative stance of monetary policy. Recent empirical works on US, Italy, Germany and France confirm this finding. It is thus reasonable to assume that in the present juncture, with most of the developed economies undergoing consolidations, and in the presence of tensions in the financial markets and high uncertainty, the multipliers for composition-balanced permanent consolidations are higher than normal.”

[iii] See, for example, page 20 of the OECD Economic Outlook from November 2012: “The size of the drag reflects the spillovers that arise from simultaneous consolidation in many countries, especially in the euro area, increasing standard fiscal multipliers by around a third according to model simulations, and the limited scope for monetary policy to react, possibly increasing the multipliers by an additional one-third.”