Inequality and macroeconomic models

By Stéphane Auray and Aurélien Eyquem

“All models are wrong, some are useful.” This quote from George Box has often been used to justify the simplistic assumptions made in macroeconomic models. One of these has long been criticised: the fact that the behaviour of households, although differing (heterogeneous) in their individual characteristics (age, profession, gender, income, wealth, state of health, labour market status), can be approximated at the macroeconomic level by that of a so-called “representative” agent. This assumption of a representative agent means considering that the heterogeneity of agents and the resulting inequalities are of little importance for aggregate fluctuations.



Economists are not blind – they are well aware that households, companies and banks are not all identical. Many studies have looked at the effects of household heterogeneity on aggregate savings and, consequently, on macroeconomic fluctuations[1]. Others have proposed so-called “overlapping generations” models in which age plays an important role[2].

Most often, households in these models move from one state to another (from employment to unemployment, from one level of skills and therefore of income to another, from one age to another), and the transition probabilities are known. In the absence of insurance mechanisms (unemployment, redistribution, health), the risk of such transitions translates into income or health risk, which leads agents to save in order to insure themselves. Furthermore, differences in savings and consumption behaviour are also likely to lead to differences in labour supply behaviour. Finally, changes in the macroeconomic environment (changes in the unemployment rate, interest rates, wages, taxes and contributions, public spending, insurance schemes) potentially affect these individual probabilities and the resulting microeconomic behaviour. Aggregate risks therefore affect each household differently, depending on its characteristics, generating general equilibrium and redistributive effects. However, this relatively old line of work has come up against two obstacles.
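To fix ideas, the sketch below simulates the simplest possible version of this mechanism: a two-state employment process whose transition probabilities and replacement rate are purely illustrative assumptions, in which the stationary distribution of the chain delivers the income risk that motivates precautionary saving.

```python
import numpy as np

# Stylized two-state income process: employed (wage) vs. unemployed (benefit).
# All numbers are assumptions chosen for illustration only.
P = np.array([[0.95, 0.05],    # employed   -> {employed, unemployed}
              [0.40, 0.60]])   # unemployed -> {employed, unemployed}
income = np.array([1.0, 0.3])  # wage = 1.0, benefit = 0.3 (30% replacement rate)

# Stationary distribution: left eigenvector of P for eigenvalue 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

mean_income = pi @ income
income_risk = np.sqrt(pi @ (income - mean_income) ** 2)
print(f"stationary unemployment rate: {pi[1]:.3f}")
print(f"expected income: {mean_income:.3f}, income risk (std): {income_risk:.3f}")
# A more generous benefit (insurance) shrinks income_risk, and with it the
# precautionary motive to save described in the text.
```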

The first is technical: tracking the evolution of the distribution of agents over time is mathematically complex. It is of course possible to reduce the extent of the heterogeneity by limiting ourselves to two agents (or two types of agent): those with access to the financial markets and those who are forced to consume their income in each period[3], working people and pensioners, etc. But while these simplified models make it possible to understand and validate broad intuitions, they remain limited, particularly from an empirical point of view. They do not, for example, allow us to carry out a realistic study of changes in inequality across the entire distribution of income or wealth.

The second obstacle is more profound: several of these studies concluded that models with heterogeneous agents, although much more complex to handle, did not perform significantly better than models with representative agents in terms of aggregate macroeconomic validation (Krusell and Smith, 1998). Admittedly, they were not aiming to study changes in inequality or their macroeconomic impact, but rather the contribution of agent heterogeneity to aggregate dynamics. In fact, the subject of inequality has long been considered almost or fully orthogonal to macroeconomic analysis (at least where fluctuations are concerned), falling instead within the remit of labour economics, microeconomics or collective choice theory. As a result, heterogeneous agent models long suffered from the image of being an unnecessarily complex detour in the macroeconomic analysis of fluctuations.

In recent years, these models have undergone an exceptional revival, to the point where they seem to be becoming the standard for macroeconomic analysis. The first obstacle has been overcome by an exponential increase in the computing power used to solve and simulate these models, combined with the development of powerful mathematical tools that make them easier to solve (Achdou et al., 2022). The second obstacle has been overcome by the three-pronged movement we describe below: the growing body of work (particularly empirical work) demonstrating the importance of income and wealth inequalities – over and above their intrinsic interest – for issues typically addressed by macroeconomics; the development of tools for measuring inequalities that make it possible to reconcile them with macroeconomic analysis; and the refinement of the assumptions made in models with heterogeneous agents.

First, numerous empirical studies show that precautionary saving plays a major role in macroeconomic fluctuations (Gourinchas and Parker, 2001). But precautionary saving and the sensitivity of saving (and household spending) to income are not identical across households. Indeed, empirical work suggests that the aggregate marginal propensity to consume (MPC) lies between 15% and 25% (Jappelli and Pistaferri, 2010), and that the MPC of a large proportion of the population is higher than the MPC obtained in representative agent models. In the latter, the MPC is approximately equal to the real interest rate – the representative agent behaving like a household at the top of the wealth distribution – and is therefore much lower than the empirical estimates (see Kaplan and Violante, 2022). It is therefore critical to understand the origin of a high aggregate MPC on solid microeconomic foundations, particularly if we wish to carry out a realistic study of the impact of macroeconomic policies (monetary, fiscal, etc.) that rely on multiplier effects linked to the distribution of MPCs.
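The gap can be made concrete with the textbook permanent-income benchmark – a stylized calculation, not a model from the papers cited. A household consuming the annuity value of its wealth at real interest rate r spends only the interest on a windfall:

```latex
c_t \;=\; \frac{r}{1+r}\left(a_t \;+\; \sum_{k=0}^{\infty}\frac{\mathbb{E}_t\, y_{t+k}}{(1+r)^k}\right)
\qquad\Longrightarrow\qquad
\frac{\partial c_t}{\partial a_t} \;=\; \frac{r}{1+r}\;\approx\; r .
```

With r around 2% per year, the implied MPC out of a transitory windfall is about 0.02 – an order of magnitude below the 15–25% measured in the data.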

In recent years, an abundant and increasingly well-developed empirical literature has been dealing with issues relating to income inequality. Following the seminal article by Atkinson (1970) and more recent developments[4], we now have long data series that measure income inequality before and after tax, as well as wealth inequality, across the entire household distribution for a large number of countries. In particular, what are known as Distributional National Accounts make it possible to compare in great detail the predictions of heterogeneous-agent macroeconomic models with microeconomic data that are fully consistent with the framework of macroeconomic analysis.

Finally, the heterogeneous agent models themselves have evolved. The “first generation” models generally considered a single asset (physical capital, in other words, company shares) and prevented agents from taking on debt, which led them to save for precautionary reasons. These assumptions could not explain why MPCs were high, and they failed to correctly replicate the observed distribution of income and, above all, of wealth. In reality, households have access to several assets (liquid savings, housing, equities), and the composition of their wealth differs greatly depending on their level of wealth: households generally start saving in liquid form, then invest their savings in property by taking out bank loans, and finally – only those with the greatest wealth, above the 60th percentile of the wealth distribution – diversify their savings by buying shares (Auray, Eyquem, Goupille-Lebret and Garbinti, 2023). As a result, a large proportion of the population ends up in debt in order to build up property wealth that is not very liquid. Although they have high incomes, many such households consume almost all their income, which reduces their capacity for self-insurance through savings. This increases their MPC (and therefore the aggregate MPC), in line with empirical observations (Kaplan, Violante and Weidner, 2014).

Macroeconomists can now fully integrate the analysis of inequalities in income, wealth and health into models based on more realistic microeconomic behaviour. They can re-examine the consensus reached on the conduct of monetary[5] or fiscal[6] policies and examine their redistributive effects. They are also in a position to quantify the aggregate and redistributive effects of trade or environmental policies, which are or will be at the heart of their political acceptability – giving rise to new horizons for less wrong, more useful models.

[1] See in particular Bewley (1977), Campbell and Mankiw (1991), Aiyagari (1994), Krusell and Smith (1998), Castaneda, Diaz-Gimenez and Rios-Rull (1998).

[2] See the work of Allais (1947) and Samuelson (1958), and among others De Nardi (2004).

[3] See Campbell and Mankiw (1989); Bilbiie and Straub (2004); Gali, Lopez-Salido and Valles (2007).

[4] See (2001, 2003), Piketty and Saez (2003, 2006), Atkinson, Piketty and Saez (2011), Piketty, Saez and Zucman (2018) and Alvaredo et al. (2020).

[5] Kaplan, Moll and Violante (2018); Auclert (2019); Le Grand, Martin-Baillon and Ragot (2023).

[6] Heathcote (2005); Le Grand and Ragot (2022); Bayer, Born and Luetticke (2020).   




Jean-Paul Fitoussi, brilliant economist and public intellectual, by Xavier Ragot

Born on 19 August 1942 in La Goulette (Tunisia), died on 15 April 2022 in Paris

The economist Jean-Paul Fitoussi passed away on 15 April in Paris. He began his career as a professor at the University of Strasbourg and then at the European University Institute in Florence, before joining Sciences Po and serving as President of the Observatoire Français des Conjonctures Économiques (OFCE) from 1989 to 2010. An Officer of the Legion of Honour and Doctor Honoris Causa of many universities, Jean-Paul Fitoussi saw his work recognised by numerous international prizes. He contributed to institutions throughout France and in Italy, where he also taught and commanded widespread respect.



Jean-Paul Fitoussi was a great economist but also a public intellectual. He understood that our economies generate serious instabilities. High inflation in the 1970s, mass unemployment in the 1980s, high interest rates in the 1990s due to convergence on the euro, the financial crisis of 2008, the Covid pandemic, and then the current geopolitical and energy crisis: economic instability is the norm, hitting the most vulnerable, and public intervention must be a constant. Capitalism is not a stable system in which the only things politicians change are technical parameters, such as taxes or the design of the pension system. It requires constant intervention through fiscal and monetary policy, adapting policy instruments again and again. His most recent reflections concerned how the rise in inflation and energy prices since the invasion of Ukraine would affect the poorest households. How can energy dependency be reduced without penalising the most vulnerable?

Jean-Paul Fitoussi was able to draw out the implications for European construction. Economic governance cannot be built by means of economic rules: the criteria of a 3% public deficit and 60% public debt, in addition to being arbitrary, distract from the imbalances that are accumulating outside the State budget. What is needed is not uniform rules but a place for debate to identify imbalances and anticipate future crises, a forum for European sovereignty. For Jean-Paul Fitoussi, the role of European sovereignty is not to fuel confrontation but to ensure coordination and management of the economic exception.

Yet the aim of this economic coordination cannot be to maximise growth without concern for inequality or sustainability; it must be to contribute to the common good. Here the intellectual strength of Jean-Paul Fitoussi meets the modesty of the economist. It is not for the economist to decide what an economy means for society, but for democracy to choose the desirable futures. Jean-Paul Fitoussi’s contributions therefore focused on the definition and measurement of well-being. As part of the Stiglitz-Sen-Fitoussi Commission, from 2009 onwards he contributed to broadening the measures of economic progress beyond GDP growth alone.

But Jean-Paul Fitoussi was also a builder, concerned with participating in the life of the city. He became President of the OFCE in 1989 and directed the Institute for 20 years, establishing the OFCE as an internationally recognised centre. All those who worked with him can testify to his kindness, his attention to others, and his sense of humour. His concern for others was no mere intellectual attitude. For 20 years he was Secretary General of the International Economic Association, participating in international reflections with Arrow, Sen, Phelps and Solow, all Nobel Prize winners – and his friends.

Finally, Jean-Paul Fitoussi was a great architect of Sciences Po and contributed to developing the institution in many ways. He helped to open it up socially and to create its economics department. The relevance of his ideas and his sense of pedagogy gave him a special place in the public debate. Consulted by one government after another, he was never stingy with his time when it came to explaining economic policy issues, to students as well as to Presidents of the Republic.

Jean-Paul Fitoussi leaves us at a time when we are most in need of his thinking. Because of his conception of the role of the economist in the city, his attention to crises and to the economic difficulties of society’s most vulnerable, Jean-Paul Fitoussi can be described as Keynesian. This is accurate but reductive. We need to broaden the focus and present him better: an honest man and a great economist.

Xavier Ragot




The transmission of monetary policy: The constraints on real estate loans are significant!

By Fergus Cumming (Bank of England) and Paul Hubert (Sciences Po – OFCE, France)

Does the transmission of monetary policy depend on the state of consumers’ debt? In this post, we show that changes in interest rates have a greater impact when a large share of households face financial constraints, i.e. when households are close to their borrowing limits. We also find that the overall impact of monetary policy depends in part on the dynamics of real estate prices and may not be symmetrical for increases and decreases in interest rates.



From the micro to the macro

In a recent article, we use home loan data from the United Kingdom to build a detailed measure of the proportion of households that are close to their borrowing limits, based on the ratio of mortgage levels to incomes. This mortgage data allows us to obtain a clear picture of the various factors that motivated people’s decisions about real estate loans between 2005 and 2017. After eliminating effects due to regulation, bank behaviour, geography and other macroeconomic developments, we estimate the relative share of highly indebted households to build a measure that can be compared over time. To do this, we combine the information gathered for 11 million mortgages into a single time series, thus allowing us to explore the issue of the transmission of monetary policy.

We use the time variation in this debt variable to explore whether and how the effects of monetary policy depend on the share of people who are financially constrained. We focus on the response of consumption in particular. Intuitively, we know that a restrictive monetary policy leads to a decline in consumption in the short to medium term, which is why central banks raise interest rates when the economy is overheating. The point is to understand whether this result changes according to the share of households that are financially constrained.

Monetary policy contingent on credit constraints

We find that monetary policy is more effective when a large portion of households have taken on high levels of debt. In the graph below, we show how the consumption of non-durable goods, durable goods and total goods responds to raising the key interest rate by one percentage point. The grey (respectively blue) bands represent the response of consumption when there is a large (respectively small) proportion of people close to their borrowing limits. The differences between the blue and grey bands suggest that monetary policy has greater strength when the share of heavily indebted households is high.

It is likely that at least two mechanisms lie behind this differentiated effect. First, in an economy where rates are partly variable[1], when the amount borrowed by households increases relative to their income, the mechanical effect of monetary policy on disposable income is amplified: people with large loans are penalised by the increase in their monthly loan payments in the event of a rate hike, which reduces their purchasing power and thus their consumption. As a result, the greater the share of heavily indebted agents, the greater the aggregate impact on consumption. Second, households close to their borrowing limits are likely to spend a greater proportion of their income (they have a higher marginal propensity to consume). Put another way, the greater the portion of your income you have to spend on paying down your debt, the more your consumption depends on your income, and the greater the impact on your consumption of an income change brought about by monetary policy. Interestingly, we find that our results are driven more by the distribution of highly indebted households than by an overall increase in borrowing.
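A back-of-the-envelope calculation makes the first mechanism concrete. The sketch below uses a standard annuity formula; the loan sizes, income and rates are assumptions chosen for illustration, not figures from the article.

```python
# Effect of a 1pp rise in a variable mortgage rate on monthly payments,
# for a modestly and a highly indebted household (illustrative numbers).

def monthly_payment(principal, annual_rate, years=25):
    """Standard annuity formula with monthly compounding."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

monthly_income = 3000.0
for loan in (90_000, 270_000):           # 2.5x vs 7.5x annual income
    before = monthly_payment(loan, 0.03)
    after = monthly_payment(loan, 0.04)  # +1pp passed through to the rate
    hit = (after - before) / monthly_income
    print(f"loan {loan:>7}: payment {before:6.0f} -> {after:6.0f}, "
          f"lost disposable income: {hit:.1%} of monthly income")
```

With these assumed numbers, the same one-point rise costs the highly indebted household roughly three times as much disposable income, which is the amplification described above.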

Our results also indicate some asymmetry in the transmission of monetary policy. When the share of constrained households is large, interest rate increases have a greater impact (in absolute terms) than interest rate cuts. This is not completely surprising. When your income comes very close to your spending, running out of money is very different from receiving a small additional windfall.

Our results also suggest that changes in real estate prices have significant effects. When house prices rise, homeowners feel richer and are able to refinance their loans more easily in order to free up funds for other spending. This may offset some of the contractionary effects of an interest rate rise. On the other hand, when house prices fall, an interest rate hike exacerbates the contractionary impact on the economy, rendering monetary policy very powerful.

Implications for economic policy

We show that the state of consumers’ debt may account for some of the change in the effectiveness of monetary policy during the economic cycle. However, it should be kept in mind that macro-prudential policy makers can influence the distribution of debt in the economy. Our results thus suggest that there is a strong interaction between monetary policy and macro-prudential policy.


[1] Which is the case in the United Kingdom.




The OFCE optimistic about growth – “As usual”?

By Magali Dauvin and Hervé Péléraux

In the spring of 2019, the OFCE forecast real GDP growth of 1.5% for 2019 and 1.4% for 2020 (i.e. cumulative growth of 2.9%). At the same time, the average forecast for the two years compiled by Consensus Forecasts[1] was 1.3% each year (i.e. 2.6% cumulative), with a standard deviation around the average of 0.2 points. This difference has led some observers to describe the OFCE forecasts as “optimistic as usual”, with the forecasts of the Consensus or institutes with less favourable projections being considered more “realistic” in the current economic cycle.

A growth forecast is the result of a research exercise and is based on an assessment of general trends in the economy together with the impact of economic policies (including budget, fiscal and monetary policies) and exogenous shocks (such as changes in oil prices, social disturbances, poor weather, geopolitical tensions, etc.). These evaluations are themselves based on econometric estimations of the behaviour of economic agents that are used to quantify their response to these shocks. It is therefore difficult to comment on or compare the growth figures issued by different institutes without clearly presenting their analytical underpinnings or going into the main assumptions about the trends and mechanisms at work in the economy.

However, even if the rigour of the approach underlying the OFCE’s forecasts cannot be called into question, it is legitimate to ask whether the OFCE has indeed produced chronic overestimations in its evaluations. If such were the case, the forecasts published in spring 2019 would be tainted by an optimistic bias that needs to be tempered, and the OFCE should readjust its tools to a new context in order to regain precision in its forecasts.

No systematic overestimation

Figure 1 shows the OFCE’s cumulative forecasts of French GDP growth for the current year and the following year, compared with the cumulative growth recorded in the national accounts for the same two years. In light of these results, the OFCE’s forecasts do not appear to suffer from a systematic optimism bias. For the forecasts made in 2016 and 2017, the growth measured by the national accounts is higher than that anticipated by the OFCE, which, while revealing a forecasting error, does not constitute an overly optimistic view of the recovery.

The opposite can be seen in the forecasts in 2015 for 2015 and 2016; the favourable impact of the oil counter-shock and of the euro’s depreciation against the dollar during the second half of 2014 was indeed slower to materialize than the OFCE expected. The year 2016 was also marked by one-off factors such as spring floods, strikes in refineries, the tense environment created by the wave of terrorist attacks and the announcement that certain tax depreciation allowances for industrial investments would end.

In general, there is no systematic overestimation of growth by the OFCE, although some periods are worth noting, such as the years 2007 and 2008, when the negative repercussions of the financial crisis on real activity were missed by our models in four consecutive forecasts. Ultimately, of the 38 forecasts conducted since March 1999, 16 show an overestimation – around 42% of the total – with the others resulting in an underestimation of growth.

[Figure 1]

Forecasts relatively in line with the final accounts

Furthermore, the accuracy of the forecasts should not be evaluated solely against the provisional national accounts, as INSEE’s initial estimates are based on partial knowledge of the real economic situation. They are revised as the annual accounts are constructed and tax and social data become available, leading to a final, definitive version of the accounts two-and-a-half years after the end of the year[2].

Table 1 compares the forecasts made by the OFCE and the participating institutions in the spring of each year for the current year and assesses their respective errors, first against the provisional accounts and then against the revised accounts. On average since 1999, the OFCE’s forecasts have overestimated the provisional accounts by 0.25 points. The forecasts from the Consensus appear more precise, with an error of 0.15 points against the provisional accounts. On the other hand, compared to the definitive accounts, the OFCE’s forecasts appear to be right on target (the overestimation disappears), while those from the Consensus ultimately underestimate growth by an average of 0.1 points.
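The bias statistics discussed here are simply mean forecast errors taken against each vintage of the accounts. Below is a minimal sketch of the computation, with made-up numbers rather than the actual series behind Table 1:

```python
import numpy as np

# error = forecast - outturn; a positive mean error means overestimation.
forecast    = np.array([1.6, 1.4, 1.8, 1.1])  # illustrative spring forecasts
provisional = np.array([1.3, 1.2, 1.6, 0.9])  # first published accounts
final       = np.array([1.6, 1.5, 1.7, 1.2])  # accounts after all revisions

print("bias vs provisional accounts:", round(np.mean(forecast - provisional), 2))
print("bias vs final accounts:      ", round(np.mean(forecast - final), 2))
```

In this toy example, the bias is positive against the provisional accounts but roughly zero against the final ones, mirroring the pattern described in the text.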

Statistical analysis conducted over a long period thus shows that, while there is room for improvement, the OFCE’s forecasts are not affected by an overestimation bias when assessing their accuracy with respect to the final accounts.

[Table 1]

 

[1] The Consensus Forecast is a publication of Consensus Economics that compiles the forecasts of the world’s leading forecasters on a large number of economic variables in about 100 countries. About 20 institutes participate for France.

[2] At the end of January 2019, INSEE published the accounts for the fourth quarter of 2018, which provided a first assessment of growth for 2018 as a whole. At the end of May 2019, the accounts for 2018, based on the provisional annual accounts published in mid-May 2019, were revised a first time. A new revision of the 2018 accounts will take place in May 2020, followed by a final one in 2021 with the publication of the definitive accounts. For more details on the national accounts revision process, see Péléraux H., « Comptes nationaux : du provisoire qui ne dure pas » [The national accounts: provisional figures that don’t last], Blog de l’OFCE, 28 June 2018.

 




The dilemmas of immaterial capitalism

By Sarah Guillou

A review of: Jonathan Haskel and Stian Westlake, Capitalism Without Capital. The Rise of the Intangible Economy, Princeton University Press, 2017, 288 pp.

This book is at the crossroads of the debate about the nature of current and future growth. The increasing role of intangible assets is indeed at the heart of questions about productivity gains, the jobs of tomorrow, rising inequality, corporate taxation and the source of future incomes.

This is not simply the umpteenth book on the new economy or on future technological breakthroughs, but more fundamentally a book on the rupture created by modes of production that are less and less based on fixed, material capital and increasingly on intangible assets. Digressions on an immaterial society are not new; the value of the book is that it gives this idea real economic content and synthesizes the research showing the economic upheavals arising from the increasing role of this type of capital.

Jonathan Haskel and Stian Westlake describe the changes brought about by the growth in the share of immaterial assets in the 21st century economy, including in terms of the measurement of growth, the dynamics of inequality, and the ways in which companies are run, the economy is financed and public growth policies are set. While the authors do not set themselves the goal of building a new theory of value, they nevertheless provide evidence that it does need to be reconstructed. This is based in particular on the construction of a database – INTAN-invest – as part of a programme financed by the European Commission and initiated by the American studies of Corrado, Hulten and Sichel (2005, 2009).

By immaterial assets is meant the immaterial elements of an economic activity that generate value over more than one period: a trademark, a patent, a copyright, a design, a mode of organization or production, a manufacturing process, a computer program or algorithm that creates information, but also a reputation or a marketing innovation, or even the quality and/or specific features of staff training. These are assets in the full sense of the term: they add to a company’s balance sheet, they can depreciate over time, and they result from the consumption of resources – in other words, from immaterial or intangible investment. There is a broad consensus on the importance of these assets in explaining the prices of the goods and services we consume and in determining the non-price competitiveness of products. These assets are determining elements of “added value”.

However, despite this consensus, the measurement of intangible assets is far from commensurate with their importance. Yet measuring assets improperly leads to many statistical distortions: in the measurement of growth, because investment increases GDP; in the measurement of productivity, because capital and added value are poorly measured; and in profits, and perhaps also the distribution of added value, if intangible capital is booked as expenditure rather than investment. The authors show in particular that the increasing importance of intangible assets can explain the four stylized facts invoked in the secular stagnation debate. First, the slowdown in productivity could be the result of an incorrect valuation of intangible added value. Second, the gap between companies’ profits and their book value could be explained by an incomplete accounting of intangible assets that underestimates capital; the same incompleteness could explain, third, the slowdown in measured investment despite very low interest rates. Finally, the increase in inequalities in productivity and profits between firms results from the characteristics of intangible assets, which polarize profits and are associated with significant returns to scale.

Awareness of the measurement problem is not recent. The authors recall the major initiatives that brought experts together to deal with the measurement of intangible assets, from the drafting of the Frascati Manual (1963, 2015), which laid the foundations for the accounting of R&D activity, up to the latest reform of the systems of national accounts (SNA 2008), which added R&D to gross fixed capital formation (GFCF). But even today it is not possible to account for all intangible assets. This is due in part to a persistent reluctance in corporate accounting to integrate intangible capital insofar as it has no market price. So while it is simple to book the purchase of a patent as an asset, it is much more difficult to value the development of an algorithm within a company, or to give a value to the way it is organized, to innovative manufacturing processes, or to its internal training efforts. Only when something is traded on a market does it acquire an external value that can be recorded, unhesitatingly, on the asset side of the balance sheet.

Nevertheless, the challenge in measuring this is fundamental if we believe the rest of the book. Indeed, the increasing immateriality of capital has consequences for inequalities (Chapter 6), for institutions and infrastructure (Chapter 7), for financing the economy (Chapter 8), for private governance (Chapter 9) and for public governance (Chapter 10).

The stakes here are critical because of the specific characteristics of these immaterial assets, which are summarized in the “four S’s” (Chapter 2): scalability, sunkenness, spillovers and synergies. This means, first, that immaterial assets can be deployed on a large production scale without depreciating (scalability). Second, they are associated with irrecoverable expenses: once the investment has been made, it is difficult for the company to sell the asset on a secondary market, so there is no turning back (sunkenness). Next, these assets generate “spillovers”: they spread beyond their owners. Finally, they combine easily, creating “synergies” that increase profitability.

These characteristics imply a modification of the functioning of capitalism, which we are all already witnessing: they give a premium to the winners, they exacerbate the differences between the holders of certain intangible assets and those engaged in more traditional activities, they polarize economic activity in large urban centres, and they overvalue the talents of managers capable of orchestrating synergies between immaterial assets.

At the same time, the prevalence of these assets requires modified public policies. This concerns, first, the protection of the property rights over these intangible assets, which are intellectual in nature and difficult to fully appropriate due to their volatility. Even though intellectual property rights have long been established, they now face two challenges: universality (many countries apply them only sparingly) and balance (they should not create barriers so complex that new innovators cannot enter, while remaining protective enough to allow the fruits of investments to be harvested). Moreover, spillover effects need to be promoted by ensuring a balance in the development of cities and the interactions between individuals, while also creating incentives for the financing of intangible investments. Bank financing, which is based on tangible guarantees, is not well suited to the new intangible economy, especially as it benefits from tax advantages through the deduction of interest from taxable income. It is therefore important to develop financing based on issuing shares and to develop public co-financing. More generally, the public policy best suited to the intangible economy involves creating certainty, stability and confidence, in order to deal with the intrinsic uncertainty of risky intangible investments.

What emerges from this reading is a clear awareness of the need to promote the development of investment in immaterial assets, but also a demonstration that the growing immateriality of capital is giving rise to forces driving inequality. This duality can prove problematic.

More specifically, three dilemmas are identified. The first concerns the way intangible investments are financed. The highly risky nature of intangible investments – irrecoverable, collateral-free and with an uncertain return – calls for investors to take advantage of diversification and dispersal. And yet, as the authors show, what companies in this new economy need are investors who hold large, stable blocks of shares and are thus engaged in the company’s project. The second dilemma concerns state support. Such support is justified because intangible investments have a social return that goes beyond their private return, and, in the face of shortfalls in private financing, public financing is necessary. However, corporate taxation has not yet adapted to these new sources of wealth creation, and states face growing difficulties in raising taxes and identifying the taxable base. Furthermore, states are competing to attract businesses in the new economy through tax expenditures and subsidies. The third dilemma is undoubtedly the most fundamental. It involves the contradiction between, on the one hand, the inequalities caused by the rise of intangible capital – in the labour market (job polarization[1]), in the goods market (concentration) and geographically (geographical polarization) – and, on the other hand, the need for strong social cohesion, trust and liveable urban centres, which provide favourable terrain for the development of the synergies and exchanges that nourish intangible assets. In other words, the inequalities created erode social capital, which is detrimental to the future development of intangible assets.

It is in the resolution of these dilemmas that this new capitalism will be able to be reconciled with our democracies.

 

[1] See Gregory Verdugo: “The new labour inequalities. Why jobs are polarizing”, OFCE blog.

 




Which new path for raising labour productivity?

By Bruno Ducoudré and Eric Heyer

The industrialized countries have been experiencing what appears to be a persistent slowdown in the growth of labour productivity since the second oil shock. This has been the subject of a great deal of analysis in the economic literature[1], which considers the possible disappearance of the growth potential of the developed economies and, consequently, their inability to return to a level of activity in line with their pre-crisis trajectories. In other words, could the industrialized countries have entered a phase of “secular stagnation”, making it more difficult to reduce public and private debt? The exhaustion of productivity gains would also modify any diagnosis of their conjunctural situation, particularly as regards their labour markets.

Trend productivity gains are inherently unobservable; it is therefore necessary to decompose observed productivity into a trend component and a cyclical component linked to the more or less rapid adjustment of employment to changes in economic activity (the productivity cycle). In a recent study published in the Revue de l’OFCE, we seek to measure the slowdown in trend productivity gains and the productivity cycle in six major developed countries (Germany, Spain, the United States, France, Italy and the United Kingdom) using an econometric method – the Kalman filter – which allows the estimation of a labour demand equation based on explicit theoretical underpinnings together with the estimation of trend productivity gains.

After reviewing the various possible explanations for the slowdown described in the economic literature, we present the theoretical modelling of the labour demand equation and our empirical estimation strategy. This equation, derived from a CES-type production function[2], is based on the assumption of profit maximization by firms in monopolistic competition and on the assumption of a stable long-term capital-to-output ratio. This makes it possible to break down the trend and cyclical components in a single step, but makes productivity gains depend solely on labour[3].
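In stylized form (our notation, not necessarily the study’s exact specification), profit maximization under a CES technology with labour-augmenting technical progress Γ_t yields a labour demand equation of the type:

```latex
\ln L_t \;=\; \ln Y_t \;-\; \sigma \ln\!\left(\frac{W_t}{P_t}\right)
\;+\; (\sigma - 1)\,\ln \Gamma_t \;+\; \text{constant},
```

where σ is the elasticity of substitution between capital and labour. Trend productivity gains correspond to the unobserved growth of Γ_t, and the productivity cycle to the lagged adjustment of employment L_t towards this relation.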

The existing empirical studies usually rely on a log-linear estimate of the productivity trend and introduce trend breaks at fixed dates[4]. We propose an alternative method that consists of writing the employment equation in the form of a state-space model representing the underlying productivity trend. This model has the advantage of allowing a smoother depiction of trend productivity gains, since it does not rely on ad hoc break dates.
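As an indication of how such a state-space decomposition works in practice, here is a minimal sketch using the Kalman filter/smoother on simulated data; the specification below (local linear trend plus a stochastic cycle) and the statsmodels implementation are our illustrative choices, not the study’s exact model.

```python
import numpy as np
import statsmodels.api as sm

# Simulate log productivity whose trend growth slows halfway through the sample.
rng = np.random.default_rng(0)
T = 160  # quarters
slope = np.r_[np.full(80, 0.005), np.full(80, 0.002)]   # trend growth slows
trend = np.cumsum(slope + rng.normal(0, 1e-3, T))
cycle = 0.01 * np.sin(np.arange(T) / 6)
log_productivity = trend + cycle + rng.normal(0, 2e-3, T)

# Local linear trend + stochastic cycle, estimated by maximum likelihood
# with the Kalman filter and smoother.
model = sm.tsa.UnobservedComponents(log_productivity,
                                    level='local linear trend',
                                    cycle=True, stochastic_cycle=True)
res = model.fit(disp=False)

# The smoothed slope is the estimate of trend productivity gains per quarter;
# no break dates are imposed, so the slope evolves smoothly through the sample.
print(f"trend gain, start: {res.trend.smoothed[10]:.4f}, "
      f"end: {res.trend.smoothed[-1]:.4f}")
```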

We then evaluate the new growth path for labour productivity and the productivity cycle for the six countries considered. Our results confirm the slowdown in trend productivity gains (Figure 1).

[Figure 1]

The growth rate of trend productivity in five countries (France, Germany, Italy, the United States and the United Kingdom) has been slowly declining since the 1990s. Trend productivity growth, estimated at 1.5% per year in the United States in the 1980s, increased during the 1990s with the wave of new technologies, then gradually decreased to 0.9% at the end of the period. For France, Italy and Germany, the catch-up stopped during the 1990s (during the 2000s for Spain), even though the slowdown in trend productivity gains was briefly interrupted between the mid-1990s and the early 2000s. Leaving aside Italy, whose estimated trend productivity gains were zero at the end of the period, the trend growth rates converged to a range of 0.8% to 1% in annual trend productivity gains.

The estimated productivity cycles are shown in Figure 2. The fluctuations are greatest for France, Italy, Germany and the United Kingdom. A calculation of the average time for the adjustment of employment to demand indicates an adjustment period of 4 to 5 quarters for these countries. The cycle fluctuates much less for the United States and Spain, indicating that the speed of adjustment of employment to economic activity is faster in these two countries, which is confirmed by the average adjustment times (2 and 3 quarters respectively). Finally, the estimates indicate that, overall, the productivity cycle had closed in each of the countries considered by the second quarter of 2017.

[Figure 2]

[1] See, for example, A. Bergeaud, G. Cette and R. Lecat, 2016, “Productivity Trends in Advanced Countries between 1890 and 2012”, The Review of Income and Wealth, (62: 420-444), and N. Crafts and K. H. O’Rourke, 2013, “Twentieth Century Growth”, CEPR Discussion Papers.

[2] See C. Allard-Prigent, C. Audenis, K. Berger, N. Carnot, S. Duchêne and F. Pesin, 2002, “Présentation du modèle MESANGE”, French Ministère de l’Economie, des finances et de l’industrie, Forecasting Department, MINEFI, Working document.

[3] The equation for labour demand is based on a production function and an assumption of neutral technical progress in Harrod’s sense.

[4] See M. Cochard, G. Cornilleau and E. Heyer, 2010, “Les marchés du travail dans la crise” [Labour Markets in Crisis], Économie et Statistique, (438: 181-204) and B. Ducoudré and M. Plane, 2015, “Les demandes de facteurs de production en France” [The Demand for Production Factors in France], Revue de l’OFCE (142: 21-53).




The Janus-Faced Nature of Debt

by Mattia Guerini, Alessio Moneta, Mauro Napoletano, Andrea Roventini

The financial and economic crisis of 2008 was intimately intertwined with the dynamics of debt. As a matter of fact, research by Ng and Wright (2013) reports that all the U.S. recessions of the last thirty years had financial origins.

Figure 1 shows that both U.S. corporate (green line) and mortgage (blue line) debt grew steadily from the 1960s to the end of the century. In the 2000s, however, mortgage debt increased from around 60% to 100% of GDP in less than a decade. The situation became unsustainable in 2008 with the bursting of the subprime bubble. The trend in debt has changed since then: mortgage debt declined substantially, while the U.S. public debt-to-GDP ratio (red line) skyrocketed from 60% to slightly above 100% in less than 5 years as a consequence of the Great Recession.

[Figure 1]

This surge in public debt has raised concerns about the sustainability of public finances and, more generally, about the possible detrimental effects of public debt on economic growth. Some economists have indeed argued that there exists a 90% threshold after which public debt harms GDP growth (see Reinhart and Rogoff, 2010). Notwithstanding a large number of empirical studies contradicting this hypothesis (see Herndon et al., 2013 and Égert, 2015 as recent prominent examples), the debate is still open (see Ash et al., 2017 and Chudik et al., 2017).

We have contributed to this debate with a new empirical analysis, forthcoming in Macroeconomic Dynamics (see Guerini et al., 2017), that jointly investigates the impact of public and private debt on U.S. GDP dynamics. Our analysis keeps the a priori theoretical assumptions as minimal as possible by exploiting new statistical techniques that identify causal structures from the data under quite general conditions. In particular, we employ a causal search algorithm based on Independent Component Analysis (ICA) to identify the structural form of the cointegrated VAR and to solve the double causality issue.[1] This has allowed us to keep an “agnostic” perspective in the econometric analysis, avoiding restrictions on the model and thus “letting the data speak”.
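To give a flavour of the identification strategy, here is a minimal sketch of ICA-based shock identification on simulated data. It is a toy example under assumed parameters: the paper itself works with a cointegrated VAR and a causal search algorithm (see footnote 1), which this sketch does not reproduce.

```python
import numpy as np
from statsmodels.tsa.api import VAR
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
T = 400
structural_shocks = rng.laplace(size=(T, 2))    # non-Gaussian, as ICA requires
B = np.array([[1.0, 0.5],                       # assumed contemporaneous
              [0.2, 1.0]])                      # mixing of the two shocks
data = np.zeros((T, 2))
for t in range(1, T):
    data[t] = 0.5 * data[t - 1] + B @ structural_shocks[t]

# Reduced-form VAR residuals are linear mixtures of the structural shocks...
resid = VAR(data).fit(maxlags=2).resid

# ...which ICA can unmix by exploiting their non-Gaussianity. Identification
# holds only up to the permutation, sign and scale of the recovered shocks.
ica = FastICA(n_components=2, random_state=0)
recovered_shocks = ica.fit_transform(resid)
print("estimated mixing matrix (compare to B up to column order/scale):")
print(ica.mixing_)
```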

The results obtained suggest that public debt shocks positively and persistently affect output (see Figure 2, left panel).[2] In particular, our results provide evidence against the hypothesis that upsurges in public debt hamper GDP growth in the U.S. In fact, increases in public debt – possibly channeled through an increase in public spending on investment – crowd in private investment (see Figure 2, right panel), confirming some results already brought to the fore by Stiglitz (2012). This implies that government spending and, more generally, expansionary fiscal policy spur output both in the short and the medium run. Seen in this light, austerity policies do not seem to be the appropriate policy answer to overcome a crisis.

[Figure 2]

By contrast, these positive effects are not fully observed for private debt, in particular mortgage debt. More specifically, we find that the positive effects of private debt shocks are milder than those of public debt, and they fade out over time. Furthermore, increases in the level of mortgage debt have a negative impact on output and consumption dynamics in the medium run (see Figure 3), while their positive effects are only temporary and relatively mild. This result appears fully consistent with the findings of Mian and Sufi (2009) and Jordà et al. (2014): mortgage debt fuels housing bubbles, and when these bubbles burst, they trigger financial crises that transmit their negative effects to the real economy for long periods of time.

[Figure 3]

Another interesting fact that emerges from our research is that the other most important form of private debt – non-financial corporation (NFC) debt – does not generate negative medium-run impacts. As a matter of fact, as can be seen in Figure 4, surges in the level of NFC debt seem to have a positive effect both on GDP and on gross fixed capital formation, hence directly increasing the level of investment.

[Figure 4]

To conclude, our results suggest that debt has a Janus-faced nature: different types of debt affect aggregate macroeconomic dynamics differently. In particular, possible threats to medium- and long-run output growth come not from government debt (which may well be a consequence of a crisis), but rather from excessive increases in private debt. More specifically, surges in the level of mortgage debt appear to be much more dangerous than the build-up of corporate debt.

 

[1] For details about the ICA algorithm see Moneta et al. (2013); for details about its statistical properties see Gourieroux et al. (2017).

[2] When computing the Impulse Response Functions, we apply a 1 standard deviation (SD) shock to the relevant debt variable. Hence, for example, on the y-axis of Figure 2, left panel, we can read that a 1 SD shock to public debt has a 0.5% positive effect on GDP in the medium run.




Some clarifications on economic negationism

By Pierre Cahuc and André Zylberberg

We would like to thank Xavier Ragot for permitting us to respond to his comments about our book, Le Négationnisme économique [Economic Negationism]. Like many critics, Xavier Ragot considered that:

1) “The very title of the book proceeds from great violence. This book is on a slippery slope in the intellectual debate that is heading towards a caricature of debate and verbal abuse.”

2) The approach of our work is “scientistic” and “reductive”, with “faith in knowledge drawn from natural experiments” that he doesn’t believe has a “consensus in economics”.

3) We “want to import the hierarchy of academic debate into the public debate”.

We would like to respond to these three allegations, with which we disagree. 

1) On economic negationism

The term “economic negationism” does not caricature the debate. We chose it because the notion of “scientific negationism” is an expression used in debates about science, and we are talking about science here. This term is in common use, for instance on the scientific blog of the newspaper Le Monde, “Passeurs de Sciences”, which was named the best blog in the field of science. Our work reviews the significance of the term in the introduction, and then further develops this in Chapter 7. We note that scientific negationism is a strategy based on four pillars:

  • Throw doubt on and castigate “la pensée unique” [doctrinaire, dogmatic “group think”];
  • Denounce moneyed and ideological interests;
  • Condemn science because it can’t explain everything;
  • Promote “alternative” learned societies.

This strategy aims to discredit researchers whose results are considered disturbing. It affects all disciplines to one extent or another, as is shown by the works of Robert Proctor[1] and of Naomi Oreskes and Erik Conway[2]. And this is precisely the strategy adopted both by the Economistes Atterrés[3] and in the book entitled A quoi servent les économistes s’ils disent tous la même chose [What good are economists if they all say the same thing][4]. These texts all rely on the four pillars of scientific negationism set out above. They loudly proclaim the existence of dogmatic “group think” (pillar 1), which more or less accedes to the demands of the financial markets (pillar 2) and is thus unable to foresee financial crises (pillar 3), resulting in the need to create alternative learned societies (and while the AFEP, the French association of political economists, already exists, there are demands to open a new economics section in the university system) (pillar 4).

This strategy does not nourish debate. It annihilates it. It is intended solely to discredit researchers, both recognized and anonymous. Jean Tirole was recently the victim of this kind of discrediting by some self-proclaimed “heterodox” economists.

2) With regard to a scientistic and reductive approach

Xavier Ragot says that “giving a consensus among economists the status of truth” (Cahuc, Zylberberg, p. 185) is troublesome, because it ignores the contributions of “minority” efforts. We are not erecting some consensus about truth; rather, we say very specifically (p. 185) that a consensus, when it exists, is the best approximation of the “truth”. The use of quotation marks around the word truth and the qualification best approximation show clearly that we are not advocating some notion of scientistic absolutism. Our use of the terms consensus and truth seems to us to correspond to the usual practice in the scientific process.

To bolster our position on this point, we’d like to cite our book once more, on pages 184-185: “Trusting in a community made up of thousands of researchers remains the best option for having an informed opinion about subjects that we don’t really understand. It is nevertheless a form of betting, because even if science is the most reliable way to produce knowledge, it may be wrong. But to systematically call into question the results obtained by scientific specialists on a given question and prefer to rely on self-proclaimed experts is far riskier”; and on page 186: “The development of knowledge involves a collective undertaking where every researcher produces results that other researchers then test for their robustness. ‘Scientific knowledge’ is the photograph of this collective endeavour at a given point. This is the most reliable picture of what we know about the state of the world. This image is not fixed, but is in fact constantly changing.”

So when no empirical study on the reduction of statutory or contractual working hours (excluding the reduction of charges) finds a positive effect on employment, there are no grounds for asserting that reducing working time can create jobs … so long as no published studies find the opposite. Economic negationism leads to denying these results, saying that they stem from dogmatic thinking guided by either ignorance of the real world or a conspiracy. We affirm therefore that further debate is necessary, but to be constructive it must follow certain rules: the arguments must be based on contributions that have passed “peer review” to be certified as relevant. Of course, on many topics the existing studies do not make it possible to identify convergent results. When this is the case, it has to be acknowledged. There are several illustrations of this in our book.

3) On our recommendations for opening up debate and making it transparent

As we have mentioned before, our objective is not to close the “intellectual debate” to public access by laypeople, but to make the debate more constructive and informative. Debates on economics, even when simply presenting the facts, are often treated as political confrontations or boxing matches between different schools of thought. We’re simply saying that to organize informative discussion (page 209), “Journalists should stop systematically calling on the same people, especially when they have no proven research activity but are nevertheless capable of expressing themselves on every subject. They should instead seek out genuine specialists. The ranking of more than 800 economists in France on the IDEAS website can help them select relevant speakers. In any case, the web pages of researchers should be consulted to ensure that their publications appear in reputable scientific journals, a list of which is available on the same IDEAS site. If an economist hasn’t published anything in the last five years in one of the 1,700 journals listed on this site, it is clear that this person has not been an active researcher for a long time, and it is best to talk to someone else to get an informed opinion. Journalists should also systematically ask for references to the articles researchers rely on for their judgments and, where applicable, request that these items be made available online to readers, listeners and viewers.”

So, far from wanting to “import the hierarchy of the academic debate into the public debate”, as Xavier Ragot puts it, we simply want non-specialists to be better informed about the academic debate, so that they can distinguish what is a matter of uncertainty (or consensus) among researchers with regard to the political options being presented.

 

[1] Golden Holocaust: La Conspiration des industriels du tabac, Sainte Marguerite sur Mer, Équateurs, 2014.

[2] Les Marchands de doute. Ou comment une poignée de scientifiques ont masqué la vérité sur des enjeux de société tels que le tabagisme et le réchauffement climatique, Paris, Editions le Pommier, 2012.

[3] Manifeste des économistes atterrés (2010) and Nouveau manifeste des économistes atterrés (2015), éditions LLL.

[4] Editions LLL 2015.

 




“The economic negationism” of Cahuc and Zylberberg: the first-order economy

By Xavier Ragot

The book by Pierre Cahuc and André Zylberberg[1] is an injunction to take scientific truths about economics into account in the public debate, in the face of interventions that conceal private and ideological interests. The book contains interesting descriptions of the results of recent empirical work using natural experiments for the purpose of evaluating economic policies in the field of education, tax policy, the reduction of working hours, etc.

However, some assertions in the book border on the unreasonable and ultimately make it a caricature that is probably counter-productive. More than just the debate over the 35-hour working week or France’s CICE tax credit, what is at stake is the status of economic knowledge in the public debate.

1) Has economics become an experimental science like medicine and biology?

The heart of the book is the claim that economic science produces knowledge to treat social ills that is on the same scientific level as medicine. I do not believe this is true. Consider this quote from the winner of the 2015 Nobel Prize in Economics, Angus Deaton:

“I argue that experiments have no special ability to produce more credible knowledge than other methods, and that actual experiments are frequently subject to practical problems that undermine any claims to statistical or epistemic superiority.” (Deaton 2010)

The charge is serious; the point is not to deny the contributions of economic experiments but to understand their limitations and to recognize that there are many other approaches in economics (natural or controlled experiments constitute only a small percentage of the empirical work in economics).

What are the limits of experiments? Natural experiments serve only to measure average first-order effects without measuring secondary effects (so-called general equilibrium effects) that can significantly change the results. A well-known example: the work of the Nobel laureate Heckman (1998) in the economics of education, which showed that, at least in some cases, these general equilibrium effects significantly affect the results of experiments.

Moreover, experiments are not able to take into account the heterogeneity of the effects on populations, to accurately measure the confidence intervals, etc. I’ll leave these technical discussions to the article by Deaton. It should also be noted that the power to generalize from natural experiments is often weak, as these experiments are by their nature not reproducible.

Let’s take an example: Cahuc and Zylberberg use the study by Mathieu Chemin and Etienne Wasmer (2009) comparing the effects of the reduction of working time between Alsace and the whole of France to identify the impact on employment of an additional reduction of 20 minutes of working time. This work finds no impact from an additional 20-minute reduction in working time on employment. Can we conclude that the transition to 35 hours, a reduction in working time more than ten times as great, has no impact on employment? Could there be interaction effects between lowering social contributions and reducing working time? I don’t think it can be said that simply reducing working time creates jobs, but it seems difficult to claim scientifically that the transition to 35 hours did not create jobs based on the studies cited (the authors also draw on the example of Quebec, where the reduction was much greater).

The economist uses data in much more diverse ways than Cahuc and Zylberberg present. The book does not discuss laboratory experiments conducted in economics (see Levitt and List, 2007). Further, the relationship of economics to data is changing as digitization creates access to vast amounts of data (“big data” for short). Econometrics will in all likelihood make more intense use of structural estimation techniques. In a recent work (Challe et al., 2016), we develop, for example, a framework for using both microeconomic and macroeconomic data to measure the impact of the Great Recession in the US. Finally, there has been a renewal of economic history and of studies based on long data series. The work of Thomas Piketty is an example that has not gone unnoticed. Other work, including on financial instability (especially that by Moritz Schularick and Alan M. Taylor), also uses long time periods to enhance intelligibility. In short, the relationship of economics to data involves multiple methods that can yield conflicting results.

This is no mere detail: the book's scientistic approach is reductive. Cahuc and Zylberberg advance a faith in knowledge drawn from natural experiments that, I believe, does not command a consensus in economics.

2) How to sidestep major questions

Here is a concrete illustration of the problem with this approach. The authors deliver a severe verdict on France's CICE tax credit (the government's reduction of employer social charges on wages up to 2.5 times the minimum wage, the SMIC). Their main argument is that reducing charges near the SMIC is well known to have a much bigger impact on employment than reductions at higher wage levels. This last point is true, but the authors are sidestepping the real issue. What is it?

The early years of the euro saw an unprecedented divergence in labour costs and inflation between European countries. Until the 1990s, such differences were corrected over time by devaluations and revaluations, but the single currency has made this impossible. The question facing economists is whether the euro zone can survive such misalignments (see Stiglitz's recent position on this subject). The discussion has focused on implementing internal devaluations in the overvalued European countries and boosting wages in the undervalued ones. To this end, Germany introduced a minimum wage, some countries cut civil servants' salaries, and others lowered their social contributions (the CICE tax credit in France), bearing in mind that other fiscal tools are also available (see Emmanuel Farhi, Gita Gopinath and Oleg Itskhoki, 2013). The crucial questions are therefore: 1) Is an internal devaluation necessary in France, and if so, of what size? 2) How could a non-recessionary internal devaluation be implemented without increasing inequality?

There is clearly a problem, then, if one answers these questions solely on the basis of the impact of reductions in social charges near the SMIC. This shows the danger of relying only on results measurable by experiments: it neglects key issues that cannot be settled by this method.

3) The problem of “Keynesianism”

The authors claim that Keynesianism provides fertile soil for negationism, even while acknowledging in the book that Keynes's recipes work sometimes but not always, which any economist would accept. In the absence of clarification, such remarks become problematic. Indeed, recent years (since the 2008 subprime crisis) have witnessed a return of Keynesian approaches, as recent publications attest. I would go so far as to say that we are living in a Keynesian moment, marked by great financial instability and massive macroeconomic imbalances (Ragot, 2016).

What, then, is Keynesianism? (It is not, of course, fiscal irresponsibility financed by ever greater public debt.) It is the claim that price movements do not always allow markets to operate normally: prices move slowly, wages are downwardly rigid, nominal interest rates cannot fall far below zero, and so on. As a result, there are aggregate demand externalities that justify public intervention to stabilize the economy. The French debate generates concepts like “Keynesianism” and “liberalism” that have no real meaning in economic science. It is the role of the scientist to avoid false debates, not to perpetuate them.

4) Should we listen only to researchers publishing in the top journals?

The public debate differs greatly from the scientific debate in both purpose and form. Cahuc and Zylberberg want to import the hierarchy of academic debate into the public debate. This won’t work.

There will always be a need for non-academic economists to discuss economic issues. The economic situation raises problems on which there is no academic consensus. The business press is full of advice from economists at banks, in the markets, in institutions and in trade unions, all of whom have legitimate, though non-academic, points of view. Publications like Alternatives Economiques, quoted by Cahuc and Zylberberg, present their views, as does the Financial Times, which mixes genres. Economists without formal academic credentials play a legitimate role in this debate, even when their opinions differ from those of researchers with longer CVs.

These tensions are experienced concretely at the OFCE, whose mission is to contribute to the public debate with academic rigour. This is a very difficult exercise: it requires knowledge of the data, of the legal framework, of the work of institutions such as the Treasury, the OECD, the IMF and the European Commission, and of the academic literature. Knowledge of the economic literature is essential, but it is far from sufficient for making a useful contribution to the public debate.

The willingness of economists to contribute to the public debate was exemplified by the various petitions around the El Khomri law. These petitions widely debated the effect of redundancy costs on hiring and the form of the employment contract, but not the inversion of the hierarchy of norms (a subject that, to my knowledge, is impossible to evaluate rigorously), even though this was at the heart of the debate between the government and the trade unions! It is not certain that the idea of a consensus among economists will emerge strengthened from this episode.

5) When a consensus exists in economics, do we have to listen to it?

The consensus before the subprime crisis was that financialization and securitization promoted economic stabilization through better risk allocation. Microeconomic studies confirmed these intuitions because they failed to capture the real source of financial instability: the correlation of risks across investor portfolios. We now know that the consensus was wrong. Some economists, such as Roubini or Aglietta, and some of the economic press, such as The Economist, did warn of the destabilizing effects of finance, but they stood outside the consensus.
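
A back-of-the-envelope calculation makes this point concrete. The following generic sketch (not a model drawn from the texts cited; all parameter values are illustrative) shows that diversification eliminates idiosyncratic risk but leaves the correlated component of portfolio risk intact:

```python
import math

# n identical assets, each with standard deviation sigma and pairwise
# correlation rho, held in equal weights. The portfolio variance is
#   sigma^2 / n + (1 - 1/n) * rho * sigma^2,
# which converges to rho * sigma^2 as n grows.
sigma, rho = 0.20, 0.30
for n in (1, 10, 100, 10_000):
    var = sigma**2 / n + (1 - 1 / n) * rho * sigma**2
    print(f"n = {n:>6}: portfolio std = {math.sqrt(var):.4f}")

# Idiosyncratic risk (the sigma^2/n term) vanishes as n grows, but the
# correlated component (rho * sigma^2) does not: no amount of
# securitization diversifies away risks that move together.
```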

Policymakers (and the public debate) are forced to ask what will happen if the consensus is wrong: they have to manage all the risks, and that is their responsibility. The consensus view among economists is often not very informative about the diversity of viewpoints and the risks involved. The public voice of economists outside the consensus is therefore necessary and useful. Consider that the Nobel Prize in Economics was awarded to both Eugene Fama and Robert Shiller, who each studied financial economics: the first asserts that financial markets are efficient, the second that they generate excessive volatility. Newspapers also carry visions from outside the consensus, such as Alternatives Economiques in France (the alternative is at least in the title). These publications are useful to public discussion precisely because of their openness to debate.

In science, the diversity of methods, and the knowledge produced outside the consensus, enrich the debate. For this reason, I tended to oppose the creation of a new section for heterodox economists, supported by the French association of political economists (AFEP), because I see an intellectual cost in segmenting the world of economists. For the same reason, giving a consensus among economists the status of truth (Cahuc and Zylberberg, p. 185) is troublesome, because it ignores the contributions of the “minority” effort.

6) “Economic negationism”: the radicalization of the discourse

The authors castigate ideological critics of economics who are unfamiliar with the results, or even the practice, of economists. Economic science has strong political implications and is therefore always attacked when it generates disturbing results. Some criticisms lower the intellectual debate to the level of personal insult. A defence of the integrity of economists is welcome, but it requires real learning and modesty to explain what is known and what is not.

On reading Cahuc and Zylberberg's book, it seems that the authors take up their opponents' weapons: two camps are defined (real science versus the deniers), doubts are cast on the intellectual honesty of the pseudo-scientists outside the consensus, and the argument proceeds by amalgam, lumping intellectuals (Sartre) together with academic economists. The very title of the book carries great violence. The book is on a slippery slope, one that leads the intellectual debate towards caricature and verbal abuse. Every economist involved in the public debate has been insulted by people who, for purely ideological reasons, disagree with the results presented. Insults must be fought, but not by suggesting that debate can be avoided on account of one's academic status.

The Brexit debate in the United Kingdom showed how economists and experts were rejected because of their perceived arrogance. I am not sure that the book's scientistic position offers a solution to these developments in the public debate. To quote Angus Deaton once again, from a recent interview with the newspaper Le Monde:

“To believe that we have all the data is singularly lacking in humility. … There is certainly a consensus in economics, but its scope is much narrower than economists think.”

 

References

Angus Deaton, 2010, “Instruments, Randomization, and Learning about Development”, Journal of Economic Literature, 48(2), 424-455.

Edouard Challe, Julien Matheron, Xavier Ragot and Juan Rubio-Ramirez, 2016, “Precautionary Saving and Aggregate Demand”, Quantitative Economics, forthcoming.

Matthieu Chemin and Etienne Wasmer, 2009, “Using Alsace-Moselle Local Laws to Build a Difference-in-Differences Estimation Strategy of the Employment Effects of the 35-hour Workweek Regulation in France”, Journal of Labor Economics, 27(4), 487-524.

Emmanuel Farhi, Gita Gopinath and Oleg Itskhoki, 2013, “Fiscal Devaluations”, Review of Economic Studies, 81(2), 725-760.

James J. Heckman, Lance Lochner and Christopher Taber, 1998, “General-Equilibrium Treatment Effects: A Study of Tuition Policy”, American Economic Review, 88(2), 381-386.

Steven D. Levitt and John A. List, 2007, “What Do Laboratory Experiments Measuring Social Preferences Reveal About the Real World?”, Journal of Economic Perspectives, 21(2), 153-174.

Xavier Ragot, 2016, “Le retour de l’économie keynésienne”, Revue d’Économie Financière.

 

[1] Pierre Cahuc and André Zylberberg, Le négationnisme économique et comment s’en débarrasser, Paris, Flammarion, 2016.