The ECB is still worried about the weakness of inflation

By Christophe Blot, Jérôme Creel and Paul Hubert

The President of the European Central Bank, Mario Draghi, recently announced that the increase in the ECB’s key interest rate would come “well past” the end of its massive purchases of bonds – mainly issued by the euro zone countries and scheduled to run until September 2018 – and that it would then proceed at a “measured pace”. The first increase in the key rate could therefore come in mid-2019, a few weeks before the transfer of power between Mario Draghi and his successor.

In his quarterly hearing with MEPs, Mario Draghi was cautious about the strength and sustainability of the economic recovery [1]. Listening to him, the euro zone has not necessarily closed its output gap (actual GDP may still be below its potential) despite the recovery of recent quarters. This is therefore not the time to change the direction of monetary policy, at the risk of weakening the recovery. It is also undeniable that the effects of the recovery are materializing only slowly and gradually in wage increases, which partly explains why the euro zone inflation rate remains below its medium-term target.

The ECB President also expressed confidence that companies are gradually anchoring their price (and wage) expectations on the ECB’s inflation target of 2% per year. Mario Draghi also appeared very confident in the effectiveness of monetary policy: he announced that the measures undertaken since 2014 would contribute a cumulative 2 percentage points to both real growth and inflation between 2016 and 2019.

While the ECB’s forecast that inflation will return to its target in 2019 is contradicted by Hasenzagl et al. (2018), their study points to the same determinants of European inflation. In a recent study, we also show that the two main determinants of inflation in the euro area are inflation expectations and wage growth. Unless the former are anchored on the ECB’s medium-term target and unless monetary policy has second-round effects on wages, inflation will not return to its target in the short term. Structural reforms may have increased potential GDP, as argued by Mario Draghi, but they have so far more certainly weighed on wage and price developments.

 

[1] Once a quarter, a monetary dialogue is organized between the President of the ECB and the members of the Monetary Affairs Committee of the European Parliament. This dialogue allows the President of the ECB to explain the direction of monetary policy in the euro area and to express his point of view on topics defined in advance.

 




Missing deflation – unique to America?

By Paul Hubert, Mathilde Le Moigne

Was the way inflation unfolded after the 2007-2009 crisis atypical? According to Paul Krugman: “If inflation [note: in the United States] had responded to the Great Recession and aftermath in the same way it did in previous big slumps, we would be deep in deflation by now; we aren’t.” Indeed, after 2009, inflation in the United States remained surprisingly stable given actual economic developments. Has this phenomenon, which has been described as “missing deflation”, been observed in the euro zone?

Despite the deepest recession since the 1929 crisis, the inflation rate remained stable at around 1.5% on average between 2008 and 2011 in the United States, and 1% in the euro zone. Does this mean that the Phillips curve, which links inflation to real activity, has lost its empirical validity? In a note in 2016, Olivier Blanchard recalls on the contrary that the Phillips curve, in its simplest original version, remains a valid instrument for understanding the links between inflation and unemployment, despite this “missing disinflation”. Blanchard notes, however, that the link between the two variables has weakened because inflation is increasingly dependent on expectations of inflation, which are themselves anchored in the US Federal Reserve’s inflation target. In their 2015 article, Coibion and Gorodnichenko explain the missing deflation in the United States by the fact that inflation expectations tend to be influenced by the most visible price changes, such as changes in the price of a barrel of oil. Since 2015, we have seen a drop in inflation expectations concomitant with the decline in oil prices.

The difficulty in accounting for recent changes in inflation using the Phillips curve led us in a recent article to evaluate its potential determinants and to ask whether the euro zone has also experienced a phenomenon of “missing deflation”. Based on a standard Phillips curve, we do not reproduce the findings of Coibion and Gorodnichenko when we consider the euro zone as a whole. In other words, real activity and inflation expectations give a good description of the way inflation has behaved.
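As a point of reference, a generic expectations-augmented Phillips curve of the kind referred to here can be written as follows (the notation is illustrative and ours, not the exact specification estimated in the article):

$$\pi_t = \alpha\,\pi^{e}_t + \beta\,\tilde{y}_t + \gamma\,\pi^{imp}_t + \varepsilon_t$$

where $\pi_t$ is inflation, $\pi^{e}_t$ is a measure of inflation expectations, $\tilde{y}_t$ is a measure of slack in real activity (an output gap or an unemployment gap), and $\pi^{imp}_t$ captures the most visible imported price changes, such as oil. The “missing deflation” question is whether the slack term alone would have predicted much lower inflation than was actually observed, and whether movements in expectations fill the gap.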

This result seems to come, however, from a bias in aggregation between national inflation behaviours in the euro zone. In particular, we find a notable divergence between the countries of northern Europe (Germany, France), which show a general tendency towards missing inflation, and the more peripheral countries (Spain, Italy, Greece), which are exhibiting periods of missing deflation. This divergence nevertheless shows up from the beginning of our sample, that is to say, in the first years when the euro zone was created, and seems to be absorbed from 2006, without undergoing any notable change during the 2008-2009 crisis.

In contrast to what happened in the United States, it seems that the euro zone did not experience missing deflation as a result of the 2008-2009 economic and financial crisis. On the contrary, it seems that divergences in inflation in Europe predate the crisis and tended to be absorbed by the crisis.

 




How can Europe be saved? How can the paradigm be changed?

By Xavier Ragot

There are new inflections in the debate over the construction of Europe. New options from a variety of economic and political perspectives have seen the light of day in several key conferences and workshops, though without the visibility of public statements. The debate is livelier in Germany than in France. This is probably due to the caricatured debate that took place during France’s presidential election, framed as “for or against the single currency”, whereas the debate that was needed was over how to orient the euro area’s institutions so as to serve growth and deal with inequalities.
Two conferences were held in Berlin one week apart that considered opposing options. The first tackled the consequences of a country leaving the euro area; the second examined an alternative paradigm for reducing inequalities in Europe. In other words, the two conferences covered almost the entire spectrum of conceivable economic policies.

Sowing fear: the end of the euro area?

The first question: What would happen if one or more countries left the euro area? Should we hope for this, or how could we prevent it? A conference held on March 14 under the title “Is the euro sustainable – and what if it isn’t?” brought together heads of influential institutes such as Clemens Fuest, one of the five German “wise men”, Christoph Schmidt, economists frequently seen in the German media such as Hans-Werner Sinn, and other economists such as Jeromin Zettelmeyer. The presence of the OFCE, which I represented, hopefully helped to serve as a reminder of some simple but useful points.

This first conference sometimes played with the ambiguity of the issue, with some contributions seeming to wish for an end to the euro area while others were more analytical in order to show the risks. The voice of Hans-Werner Sinn stood out during this discussion for its radical stance. Without going so far as to wish that Germany left the euro area, Sinn insisted in a systematic (and skewed) way that Germany was suffering under Europe’s monetary policy. He insisted in particular on the role of Germany’s hidden exposure to the debt of other countries through the European Central Bank and TARGET2, which books the surpluses and deficits of the national central banks vis-à-vis the ECB. The TARGET2 balance shows that the southern European countries are running a deficit, while Germany has a substantial surplus of almost 900 billion euros, which represents 30% of German GDP. These amounts are very significant, but do not in any way represent a cost for Germany.

In the most extreme case of a national central bank’s failure to pay (i.e. an exit from the euro area), the loss would be shared by all the other member states, independently of their TARGET2 surpluses. The TARGET2 balances are part of Europe’s monetary policy, which is aimed at achieving a goal that was agreed on: an average inflation level of 2%. This target has not been hit for many years. Moreover, this policy has led to low interest rates that benefit Germans, who pay low interest charges on their public debt, as Jeromin Zettelmeyer pointed out. Finally, Germany’s large trade surplus shows that the lack of an exchange rate mechanism within the euro area has benefited Germany significantly. Recall that Germany’s external surplus exceeded China’s in 2016, according to the German institute Ifo!

My presentation was based on the OFCE’s numerous studies of the European crisis. The OFCE has published an analytical note on the effects of an exit from the euro area, showing all the related costs. The studies by Durand and Villemot provide the analytical basis for establishing orders of magnitude. How much would Germans’ wealth decline if the euro area were to collapse? The result is, in the end, not very surprising: the Germans would be the biggest losers, with a loss of wealth on the order of 15% of GDP. These figures are of course very tentative and need to be interpreted with the utmost care. The collapse of the euro area would plunge us into unexplored territory, which could surprise us with unexpected sources of instability.

After these preliminary elements, the heart of my presentation focused on a simple point. The real challenge facing us is to build coherent labour markets within the euro area while reducing inequalities. Following the common monetary policy, the coordination of fiscal policy carried out so painfully after 2014 and the aberrations of recessionary fiscal policy (austerity), the main question facing Europe over the next ten years is how to develop coherent labour markets. Indeed, Germany’s wage moderation, the result of the difficulties with reunification in the early 1990s, has been a powerful destabilizing force in Europe, as was shown in an article by Mathilde Le Moigne. What is called the supply problem in France is in fact the result of divergences within Europe on the labour market in the wake of Germany’s wage moderation. I proposed that the European Parliament initiate a Europe-wide discussion of national wage dynamics in order to bring about the convergence of wages in a non-deflationary way while avoiding high unemployment in southern Europe. This coordination of economic policy on the labour market is what the English term “wage stance” designates. The coordination of changes in minimum wages and in regulated wages, which orient the direction of wage changes in labour negotiations, is one tool for coordinating labour markets.

A second tool is of course the establishment of a European system of unemployment insurance, which would be much less complex than one might think. A European unemployment insurance would aim to be complementary to national unemployment insurance, and not a replacement. National unemployment insurance systems are actually heterogeneous because, on the one hand, the labour markets are distinct, and on the other hand national preferences differ. Unemployment insurance systems are for the most part the result of historical social compromises.

How should this relatively radical German stance against Europe be interpreted today? Perhaps it represents the discontent of economists who are losing influence in Germany. It might seem paradoxical, but many German economists and observers are coming around to recognizing the need to build a different Europe, one not based on rules but leaving room for political choices within strong institutions – that is, agile, well-respected institutions rather than rules. This position is associated with France in the European debate: choices rather than rules. The German coalition agreement that paved the way for an SPD/CDU government has placed the issue of Europe at the center of the agreement, but with a great deal of vagueness about the content. Certain developments will test the relevance of this hypothesis, in particular the issue of a euro area minister and the nature of the decision-making rules within the key crisis-resolution mechanism, the European Stability Mechanism.

Europe: Changing the software / model / paradigm / narrative

A second, more confidential conference proved to be even more exciting, with the presence of the European Climate Foundation on the climate issue, the INET institute on developments in economic thought, and the OFCE on European imbalances. The aim of the conference was to reflect on a shift in paradigm, or narrative, and to come up with a new articulation between politics and economics, the state and the market, in order to think about sustainable growth in both climate and social terms. A narrative is a vision of the world conveyed by simple language. Thus the “neoliberal” narrative is built on positive words like “competition”, “markets” and “freedom” as well as negative words like “profit”, “interventionism” and “egalitarianism”, which together create a language. Donald Trump produces an equally effective narrative: “giving power back to the people”, “America first”; this narrative marks the return of politics in a mode that assumes an underlying nationalism.
How could another narrative be built that has a central focus on the evidence for the fight against global warming and the aggravation of inequality and financial instability?

For one day, economists renowned in Europe spoke about artificial intelligence, global warming, current forms of economic and industrial policy, the dynamics of credit and financial bubbles, and more. Empirical work at the forefront of current research and reflections on the possibility of a coherent storyline were combined in the promise of an alternative narrative. It was just a start. The possibility came to light of a renewal of thought that transcends political divisions and speaks about what is essential: how can the economy be placed at the service of a political project that aims not to rebuild borders to exclude, but to imagine our common humanity?

These two conferences show the vitality of the European debate, which is presented from an overly technical perspective in France. The raison d’être of the euro is a common project. It is at this level that we need to conduct the discussion leading into the 2019 European elections.

 




The minimum wage: from labour costs to living standards. Comparing France, Germany and the UK

By Odile Chagny (IRES), Sabine Le Bayon, Catherine Mathieu and Henri Sterdyniak (OFCE)

Most developed countries now have a minimum wage, including 22 of the 28 EU countries. France has long stood out for its relatively high minimum wage, the SMIC. But in 1999 the United Kingdom introduced a minimum wage, and the British government’s goal is to raise it to 60% of the median wage by 2020, which would bring it to the level of France’s SMIC, among the highest in the OECD. More recently, in 2015, Germany also introduced a minimum wage.

Note that the gross wage is a legal concept. What matters from an economic point of view is the cost of labour for the firm, on the one hand, and the disposable income (including benefits and taxes) of a household whose members earn the minimum wage, on the other.
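Schematically – this is a stylised decomposition in our own notation, not the Policy Brief’s – the two concepts can be written as:

$$\begin{aligned}
\text{labour cost} &= W_{\text{gross}} \times (1 + \tau_{\text{employer}}) - \text{reductions in employer contributions},\\
\text{disposable income} &= \textstyle\sum_{\text{earners}} W_{\text{gross}} \times (1 - \tau_{\text{employee}}) - \text{income tax} + \text{family and housing benefits},
\end{aligned}$$

so that two countries with the same gross minimum wage can differ markedly both in what employers actually pay and in what households actually live on.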

In OFCE Policy Brief no. 34 we present a comparison of the minimum wages in force in 2017 in these three countries, using standard cases, from the viewpoint first of the cost of labour and then with respect to employees’ standard of living.

It appears that the cost of labour is slightly higher in Germany than in France, and much more so than in the United Kingdom, and that the reforms announced in France for 2019 (reducing contributions) will strengthen France’s competitive advantage vis-à-vis Germany. The cost of labour at the minimum wage is therefore not particularly high in France (Table).

[Table: the cost of labour at the minimum wage in France, Germany and the United Kingdom]

With regard to disposable income, a comparison of different working-time arrangements and family situations highlights different logics in the three countries. In Germany, the underlying rationale is to protect families from poverty, regardless of the parents’ working situation. In France, in contrast, a family with two children must have two people working full-time at the SMIC to escape poverty, as the tax-benefit system seeks to encourage women’s integration into the labour market. France is thus the only one of the three countries where a single-earner family with two children, whose working parent is employed full-time at the minimum wage, falls below the monetary poverty line (Figure).

[Figure: standard of living of typical minimum wage households relative to the poverty line]

From the point of view of the relative position of minimum wage earners in relation to the general population, our study highlights the rather favourable situation of the United Kingdom. The standard of living there is comparatively high: all the families considered in our typical cases have a standard of living above the poverty line, on the order of 30% higher for a family where both parents work full-time at the minimum wage. The gain from taking up a job is high, as in France, while it is low in Germany in all the configurations.

Finally, our analysis contributes to the debate about the establishment of a Europe-wide minimum wage. A policy to harmonize minimum wages in Europe, as conceived by the European Trade Union Confederation and supported by France, cannot be thought of solely in terms of labour income; it also needs to take into account the goals targeted in terms of living standards, especially for families.

 

 




The dilemmas of immaterial capitalism

By Sarah Guillou

A review of: Jonathan Haskel and Stian Westlake, Capitalism Without Capital. The Rise of the Intangible Economy, Princeton University Press, 2017, 288 pp.

This book is at the crossroads of the debate about the nature of current and future growth. The increasing role of intangible assets is indeed at the heart of questions about productivity gains, the jobs of tomorrow, rising inequality, corporate taxation and the source of future incomes.

This is not simply the umpteenth book on the new economy or on future technological breakthroughs, but more fundamentally a book on the rupture being made by modes of production that are less and less based on fixed, or material, capital and increasingly on intangible assets. Digressions on an immaterial society are not new; the value of the book is rather that it gives this notion real economic content and synthesizes the research showing the economic upheavals arising from the increasing role of this type of capital.

Jonathan Haskel and Stian Westlake describe the changes brought about by the growth in the share of immaterial assets in the 21st century economy, including in terms of the measurement of growth, the dynamics of inequality, and the ways in which companies are run, the economy is financed and public growth policies are set. While the authors do not set themselves the goal of building a new theory of value, they nevertheless provide evidence that it does need to be reconstructed. This is based in particular on the construction of a database – INTAN-invest – as part of a programme financed by the European Commission and initiated by the American studies of Corrado, Hulten and Sichel (2005, 2009).

By immaterial assets is meant the immaterial elements of an economic activity that generate value over more than one period: a trademark, a patent, a copyright, a design, a mode of organization or production, a manufacturing process, a computer program or algorithm that creates information, but also a reputation or a marketing innovation, or even the quality and/or the specific features of staff training. These are assets that should add to a company’s balance sheet; they can depreciate with time; and they result from the consumption of resources, and therefore from immaterial or intangible investment. There is a broad consensus on the importance of these assets in explaining the prices of the goods and services we consume and in determining the non-price competitiveness of products. These assets are determining elements of “added value”.

However, despite this consensus, the measurement of intangible assets is far from commensurate with their importance. Yet measuring assets improperly leads to many statistical distortions: first, in the measurement of growth, because investments increase GDP; second, in the measurement of productivity, because capital and added value are poorly measured; and finally, in profits and perhaps also the distribution of added value, if intangible capital is booked as expenditure rather than as investment. The authors show in particular that the increasing importance of intangible assets can explain the four arguments underpinning secular stagnation. First, the slowdown in productivity could be the result of an incorrect valuation of intangible added value. Furthermore, both the gap between companies’ profits and their book value and the slowdown in investment despite very low interest rates could be explained by an incomplete accounting of intangible assets that underestimates capital. Finally, the increase in the inequalities in productivity and profits between firms is the result of the characteristics of intangible assets, which polarize profits and are associated with significant returns to scale.
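A stylised numerical example – the figures are ours, purely for illustration – makes the mechanism visible. Suppose a firm sells 500 of output, buys 200 of ordinary inputs and pays 100 to an outside provider to develop software for its own use:

$$\begin{aligned}
\text{expensed as intermediate consumption:}\quad & \text{value added} = 500 - (200 + 100) = 200, \quad \text{no asset recorded};\\
\text{capitalised as investment:}\quad & \text{value added} = 500 - 200 = 300, \quad \text{gross fixed capital formation} = 100.
\end{aligned}$$

Measured GDP, the profit share and the capital stock are all higher in the second case, and measured productivity moves with them: whether intangible spending is expensed or capitalised thus matters for every one of the diagnoses listed above.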

Awareness of the measurement problem is not recent. The authors recall the major milestones that brought experts together to deal with the measurement of intangible assets, from the writing of the Frascati Manual (1963, 2015), which lays the foundations for the accounting of R&D activity, up to the latest reform of the systems of national accounts (the SNA 2008), which brings R&D into gross fixed capital formation. But even today it is not possible to account for all intangible assets. This is due in part to the fact that corporate accounting is still reluctant to integrate intangible capital insofar as it has no market price. So while it is simple to book the purchase of a patent as an asset, it is much more difficult to value the development of an algorithm within a company, or to give a value to the way it is organized, to its innovative manufacturing processes or to its internal training efforts. Only when something is traded on a market does it acquire an external value that can be recorded, unhesitatingly, on the asset side of the balance sheet.

Nevertheless, the challenge in measuring this is fundamental if we believe the rest of the book. Indeed, the increasing immateriality of capital has consequences for inequalities (Chapter 6), for institutions and infrastructure (Chapter 7), for financing the economy (Chapter 8), for private governance (Chapter 9) and for public governance (Chapter 10).

The stakes here are critical because of the specific characteristics of these immaterial assets, which are summarized in the “four S’s” (Chapter 2): scalability, sunkenness, spillovers and synergies. This means, first, that immaterial assets have the particularity of being able to be deployed on a large production scale without depreciating (“scalability”). Second, they are associated with sunk costs: once the investment has been made, it is difficult for the company to sell the asset on a secondary market, so there is no turning back (“sunkenness”). Next, these assets have “spillovers”; in other words, they spread beyond their owners. Finally, they combine easily, creating “synergies” that increase profitability.

These characteristics imply a modification of the functioning of capitalism, which we are all already witnessing: they give a premium to the winners, they exacerbate the differences between the holders of certain intangible assets and those engaged in more traditional activities, they polarize economic activity in large urban centres, and they overvalue the talents of managers capable of orchestrating synergies between immaterial assets. At the same time, the prevalence of these assets requires modified public policies. This concerns, first, the protection of the property rights over these intangible assets, which are intellectual in nature and difficult to fully appropriate due to their volatile nature. Even though intellectual property rights have long been established, they now face two challenges: their universal character (many countries apply them only sparingly) and achieving a balance (they should not create barriers so complex that new innovators cannot enter, while being sufficiently protective to allow the fruits of investments to be harvested). Moreover, spillover effects need to be promoted by ensuring a balance in the development of cities and in the interactions between individuals, while also creating incentives for the financing of intangible investments. Bank financing, which relies on tangible guarantees, is not well suited to the new intangible economy, especially as debt benefits from the tax advantage of deducting interest from taxable income. It is therefore important to develop equity financing and public co-financing. More generally, the public policy best suited to the intangible economy involves creating certainty, stability and confidence, in order to deal with the intrinsic uncertainty of risky intangible investments.

What emerges from this reading is a clear awareness of the need to promote the development of investment in immaterial assets, but also a demonstration that the growing immateriality of capital is giving rise to forces driving inequality. This duality can prove problematic.

More specifically, three dilemmas are identified. The first concerns the way intangible investments are financed. The highly risky nature of intangible investments – because they are irrecoverable, collateral-free and have an uncertain return – calls for investors to take advantage of diversification and dispersal. And yet, as the authors show, what companies in this new economy need are investors who hold large, stable blocks of shares and are engaged in the company’s project. The second dilemma concerns state support. It is justified because intangible investments have a social return that goes beyond their private return and, in the face of shortfalls in private financing, public financing is necessary. However, corporate taxation has not yet adapted to these new sources of wealth creation, and states face growing difficulties in raising taxes and identifying the taxable base. Furthermore, states are competing to attract businesses in the new economy through tax expenditures and subsidies. The third dilemma is undoubtedly the most fundamental. It involves the contradiction between, on the one hand, the inequalities caused by the rise of intangible capital – whether in the labour market (job polarization [1]), in the goods market (concentration) or geographically (geographical polarization) – and, on the other hand, the need for strong social cohesion, trust and human-scale urban centres that provide favourable terrain for the development of the synergies and exchanges that nourish intangible assets. In other words, the inequalities created erode social capital, which is detrimental to the future development of intangible assets.

It is in resolving these dilemmas that this new capitalism will be able to remain in accord with our democracies.

 

[1] See Gregory Verdugo: “The new labour inequalities. Why jobs are polarizing”, OFCE blog.

 




The 2018 European economy: A hymn to reform

By Jérôme Creel

The OFCE has just published the 2018 European Economy [in French]. The book provides an assessment of the European Union (EU) following a period of sharp political tension but in an improving economic climate that should be conducive to reform, before the process of the UK’s separation from the EU takes place.

Many economic and political issues crucial to better understanding the future of the EU are summarized in the book: the history of EU integration and the risks of disintegration; the recent improvement in its economic situation; the economic, political and financial stakes involved in Brexit; the state of labour mobility within the Union; its climate policy; the representativeness of European institutions; and the reform of EU economic governance, both budgetary and monetary.

The year 2018 is a pivotal year prior to the elections to the European Parliament in spring 2019, but also prior to the 20th anniversary of the euro on 1 January 2019. The question of the euro’s performance will be central. In 2018, moreover, gross domestic product will finally rise well above its pre-crisis level, thanks to renewed business investment and the support of monetary policy, which is now no longer hindered by fiscal policy.

The year 2018 will also mark the beginning of negotiations on the future economic and financial relationship between the United Kingdom and the EU, after the two parties found common ground at the end of 2017 on the arrangements for the UK leaving the Union. The EU’s renewed growth will reduce the potential costs of the divorce from the British and could also lessen Europeans’ interest in this issue.

Brexit could have served as a catalyst for reforming Europe; the fact that its mechanisms may now seem less crucial to the EU’s future functioning should not lead to the reforms the EU needs being treated as superfluous. In the political and monetary fields, there is a great need to strengthen the democratic representativeness of EU institutions (parliament, central bank) and to ensure the euro’s legitimacy. In the fields of fiscal and immigration policy, past experience has demonstrated the need for coordinated tools to better manage future economic and financial crises.

There is therefore an urgent need to revitalize a project that is over sixty years old, one that has managed to ensure peace and prosperity in Europe, but which lacks flexibility in the face of the unpredictable (crises), which lacks vigour in the face of the imperatives of the ecological transition, and which is singularly lacking in creativity to strengthen the convergences within it.

 




France’s growth in 2018-2019: What the forecasters say …

By Sabine Le Bayon and Christine Rifflart

Following the INSEE’s publication of the first version of the accounts for the fourth quarter of 2017 and a first estimate of annual growth, we have been considering the outlook for 2018 and 2019 based on a comparative analysis of the forecasts made for France by 18 public and private institutes, including the OFCE, between September and December 2017. This post presents the highlights of this analysis, which is detailed in OFCE Policy Brief No. 32 of 8 February 2018, “A comparison of macroeconomic forecasts for France”, and in the associated working paper (No. 06-2018), which contains the tables of the institutes’ forecasts.

Following the deep recession of 2008-2009 and the euro zone crisis of 2011, the French economy started a slow recovery, which picked up pace in late 2016. The year 2017 was thus a year of recovery, with slightly higher growth than most forecasters had recently expected: 1.9% according to the INSEE’s first estimate, compared to an average forecast of 1.8%. This momentum is expected to continue in 2018 and 2019, with the forecasts averaging 1.8% and 1.7%, respectively. The standard deviations are low (0.1 point in 2018 and 0.2 in 2019), and the forecasts are fairly close for 2018 but diverge more sharply in 2019 (ranging from a low of 1.4% to a high of 2.2%) (Figure 1). In 2019, 5 out of 15 institutes expect growth to accelerate while 8 foresee a slowdown.

[Figure 1]

Overall, all but four of the institutes anticipate a rebalancing of the drivers of growth over the period, with trade having less of an adverse effect than in the past and domestic demand still buoyant (Figure 2). However, the recovery in foreign trade is under debate in light of the chronic losses in market shares recorded since the beginning of the 2000s. Indeed, it seems that the expected pick-up in exports in 2018 will be due more to a recovery in foreign demand for France’s output and to the rundown of the export-oriented stocks accumulated in 2016 and 2017 in certain sectors (in particular transport equipment and aeronautics) than to any recovery in competitiveness. For 2019, there are differences in opinion about the impact of the supply policies implemented since 2013 on French companies’ price and non-price competitiveness. Some institutes expect an improvement in export performance and thus a regain of market share by 2019, while others foresee a loss of share due to insufficient investment in high value-added sectors and labour costs that still burden business.

[Figure 2]

There is also debate over the forecasts for jobs and wages, in particular over the impact of the cutbacks in subsidized jobs, the effect of the policies to lower labour costs in 2019 (transformation of the CICE competitiveness tax credit into lower employer social contributions) and productivity (trend and cycle). On average, the unemployment rate should fall from 9.5% in 2017 to 8.8% in 2019, with forecasts ranging from 8.1% for the most optimistic to 9.2% for the most pessimistic. Some differences in the forecasts on wages can be attributed to differing assessments both of the degree of tension on the labour market and also of the impact on wages of the more decentralized collective bargaining set up in 2017. Wages are expected to rise by 1.8% in 2017 and on average by 1.9% in 2018 and 2% in 2019 (ranging from 1.3% for the lowest forecast to 2.6% for the highest).

In this context, growth will be well above potential growth, which most institutes estimate at around 1.25% (some expect an acceleration due to the positive impact of structural reforms and investment, while others foresee lower potential growth). While in 2017 the output gap – the difference between observed GDP and potential GDP – is clearly negative (between -2.2 and -0.7 points of potential GDP), it should close by 2019. Most of the institutes (among those that provided us with data or qualitative information) believe the output gap will be closed (close to 0 or clearly positive) and that inflationary pressures could appear. For four institutes, the output gap will still be around -0.7 point.

Finally, for all the institutes the budget deficit falls below the threshold of 3% of GDP as of 2017, and France will exit the excessive deficit procedure in 2018. But despite the vigorous growth, and in the absence of stricter fiscal consolidation, for most of the institutes the public deficit will remain high over the period.

 




Which new path for raising labour productivity?

By Bruno Ducoudré and Eric Heyer

The industrialized countries have been experiencing what seems to be a persistent slowdown in the growth of labour productivity since the second oil shock. This has been the subject of a great deal of analysis in the economic literature[1] that considers the possible disappearance of the growth potential of the developed economies, and consequently their inability to return to a level of activity in line with their pre-crisis trajectories. In other words, could the industrialized countries have entered a phase of “secular stagnation”, making it more difficult to reduce public and private debt? The exhaustion of productivity gains would also modify any diagnosis of their cyclical position, particularly as regards their labour markets.

Trend productivity gains are inherently unobservable; it is therefore necessary to decompose observed productivity into a trend component and a cyclical component that is linked to the more or less rapid adjustment of employment to changes in economic activity (the productivity cycle). In a recent study published in the Revue de l’OFCE, we seek to highlight the slowdown in trend productivity gains and the productivity cycle in six major developed countries (Germany, Spain, the United States, France, Italy and the United Kingdom) using an econometric method – the Kalman filter – so as to allow the estimation of an equation for labour demand based on explicit theoretical underpinnings and the estimation of trend productivity gains.

After reviewing the various possible explanations for the slowdown described in the economic literature, we present the theoretical modelling of the equation for labour demand and our strategy for an empirical estimation. This equation, derived from a CES-type production function[2], is based on the assumption of maximizing the profit of firms in monopolistic competition and on the assumption of a stable long-term capital-to-output ratio. This makes it possible to break down the trend and cyclical components in a single step, but makes productivity gains depend solely on labour[3].
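Schematically, and in our own notation rather than the study’s exact specification, such a framework delivers a long-run labour demand target and a gradual adjustment towards it:

$$\begin{aligned}
n^{*}_{t} &= y_{t} - \sigma\,(w_{t} - p_{t}) + (\sigma - 1)\,\theta_{t} + \text{constant},\\
\Delta n_{t} &= \lambda\,(n^{*}_{t-1} - n_{t-1}) + \text{short-run terms} + \varepsilon_{t},
\end{aligned}$$

where lower-case letters denote logarithms, $n$ is employment, $y$ output, $w - p$ the real cost of labour, $\sigma$ the elasticity of substitution and $\theta_t$ the unobserved labour-augmenting trend in technical progress. The slow adjustment of $n_t$ towards $n^{*}_t$ is what generates the productivity cycle.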

The existing empirical studies usually rely on a log-linear estimate of the productivity trend and introduce fixed-date trend breaks[4]. We propose an alternative method that consists of writing the employment equation in the form of a state-space model representing the underlying productivity trend. This model has the advantage of allowing a less bumpy depiction of trend productivity gains since it doesn’t rely on ad-hoc break dates.
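To illustrate the estimation idea, here is a minimal sketch in Python – not the authors’ code, and applied directly to simulated log productivity rather than embedded in the full employment equation – in which an unobserved-components model with a local linear trend and a stochastic cycle is estimated by the Kalman filter:

```python
# Minimal sketch: trend/cycle decomposition of log labour productivity with a
# state-space (unobserved components) model estimated by the Kalman filter.
# The data below are simulated purely for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 160                                              # 40 years of quarterly data
drift = np.linspace(0.006, 0.002, n)                 # trend growth slows over the sample
trend = np.cumsum(drift)
cycle = 0.01 * np.sin(np.arange(n) * 2 * np.pi / 24)
log_prod = pd.Series(trend + cycle + rng.normal(0, 0.002, n),
                     index=pd.period_range("1978Q1", periods=n, freq="Q"))

# Local linear trend (random-walk level with a time-varying slope) plus a damped
# stochastic cycle: the smoothed slope estimates trend productivity growth and
# the smoothed cycle plays the role of the productivity cycle.
model = sm.tsa.UnobservedComponents(log_prod, level="local linear trend",
                                    cycle=True, stochastic_cycle=True,
                                    damped_cycle=True)
results = model.fit(disp=False)

trend_growth = pd.Series(results.trend.smoothed, index=log_prod.index)
productivity_cycle = pd.Series(results.cycle.smoothed, index=log_prod.index)
print(trend_growth.tail())
print(productivity_cycle.tail())
```

Because the trend is treated as a smoothly evolving state rather than a broken log-linear line, no break dates have to be imposed.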

We then evaluate the new growth path for labour productivity and the productivity cycle for the six countries considered. Our results confirm the slowdown in trend productivity gains (Figure 1).

[Figure 1]

The growth rate of trend productivity in five countries (France, Germany, Italy, the United States and the United Kingdom) shows a slow decline since the 1990s. Trend productivity growth, estimated at 1.5% a year in the United States in the 1980s, increased during the 1990s with the wave of new technologies, then gradually decreased to 0.9% at the end of the period. For France, Italy and Germany, the catch-up stopped during the 1990s (during the 2000s for Spain), even though the slowdown in trend productivity gains was interrupted briefly between the mid-1990s and the early 2000s. Leaving aside Italy, whose estimated trend productivity gains were zero at the end of the period, the trend growth rates converged to a range of 0.8% to 1% a year.

The estimated productivity cycles are shown in Figure 2. They show the greatest fluctuations for France, Italy, Germany and the United Kingdom. A calculation of the average time for the adjustment of employment to demand indicates an adjustment period of 4 to 5 quarters for these countries. The cycle fluctuates much less for the United States and Spain, indicating that the speed of adjustment of employment to economic activity is faster in these two countries, which is confirmed by the average time of adjustment to demand (2 and 3 quarters respectively). Finally, the estimates indicate that the productivity cycle had closed in each of the countries considered by the second quarter of 2017.

[Figure 2]

[1] See, for example, A. Bergeaud, G. Cette and R. Lecat, 2016, “Productivity Trends in Advanced Countries between 1890 and 2012”, The Review of Income and Wealth, (62: 420-444) and N. Crafts and K. H. O’Rourke, 2013, “Twentieth Century Growth”, CEPR Discussion Papers.

[2] See C. Allard-Prigent, C. Audenis, K. Berger, N. Carnot, S. Duchêne and F. Pesin, 2002, “Présentation du modèle MESANGE”, French Ministère de l’Economie, des finances et de l’industrie, Forecasting Department, MINEFI, Working document.

[3] The equation for labour demand is based on a production function and an assumption of neutral technical progress in Harrod’s sense.

[4] See M. Cochard, G. Cornilleau and E. Heyer, 2010, “Les marchés du travail dans la crise” [Labour Markets in Crisis], Économie et Statistique, (438: 181-204) and B. Ducoudré and M. Plane, 2015, “Les demandes de facteurs de production en France” [The Demand for Production Factors in France], Revue de l’OFCE (142: 21-53).




No love lost for Chinese investors!

By Sarah Guillou

In a speech on 15 January 2017, France’s Minister of the Economy and Finance, Bruno Le Maire, spoke of “plundering investments”, suspecting Chinese investors of wanting to “loot” French technology. These statements place the Minister in the lineage of economic patriotism running from Colbert to Montebourg, but this time they are part of a broader movement of distrust and resistance to investment from China that is sweeping all the Western countries. And while the French government is planning to expand the scope of the decrees controlling foreign investment, many other countries are doing the same.

France is not the only country seeking to modify its legislation to reinforce the grounds for controlling foreign investors. The inflow of foreign capital has traditionally been viewed primarily as a contribution of financial resources and a sign of a territory’s attractiveness. France has always been well placed in international rankings in these terms. In 2015, France ranked eleventh in the world in terms of foreign direct investment inflows, with USD 43 billion, mainly from developed countries (compared with USD 31 billion for Germany and 20 billion for Italy). And since French resident investors invested USD 38 billion abroad (Germany and Italy, USD 14 and 25 billion respectively), the balance is in favour of productive capital inflows, which exceed outflows.

However, France has always distinguished itself by its greater political mistrust of foreign equity, especially when it comes to its “flagship” industries. But now this mistrust is being echoed in Western countries with regard to Chinese investors, and not only across the Atlantic where all the political actors have had to sing in tune with the economic patriotism of the Trump administration. Chinese investors are also perceived as predators by the Germans, the British, the Australians, and the Italians, to name just a few.

It must be said that China’s industrial strategy is very proactive, and the external growth strategies of Chinese businesses are being supported by a policy aimed at moving upmarket and acquiring technology by any means. Moreover, the presence of the State behind the investors – it is characteristic of China, given its communist past, to have private and public interests tightly interwoven and a strong State presence in the economy – creates potential conflicts of sovereignty. In addition, China is threatening more and more sectors in which the Western countries believed they had technological advantages, which is worrying governments (see the OFCE Policy Brief by S. Guillou, no. 31, 2018, “Faut-il s’inquiéter de la stratégie industrielle de la Chine?” [Should we worry about China’s industrial strategy?]). Finally, China is not exactly exemplary when it comes to taking in foreign investment, as it erects barriers and constraints that are often tied to technology transfer.

Western countries are reacting by increasing the scale of their controls: grounds touching on national security and public order are being supplemented by strategic technologies and the ownership of databases on citizens. In France, the Minister of the Economy, Bruno Le Maire, announced that he wanted to extend controls to the storage of digital data and to artificial intelligence. In Germany, the acquisition of Kuka, the manufacturer of industrial robots, by the Chinese firm Midea led to a strengthening of German controls, and in particular to the refusal of the purchase of the semiconductor equipment maker Aixtron.

In the United States, it was on the grounds of the acquisition of banking data that the planned purchase of MoneyGram by Ant Financial – an offshoot of Alibaba – very recently led the Committee on Foreign Investment in the United States (CFIUS) to issue a negative opinion. The European project to create a committee similar to the CFIUS has not yet come to fruition, and it does not have the support of all EU members, as some look kindly on Chinese investors.

This policy, while not coordinated, is at least common among the main recipients of Chinese investment; France is not alone in holding this position. Such unanimity within the Western camp is rare, but it also involves risks.

The first is isolationism: too many barriers lead to giving up partnership opportunities, which in some areas are increasingly unavoidable, as well as opportunities for strengthening Western companies. The second is the risk that equity bans will be circumvented by Chinese investors. Acquisitions are not always hostile, and companies that are being acquired are often ready for partnerships that can take other forms. Thus the failure of the merger of Alibaba with the American MoneyGram was offset by numerous agreements that the company sealed with European and American partners to facilitate the payments of Chinese tourists, in particular to allow the use of the Alipay payment platform. It will certainly seal a partnership of this type with MoneyGram. These partnerships lead to technology transfers and to sharing skills, or even data, without the counterpart of capital inflows. The third risk concerns the flow of Chinese capital into Asia and/or Africa, for example, allowing the capture of markets and resources that will handicap Western firms. Any Chinese capital available will have to be invested. The absence of Western partners will imply a loss of control and isolation that could be detrimental.

It is thus necessary to return to well-chosen but demanding controls, something absent from the dichotomous reasoning that prevailed in the Minister’s statements, if not from his intentions. As long as French technology is attractive, this should be celebrated, and the pluses and minuses of alliances need to be weighed. It will only be a matter of years before China’s technology becomes as attractive as France’s. And the Chinese will not fail to come and remind Mr. Le Maire of his position.

 




High-frequency trading and regulatory policies. A tale of market stability vs. market resilience

By Sandrine Jacob Leal and Mauro Napoletano

Over the past decades, high-frequency trading (HFT) has sharply increased in US and European markets. HFT represents a major challenge for regulatory authorities, partly because it encompasses a wide array of trading strategies (AFM, 2010; SEC, 2010), and partly because of the considerable uncertainty still surrounding the net benefits it brings to financial markets (Lattemann et al., 2012; ESMA, 2014; Aguilar, 2015). Furthermore, although HFT has been pointed to as one potential cause of extreme events like flash crashes, no consensus has yet emerged about the fundamental causes of these events. Some countries’ regulations already account for HFT,[1] but, so far, this has led to divergent approaches across markets and regions.

Overall, the above open issues call for a careful design of regulatory policies that could be effective in mitigating the negative effects of HFT and in preventing flash crashes and/or dampening their impact on markets. On these grounds, in a new research paper published in the Journal of Economic Behavior and Organization, we contribute to the debate about the regulatory responses to flash crashes and to the potential negative externalities of HFT by studying the impact of a set of policy measures in an agent-based model (ABM) in which flash crashes emerge endogenously. To this end, we extend the ABM developed in Jacob Leal et al. (2016) to allow for endogenous order cancellation by high-frequency (HF) traders, and we then use the model as a test-bed for a number of policy interventions directed at HFT. This model is particularly well suited to the exercise because, unlike existing works (e.g., Brewer et al., 2013), it is able to generate flash crashes endogenously as the result of the interactions between low- and high-frequency traders. Moreover, compared to the existing literature, we consider a broader set of policies of various natures. The list includes market-design policies (circuit breakers) as well as command-and-control (minimum resting times) and market-based (cancellation fees, a financial transaction tax) measures.
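To fix ideas, here is a toy sketch – our own illustrative Python code, not the model of Jacob Leal et al. (2016) – of how such policy levers might enter a high-frequency trader’s order-management rule in an agent-based model:

```python
# Toy illustration of HFT-targeted policy levers in an agent's order rules.
# Names, numbers and the decision rule itself are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Policy:
    min_resting_time: int = 0      # periods an order must stay in the book (command-and-control)
    cancellation_fee: float = 0.0  # fee paid when an order is withdrawn (market-based)
    transaction_tax: float = 0.0   # ad valorem tax on executed trades (market-based)


@dataclass
class LimitOrder:
    price: float
    quantity: float
    age: int = 0                   # periods since submission


def keeps_order(order: LimitOrder, gain_from_repositioning: float, policy: Policy) -> bool:
    """True if the HF trader leaves the order in the book: it can be cancelled only
    once the minimum resting time has elapsed and if repositioning it is still
    worthwhile after paying the cancellation fee."""
    if order.age < policy.min_resting_time:
        return True
    return gain_from_repositioning <= policy.cancellation_fee


def net_proceeds(order: LimitOrder, policy: Policy) -> float:
    """Proceeds of an executed sell order net of the financial transaction tax."""
    return order.price * order.quantity * (1.0 - policy.transaction_tax)


if __name__ == "__main__":
    policy = Policy(min_resting_time=5, cancellation_fee=0.01, transaction_tax=0.001)
    order = LimitOrder(price=100.0, quantity=1.0, age=2)
    print(keeps_order(order, gain_from_repositioning=0.05, policy=policy))  # True: too young to cancel
    print(net_proceeds(order, policy))                                      # 99.9
```

Each lever acts on a different margin: the resting time and the cancellation fee slow down order cancellation directly, while the transaction tax lowers the profitability of every executed trade.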

After checking the ability of the model to reproduce the main stylized facts of financial markets, we run extensive Monte Carlo experiments to test the effectiveness of the above set of policies, which have been proposed and implemented both in Europe and in the US to curb HFT and to prevent flash crashes.

Computer simulations show that slowing down high-frequency traders – by preventing them from frequently and rapidly cancelling their orders through the introduction of either minimum resting times or cancellation fees – has beneficial effects on market volatility and on the occurrence of flash crashes. Discouraging HFT via the introduction of a financial transaction tax produces similar outcomes, although the magnitude of the effects is smaller. All these policies impose a speed limit on trading and are valid tools for coping with volatility and the occurrence of flash crashes. This finding confirms the conjectures in Haldane (2011) about the need to tackle the “race to zero” of HF traders in order to improve financial stability. At the same time, we find that all these policies imply a longer duration of flash crashes, and thus a slower recovery of prices to normal levels. Furthermore, the results regarding the implementation of circuit breakers are mixed. On the one hand, the introduction of an ex-ante circuit breaker markedly reduces price volatility and completely removes flash crashes. This is simply because this type of regulatory design precludes the huge price drop that is the source of the flash crash. On the other hand, ex-post circuit breakers do not have any particular effect on market volatility or on the number of flash crashes; moreover, they increase the duration of flash crashes.
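The contrast between the two circuit-breaker designs can be summarised in the same spirit – again a toy reading of the mechanism, not the paper’s implementation:

```python
# Toy contrast between ex-ante and ex-post circuit breakers (illustrative only).
def ex_ante_halt(last_price: float, candidate_price: float, max_drop: float = 0.05) -> bool:
    """Ex-ante: trading is halted before a transaction that would push the price more
    than max_drop below the last price, so the large drop never materializes."""
    return candidate_price < last_price * (1.0 - max_drop)


def ex_post_halt(printed_prices: list, window: int = 10, max_drop: float = 0.05) -> bool:
    """Ex-post: trading is halted only after prices already printed over the last
    window periods reveal a drop larger than max_drop."""
    recent = printed_prices[-window:]
    return len(recent) >= 2 and recent[-1] < max(recent) * (1.0 - max_drop)
```

In the first case the crash never prints; in the second it does, and the halt then delays the return to normal prices, which is consistent with the longer flash-crash durations reported above.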

To sum up, our results indicate the presence of a fundamental trade-off characterizing HFT-targeted policies, namely one between market stability and market resilience. Policies that improve market stability – in terms of lower volatility and incidence of flash crashes – also imply a deterioration of market resilience – in terms of lower ability of the market price to quickly recover after a crash. This trade-off is explained by the dual role that HFT plays in the flash crash dynamics of our model. On the one hand, HFT is the source of flash crashes by occasionally creating large bid-ask spreads and concentrating orders on the sell side of the book. On the other hand, HFT plays a positive role in the recovery from the crash by contributing to quickly restore liquidity.

 

 

 

 

 

 

[1] Some unprecedented actions and investigations by local regulators were widely reported in the press (Le Figaro, 2011; Les Echos, 2011; 2014; Le Monde, 2013; Le Point, 2015).