2024-2025 World Economic Outlook: EUROPE TAKES OFF

OFCE Analysis and Forecasting Department, Éric Heyer (dir.) and Xavier Timbeau (dir.) [1]

This text summarizes the Outlook for the World Economy prepared in the spring of 2024 by the international team and published in French (OFCE Policy brief, no. 125). The analysis and forecast for the French economy in the 2024-2025 Outlook is published in a separate English version.



While the United States continues to escape the slowdown, the economic situation in the European countries remains depressed, widening the gap that has opened up since the start of the Covid crisis. Beyond differences in potential growth between countries, these divergences are notably linked to the impact of the energy crisis, which has been greater in Europe than in the US, and to the direction of fiscal policy since 2020. These gaps are not expected to narrow in the short term. Surveys and the first economic data available at the start of the year paint a contrasting picture across the major industrial countries, leading us to forecast a further contraction of Germany’s GDP in the first quarter (-0.2%) and slightly positive growth in the United Kingdom. At the same time, Spain and the United States are likely to stay on course in the short term.

In industrialized countries, particularly in Europe, growth is set to rebound to 1.7% in 2025. Activity would be supported by the easing of monetary policy: the convergence of inflation towards the 2% target would lead central banks to cut interest rates from mid-2024. Conversely, the level of budget deficits and public debt will lead many governments to take consolidation measures.

In emerging countries, growth will remain stable in 2024 and 2025. In China, growth should hold up despite the crisis in the real estate sector: economic indicators point to some acceleration in production, and we forecast annual growth of 4.7% in 2024. In India, activity would slow compared with 2023, rising by around 6.5%. In emerging Asian countries (excluding China), growth is expected to continue at the same pace as in 2023. In Latin America, we forecast a slowdown to 1.1% in 2024, before a rebound to 2% in 2025. Global growth would reach 2.8% in 2025, 0.2 point above its 2024 level.


[1] This analysis is based on the work of the international team, which is led by Christophe Blot and composed of Céline Antonin, Amel Falah, Sabine Le Bayon, Catherine Mathieu, Hervé Péléraux, Christine Rifflart, Benoît Williatte. The forecast is based on information available as of 5 April 2024.




Where does the European Union stand?

By Robert Boyer, Director of Studies at EHESS and the Institut des Amériques

Speech at the “European Political Economy and European Democracy” seminar on June 23, 2023, at Sciences Po Paris, as part of the ‘Théorie et Economie Politique de l’Europe’ seminar, organized by Cevipof and OFCE.



The aim of the first study day of the Theory and Political Economy of Europe seminar is to engage collectively in a work of overall theoretical reflection, following on from the thematic sessions of 2022 and continuing the multidisciplinary spirit of the seminar. The goal is to begin outlining the contours of the two major blocks, European political economy and European democracy, to identify the points of articulation between them, and to prepare for a multidisciplinary text written by several hands.

An apparent paradox

During the various and rich interventions pointing out the shortcomings, dilemmas, and contradictions that characterize the processes of European integration, a central question seems to emerge:

“How has a politico-economic regime in permanent disequilibrium, which has become very complex, been able, until now, to overcome a large number of crises, some of which threatened its very existence?”

A brief review of the current situation is enlightening and makes it more necessary to seek out the factors likely to explain this resilience, which never ceases to surprise researchers and specialists, foremost among them many economists. In the face of a succession and accumulation of poly-crises and rising uncertainties, is it reasonable to anticipate that the European Union (EU) will continue its current course, protected by the mobilization of the processes that have ensured its survival, not least thanks to the responsiveness demonstrated by both the European Central Bank (ECB) and the European Commission since 2011?

Baroque architecture full of inconsistencies

The various speakers highlighted many of them:

  • The European Parliament is a curiosity: it is an assembly with no fiscal powers. Would giving it this power be enough to restore the image of democracy on a European scale?
  • The EU issues a common debt even though it has no direct power of taxation: isn’t this a call for an embryonic federal state? Is there a political consensus on this path?
  • This debt corresponds to the financing of the Next Generation EU plan, which recognizes the need for solidarity with the most fragile countries, in response to a common “shock” that does not lend itself to the moral hazard so feared by the frugal countries of the North. Yet it is the result of an ambiguous compromise, with two opposing interpretations: an exception that must not be repeated for the North, and a founding, Hamiltonian moment for the South.
  • It is neither very functional nor very democratic for the European Parliament to vote on Community expenditure while national parliaments vote on revenue.
  • Does it make sense to have a multiannual program adopted by an outgoing assembly of the European Parliament, which will then be binding on the next one?
  • The ceiling set for the European budget limits the financing of European public goods, which should compensate for and go beyond the limitation on the supply of national public goods in the application of the criteria governing national public deficits and debts.
  • At the European level, the quest for more democracy tends to focus on the question of political control over the Commission and the ECB, whereas social democracy has in the past been a critical component in the legitimacy of governments at the national level.
  • The same applies to the question of corporate governance in Europe, a forgotten issue on the European agenda that is regaining a certain interest in the face of the transformations brought about by digital technology and the environment.
  • Competition policy is often perceived by economists as one of the Commission’s key instruments, since it is an integral part of the construction of the single market. Yet legal analysis shows that competition is not a categorical imperative, defined once and for all, but a functional concept that evolves over time. So much so that the Commission can declare that today it is at the service of the environment.
  • The Commission is usually criticized for its role as a defender of the acquis, its taste for excessive regulation, its technocratic approach, and its inertia. And yet, since 2011, it has continued to innovate in response to successive crises, to the point of having relaunched European integration.
  • The ECB was founded as the embodiment of an independent, typically conservative central bank, with a monetarist conception of inflation. And yet, without changing European treaties, the ECB has been able to innovate and effectively defend the Euro.
  • The EU Court of Justice and national constitutional courts do not have the same interests and legal conceptions, but so far, no head-on conflict has produced a blockage in European integration. Is this sustainable?
  • Is the distribution of competencies, fixed by the treaties and de facto adjusted as problems and crises arise, satisfactory and up to the challenges of the industry, the environment, public health, and solidarity in a dangerous and uncertain international environment?
  • The “European Constitution” is not a constitution, because integration has proceeded via a series of international treaties. How can we explain the fact that these treaties have been imposed when member countries could have coordinated through the OECD, EFTA, the IMF, or ad hoc agreements (European Space Agency, Airbus, Schengen) with no overall architecture?

Reasons for surprising resilience

We need to identify the factors that can account for the perseverance that lies at the heart of continental integration and ask ourselves whether they are sufficiently powerful to overcome the current multi-crises.

  • From the outset, the project was a political one, aimed at halting Europe’s decline in the wake of the two world wars. But in the absence of political agreement on a common defense, the coordination of economic reconstruction was seen as a means to this end. In this respect, Russia’s invasion of Ukraine has strengthened ties between governments, even if it means inverting the hierarchy between geopolitics and economics and bringing back to the forefront the possibility of Europe as a power.
  • Conflicts of interest between nation-states are at the root of a succession of crises, which are overcome by ad hoc compromises that never cease to create further imbalances and inconsistencies, which in turn lead to another crisis. In a way, the perception of incoherence and incompleteness is a recurring feature of European construction. However, the configuration can become so complex and difficult to understand that it can overwhelm the inventiveness of the collectives that are the various EU entities and their ability to coordinate. By way of example, a genuine EU macroeconomic theory has yet to be invented, and this is a major obstacle to the progress of integration.
  • European time is not homogeneous. Periods when new procedures are put in place after a breakthrough give the impression of bureaucratic, technocratic management at a distance from what citizens are experiencing. By contrast, open crises forbid the status quo, as the very existence of institutional construction is at stake, with the stratification of a large number of projects and their incorporation into European law. This experience of trial and error is the breeding ground that enables the Commission, for example, to devise solutions to emerging problems. As a result, the equivalent of an organic intellectual seems to have emerged from this collective learning over an extended period. This is one interpretation of the paradoxes mentioned above.
  • European Councils, the Court of Justice, the ECB, and the European Parliament all play their part in this movement, but it is undoubtedly the European Commission that in a sense represents the European, if not the general, interest. The fact that it has the power to initiate regulations and manage procedures gives it an advantage over other bodies. Indeed, many governments would be satisfied with inter-state negotiations, with no common ground to build on, and would go it alone. Failure to find a compromise solution would mean the simple disappearance of the EU. Similarly, without the “whatever it takes” approach, the ECB would have disappeared with the Euro. The major crises offer a strong incentive to move beyond dogmatic posturing in favor of a re-hierarchization of objectives and the invention of new instruments.
  • Finally, there are two sides to the proliferation of regulations, procedures, and European agencies attached to the Commission. On the one hand, they give rise to the diagnosis of poorly controlled management and the harsh judgments of defenders of national sovereignty. On the other hand, they are also factors in the reduction of uncertainty and the creation of regularities that coordinate expectations in a context where financial logic generates bubbles and macroeconomic instability.  In a way, a certain redundancy in a myriad of interventions is a guarantee of resilience. The European Stability Mechanism (ESM), for example, was a way of circumventing the ECB’s delay in recognizing the need for vigorous intervention. So the complexity of the EU can also mean redundancy and resilience.
  • Political power plays a crucial role in the development of European institutions. It intervenes in the framework of councils and summits. So far, in the national political arena, governments favoring further integration have prevailed: this is sometimes one of the only markers of their policy that survives the various periods. As a result, a collapse of the EU could mean the loss of their credibility. It would be dramatic for a government to be held responsible for the failure of a project that has been built up over decades. This is perhaps a hidden source of the permanence of European institutions. What is more, “Brexit”, far from marking the end of the EU, has rather closed ranks, especially as the expected benefits for the UK have not materialized. Beware, however, that the polarization and division of societies between the winners and losers of transnationalization has favored the breakthrough of parties defending strong national sovereignty, a countertrend that rules out assuming a lasting hegemony of pro-European parties.
  • Finally, the succession of financial crises, the return of pandemics, the harshness of the confrontation – not only economic – between the United States and China, the growing awareness of the environmental emergency, and the installation of a new inflation generated by recurring scarcities, which risks being aggravated by the transition to a war economy, are all factors in a dual awareness. On the one hand, common interests tend to outweigh disagreements between member countries. On the other hand, each of them carries little weight in the confrontation with the United States, which has become openly protectionist, and China, with its dynamism in emerging productive paradigms. The EU needs to be a geo-economic and political player in its own right. This explains the Commission’s activism since Covid-19. Citizens have benefited from this new impetus, with a common strategy on vaccines, for example. For their part, the governments of the most fragile economies have benefited from European solidarity, which has counterbalanced the principle of regional competition.    

Historical bifurcation, polycentric governance, or nationalist withdrawal?

The processes described above can recombine to form a wide variety of trajectories. Prediction is not possible, as it is the strategic interactions between collective actors that will determine how to overcome the EU’s various crises. It is possible to imagine three more or less coherent scenarios.

  • Towards an original federalism disguised by a myriad of technical coordination procedures

This first scenario is based on three central assumptions. Firstly, it marks the end of reliance on neo-functionalism, whereby governments must be the servants of the necessities imposed by economic interdependence between nation-states (figure 1). The sphere of politics pursues its objectives, even if governments must contend with economic logic. Secondly, it draws the consequences of technological, geopolitical, health, and environmental transformations that threaten the stability of societies and the viability of their socio-economic regimes. Pooling resources increases the chances of success for all participants in European programs. Finally, this first scenario extends the trends already observed since the outbreak of the pandemic.

Insofar as the word federalism has a repulsive effect on public opinion, which is influenced by populist nationalism, the practice of enhanced cooperation does not have to be accompanied by an appeal to the federalist ideal. Instead, skillful rhetoric must convince citizens that the EU ensures their protection and opens up new common goods. These advances in no way subtract from the social, economic, and political rights guaranteed at the national level. Charismatic politicians must be able to resist anti-EU rhetoric that feeds on the relative powerlessness of national authorities overwhelmed by transnational forces beyond their control.

  • Adapting polycentric governance at the margins, far from a Europe of power

This second scenario, on the other hand, assumes that the current period will be one of continuity with the long-term trajectory of European integration. The polycentrism of EU entities is a vector of pragmatic adaptability to emerging issues, without the need to centralize power in Brussels, as suggested by the diversity of European agency locations. Trial and error, the multiplication of ad hoc procedures, and the possible use of enhanced cooperation on issues involving a fraction of member countries are all sources of adaptation in the face of the repetition of events potentially unfavorable to the EU.

This scenario takes into account the fact that negotiating new European treaties seems a perilous mission, that public opinion judges the EU on the basis of its contribution to the well-being of its populations rather than the transparency and coherence of its governance, and that an imperial conception is illusory. One might be tempted to invoke a form of catallaxy applied not to the economy and the market, but to the political sphere: the interaction of highly varied processes, without central authority, eventually leads to a roughly and provisionally viable configuration. The English expression “muddling through” aptly captures this pragmatism, marked by the renunciation by public decision-makers of the need to spell out an objective and a goal, if only to persevere in being.

Success is not guaranteed. Firstly, past successes are no guarantee of their continuation into the future. Secondly, there is no guarantee that a pragmatic solution will be found in the face of an avalanche of unfavorable events since the affirmation of an objective may prove to be a necessary condition for lifting the prevailing uncertainty as to the outcome of both institutional and economic crises. Last but not least, how can we politically legitimize an order whose logic and nature elude decision-makers? Isn’t this powerlessness the breeding ground for populist voluntarism?

  • National and European elections: a nationalist majority redesigns a different Europe

This third scenario is based on an analysis of changes in the objectives of governments following recent elections in Europe. Both in the South (Italy) and in the Scandinavian countries (Finland, Sweden, Denmark), coalitions have come to power dominated by parties opposed to immigration, defenders of national identity and, in short, reluctant to delegate new powers to the EU. In this, they join the authoritarian, nationalist governments of Central Europe (Hungary, Poland). In the European Parliament elections of 2024, could this movement result in the loss of a majority in favor of the EU’s current policies, to the benefit of a new majority bringing together nationalist parties that are very diverse but share the same obsession: to block the extension of EU competences and repatriate as many of them as possible to the national level?

Russia’s war against Ukraine has brought the imperative of defense to the fore, an area in which the EU has made little progress. Does this not mean that NATO is becoming central to the political organization of the old continent, to the detriment of the economic objectives pursued by European integration?

These hypotheses, derived from the 23 June 2023 CEVIPOF and OFCE meeting, call for a follow-up, as the questions still to be clarified are numerous and difficult indeed. Cross-disciplinary analysis is more necessary than ever.




A second Hamiltonian moment

By Hubert Kempf

In the European debate surrounding the Next Generation EU plan, the European Commission’s decision in 2020 to issue debt for the benefit of the Member States is often compared to the decision taken by the US federal government in 1790, under the impetus of Treasury Secretary Alexander Hamilton, not only to honor the outstanding federal debt but also to assume the debts of the federated states. This comparison is specious.  Hamilton’s financial policy went hand in hand with the ability to raise the taxes needed to service the debt, made possible by the use of military force. This is in stark contrast to the situation in the European Union, where the Commission has no coercive powers whatsoever.



The European Council’s decision (of 21 July 2020, confirmed on 14 December 2020) to authorize the European Commission to respond to the crisis opened up by the Covid-19 pandemic with a €750 billion debt issuance program in order to lend at low rates or make unrequited transfers to the Member States represents a political and economic innovation that should not be underestimated or ignored. Many commentators have hailed it as the “Hamiltonian moment” of the European Union. The expression was coined in 2011 by Paul Volcker, former Chairman of the US Federal Reserve (1979-1987) and then Chairman of the Economic Recovery Advisory Board appointed by Barack Obama. Referring to the situation in Europe, Paul Volcker said: “Europe is at an Alexander Hamilton moment, but there’s no Alexander Hamilton in sight”[1].

The expression has become popular and has been used by many commentators, journalists and politicians. It refers to the budgetary and fiscal policy proposed, negotiated and implemented by Alexander Hamilton in 1790[2]. Appointed by George Washington as Secretary of the Treasury on 11 September 1789, after Congress had created the post on 2 September, Hamilton immediately set about drafting a report that became a landmark in American history. In this report[3],  Hamilton proposed not to default on the outstanding federal debt, to apply the same treatment to all holders of federal debt securities, regardless of when they were acquired, and to transfer the outstanding debts of the federated states to the federal government. 

However, experts have debated the relevance of the parallel drawn between the decision on federal public finances taken by the American Congress in 1790 and the announcements made by the European Commission in 2020. They conclude that the programs and circumstances differ so substantially as to render this parallel[4] meaningless. These discussions, centered on economic considerations, are useful. But they miss the critical point: the political impact of these acts.

No one disputes the importance of Alexander Hamilton’s fiscal and financial policy in American political history. For three reasons:

1/ an immediate and spectacular recovery in the creditworthiness of the US federal and state governments on the international financial markets;

2/ the structuring of the American political debate between the Federalists and the Republicans at the time, which continues today where references to the Hamiltonian and Jeffersonian traditions are still very much alive[5] ;

3/ Hamilton’s intellectual power, which led him to develop an analysis of the workings of the financial markets that was far ahead of its time[6].

As for the significance to be attached to the announcements by the European authorities, at the risk of being contradicted by future developments, let us say that it is relevant to see in these announcements an obvious innovation: it is now openly accepted by all the countries of the European Union that the European Commission can exercise significant budgetary powers in the event of exceptional circumstances (without any precise definition of what exceptional circumstances are). What’s more, the principle of conditionality for aid granted to Member States is also endorsed by the European Council, which clearly puts the European Commission in the position of an umpire and gives it discretionary power over Member States. But these developments are more of an expedient, and do not result in any change in the institutional relationship of power between the Member States and the Union’s bodies (the European authorities).

From this perspective, it is reasonable to refer to the Hamiltonian moment of 1790 in order to assess how innovative the 2020 decision is. In both cases, there is a budgetary decision that modifies the financial relationships between the member jurisdictions of the unions. More specifically, the federal level in the case of the United States, and the supra-state level in the case of Europe, assume responsibilities that were or could have been the responsibility of the federated or national Treasuries of the union. It is clear that this advance may involve a major, if not radical, change in the political relations between jurisdictions.

But this point of comparison alone is not enough. If Hamilton’s fiscal and financial program has been the undisputed success that it is acknowledged to be, this is due neither solely to the passage of the law nor to its translation into complex financial regulations.

To understand this, we need to single out a second “Hamiltonian moment”. This moment took place in 1794, during the “whiskey rebellion” that shook the west of the 13 American states that then made up the United States[7].

This rebellion[8] stems from the law passed by Congress in 1789 stipulating that excise duties could be levied by the federal state. Note immediately the difference with the European case: as soon as the Constitution had been adopted (after its ratification by 9 of the 13 American states), the first Congress exercised the right to levy taxes granted to it by the Constitution, unlike what was provided for in the Articles of Confederation. This right is not available to the European Parliament, let alone the Commission. As early as 1790, Hamilton proposed levying a tax on whiskey. This was a logical choice: whiskey was an ideal product to tax at a time when communication routes were difficult and trade within the Union was limited. A non-perishable and transportable product, it concentrated in a small volume a large but perishable agricultural output and was easy to trade. It was also easy to control, and therefore to tax, because there were few crossing points. But its production was concentrated in a few counties in the western part of a few states, whereas it was consumed throughout the country. The proposed tax was therefore seen by whiskey producers as a major discrimination against them, since they would be the only ones to bear it for the benefit of the entire Union. Congress, aware of the problem thus created, refused to pass the law. It did, however, pass it the following year, a year after the law on the regularization of public debts, in view of the need to fill the federal government’s coffers, in particular to assume the burden of the federal debt increased by its decision of 1790.

It wasn’t long before unrest began to take hold from 1791 onwards, especially in the western counties of Pennsylvania, encouraged by opponents of the Federalist party led by Hamilton. The tensions soon became a political issue, pitting the Federalists, supporters of a strong, interventionist state controlled by the social and educated elites, against the Anti-Federalists, who were to form the core of the Republican party led by Jefferson. The Federalists, then in power, felt that the authority of the (federal) state was in question and that this was a prodrome of the return to the anarchy that prevailed before the vote on the Constitution of 1789. According to Hamilton, it was becoming urgent to take action against the rebels, but George Washington, the President and, as such, head of the army, delayed.

In August 1794, the refusal of the tax led almost 6,000 armed opponents to mobilize. They were soon on the point of taking control of Pittsburgh. After yet another failed attempt at conciliation, Washington decided to take military action against the rebels. He ordered the raising of 14,000 militiamen from New Jersey, Maryland, Virginia and Pennsylvania. Faced with such a deployment of force (larger than the continental army that had held out against the British), the rebellion immediately collapsed. The rebels dispersed. The leaders were arrested and put on trial. Two were sentenced to hang but were eventually pardoned by Washington. The conclusion of the affair was drawn by Hamilton: “The insurrection in the end will have benefited us and added to the solidity of everything in this country”[9]. This was particularly true for the financial soundness of the federal state.

This second moment sheds light on the first moment of 1790, that of the drafting of Hamilton’s report and the adoption of the law he submitted to Congress. There were two reasons for the speed and determination with which Hamilton conceived his budgetary and financial policy, in addition to the catastrophic financial situation in which the young republic found itself, its credit then at an all-time low. The first, acknowledged by historians, financial professionals and politicians alike, was his expertise in these matters, exceptional for the time, which led him to devise a bold and complex plan. This plan was little, if at all, understood by his contemporaries and in particular his opponents, led by John Adams and Thomas Jefferson, but it is easily understood today when it is recognized that financial credibility (defined as the temporal coherence of a debt plan) is a central element in the determination of interest rates. The second, just as important, is that Hamilton was confident in the capacity of the federal state to raise the tax revenues needed to service the debt. This implies being able to levy taxes effectively. Hamilton had been a brilliant officer in the War of Independence, noted by Washington for his bravery and military intelligence, so much so that Washington made him his aide-de-camp and was thus able to gauge his intellectual, political and military qualities. Hamilton knew the power of guns[10] as well as the weight of words. In the face of the tax rebellion, he did not hesitate to advocate the exercise of the federal government’s monopoly on legitimate violence and convinced the President to quell the rebellion in the West.

This second moment at the end of the eighteenth century is exemplary of the ability of the nascent American federal state to balance its budget, to service its debt, even when augmented by the debts of the states, and thus to avoid default. Without this ability, it is doubtful that the stroke of genius attempted by Hamilton in 1790 would have been so successful.

This episode cruelly highlights the difference with the European situation in the 2020s. At no time, and for good reason, did the President of the Commission clearly mention how the debt issued would be repaid. A fortiori, she was unable to declare that the European Union would levy taxes (it does not have the power to do so) or that she would, if necessary, mobilize the means of coercion and constraint on recalcitrant Europeans, since she has none at her disposal.

It is easy to understand why European “federalists” (so designating supporters of strong supranational European institutions, for want of a better name) have seized on the expression “Hamiltonian moment” to describe the European Commission’s adoption of its recovery plan. Placing itself under the prestigious patronage of Hamilton, and comparing this plan with the proposals made to Congress in 1790 and brilliantly defended by Hamilton, makes it possible to suggest that the European Union, more than two centuries apart, is following a fairly similar path to that taken by the American republic, namely the gradual but obstinate constitution of a federation, a hierarchical inter-governmental entity dominated by the federal state. But this is to take too much liberty with history and to pay more lip service to it than to reality.

The history of the American union is very different from that of the European Union. The American union was born in 1787-1789 from the realization that the confederation born in 1776 was failing, due to the inability of the American states to cooperate effectively. From the outset, it was characterized by a desire for the pre-eminence of the federal state. It certainly took time for the federal state to establish itself and realize its full potential. The relationship between the federal state and the federated states is always subject to change. We are currently witnessing a wave of promotion of the federated states, in particular by the current Supreme Court. But such movements are not new and do not significantly alter the political, social and economic dominance of the federal state[11]. This should come as no surprise: this pre-eminence is enshrined in the founding texts of the American republic and can be seen in the political twists and turns of its early years, as is clearly demonstrated by the policies sought and promoted by its most brilliant and effective leader, Alexander Hamilton. This is clearly shown by the two ‘Hamiltonian moments’ of the 1790s, which cannot be thought of in isolation from each other. The first Hamiltonian moment invites us to compare Hamilton’s American fiscal policy at the end of the eighteenth century with the European announcements of 2020 in response to the Covid-19 pandemic. The second Hamiltonian moment, however, makes it easier to see the differences between the two sequences, and illustrates how American federalism is not a prefiguration of developments in the European Union. The early years of the American republic, far from highlighting a congruence between American destiny and European trial and error, instead show their marked differences. The construction of Europe had nothing to do with the founding of the United States and did not follow the federalist path taken by the latter.

In short, one moment is not enough to make history. European leaders and citizens would be well advised not to forget this lesson from the early days of the American federation.


[1] See Wheatley (2012), “Analysis: What Europe can learn from Alexander Hamilton”. Reuters.

[2] The reference biography on Hamilton is Chernow, Ron (2005), Alexander Hamilton, Penguin Books.

[3] Hamilton, Alexander (1790), Report Relative to a Provision for the Support of Public Credit, U.S. Treasury Department

[4]See in particular the very detailed contribution by Elie Cohen (2020). See also Issing (2020) and Gheorghiu (2022).

[5] See Banning, Lance (1980), The Jeffersonian persuasion: Evolution of a party ideology. Cornell University Press.

[6] Thomas Sargent (2012) has no difficulty in interpreting Hamilton’s thinking and actions in the terms of the most recent economic theory, born of the rational expectations revolution and the notion of temporal coherence.

[7]See Krom and Krom (2013).

[8]We follow the developments dedicated to the rebellion by Gordon S. Wood (2009), Empire of liberty. A history of the Early Republic, 1789-1815, Oxford University Press, pp. 134-139.

[9]Alexander Hamilton to Angelica Church, 23 October 1794, Papers of Alexander Hamilton, Vol. 17, p.340, quoted in Wood (2009), p.138.

[10]“Ultima ratio regum”, as others before him had claimed.

[11]I was already arguing this in the 1980s, proving that the issue is not new. Cf. Kempf and Toinet (1980).




Inequality and macroeconomic models

By Stéphane Auray and Aurélien Eyquem

“All models are wrong, but some are useful.” This quote from George Box has often been used to justify the simplistic assumptions made in macroeconomic models. One of these has long been criticised: the assumption that the behaviour of households, although they differ (are heterogeneous) in their individual characteristics (age, profession, gender, income, wealth, state of health, labour market status), can be approximated at the macroeconomic level by that of a so-called “representative” agent. This representative-agent assumption means considering that the heterogeneity of agents and the resulting inequalities are of little importance for aggregate fluctuations.



Economists are not blind – they are well aware that households, companies and banks are not all identical. Many studies have looked at the effects of household heterogeneity on aggregate savings and, consequently, on macroeconomic fluctuations[1]. Others propose so-called “overlapping generations” models in which age plays an important role[2].

Most often, households in these models move from one state to another (from employment to unemployment, from one level of skills and therefore of income to another, from one age to another) and the probabilities of a transition are known. In the absence of insurance mechanisms (unemployment, redistribution, health), the expected risk of a transition produces an expected risk of income or health, which leads agents to save in order to insure themselves. Furthermore, differences in savings and consumption behaviour are also likely to lead to differences in labour supply behaviour. Finally, changes in the macroeconomic environment (changes in the unemployment rate, interest rates, wages, taxes and contributions, public spending, insurance schemes) potentially affect these individual probabilities and the resulting microeconomic behaviour. Aggregate risks therefore affect each household differently, depending on its characteristics, generating general equilibrium and redistributive effects. However, this relatively old work has come up against two obstacles.

The first is technical: tracking the evolution of the distribution of agents over time is mathematically complex. It is of course possible to reduce the extent of the heterogeneity by limiting ourselves to two agents (or two types of agent): those with access to the financial markets and those who are forced to consume their income at each period[3], working people and pensioners, etc. But while these simplified models make it possible to understand and validate broad intuitions, they are still limited, particularly from an empirical point of view. They do not, for example, allow us to carry out a realistic study of changes in inequality across the entire distribution of income or wealth.
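To make the building block of these models concrete, the sketch below (purely illustrative, with made-up parameter values) solves the saving problem of a single household facing a two-state Markov income process and a borrowing constraint, by value function iteration. The household saves even when employed, precisely to self-insure against the risk of falling into the low-income state; the technical difficulty mentioned above arises when the cross-sectional distribution of assets generated by such decision rules must itself be tracked over time.

```python
# Minimal Bewley/Aiyagari-style household problem: two-state Markov income risk,
# a borrowing constraint and precautionary saving, solved by value function
# iteration. All parameter values are illustrative, not taken from the text.
import numpy as np

beta, gamma, r = 0.96, 2.0, 0.02            # discount factor, risk aversion, real interest rate
y = np.array([0.5, 1.0])                     # income when "unemployed" and "employed"
P = np.array([[0.5, 0.5],                    # transition probabilities between the two states
              [0.1, 0.9]])
a_grid = np.linspace(0.0, 20.0, 400)         # asset grid; a >= 0 is the borrowing constraint

def u(c):
    c = np.maximum(c, 1e-10)                 # heavy penalty for infeasible consumption
    return c ** (1 - gamma) / (1 - gamma)

V = np.zeros((2, a_grid.size))               # value function, by income state and assets
policy = np.zeros((2, a_grid.size), dtype=int)

for _ in range(2000):                        # value function iteration
    EV = P @ V                               # expected continuation value given today's state
    V_new = np.empty_like(V)
    for s in range(2):
        # consumption for every (assets today, assets tomorrow) pair
        c = (1 + r) * a_grid[:, None] + y[s] - a_grid[None, :]
        candidates = u(c) + beta * EV[s][None, :]
        policy[s] = candidates.argmax(axis=1)
        V_new[s] = candidates.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-8:
        break
    V = V_new

# An employed household with no assets still saves, purely to insure against
# the risk of becoming unemployed: the precautionary motive described above.
print("assets chosen by an employed household starting with zero wealth:",
      a_grid[policy[1, 0]])
```

Heterogeneous-agent models embed a whole distribution of such decision rules and aggregate them period after period, which is what made them numerically demanding before the tools discussed below became available.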

The second obstacle is more profound: several of these studies have concluded that models with heterogeneous agents, although much more complex to manipulate, did not perform significantly better than models with representative agents in terms of aggregate macroeconomic validation (Krusell and Smith, 1998). Admittedly, they were not aiming to study changes in inequality or the macroeconomic impact, but rather the contribution of agent heterogeneity to aggregate dynamics. In fact, the subject of inequality has long been considered to be almost or fully orthogonal to macroeconomic analysis (at least when considering fluctuations) and to fall more within the remit of labour economics, microeconomics or collective choice theory. As a result, heterogeneous agent models have long suffered from the image of being an unnecessarily complex subject in the macroeconomic analysis of fluctuations.

In recent years, these models have undergone an exceptional revival, to the point where they seem to be becoming the standard for macroeconomic analysis. The first obstacle has been overcome by an exponential increase in the computing power used to solve and simulate these models, combined with the development of powerful mathematical tools that render their solution easier (Achdou et al., 2022). The second obstacle has been overcome by the three-pronged movement that we describe below: the growing body of work (particularly empirical work) demonstrating the importance of income and wealth inequalities for issues typically addressed by macroeconomics – over and above their intrinsic interest; the development of tools for measuring inequalities that make it possible to reconcile them with macroeconomic analysis; and the refinement of the assumptions made in models with heterogeneous agents.

First, numerous empirical studies show that precautionary savings plays a major role in macroeconomic fluctuations (Gourinchas and Parker, 2001). But precautionary savings and the sensitivity of savings (and household spending) to income are not identical for all households. Indeed, empirical work suggests that the aggregate marginal propensity to consume (MPC) lies between 15% and 25% (Jappelli and Pistaferri, 2010), and that the MPC of a large proportion of the population is higher than the MPC obtained in representative agent models or at the top of the wealth distribution, which is approximately equal to the real interest rate and therefore much lower than the empirical estimates (see Kaplan and Violante, 2022). It is therefore critical to understand the origin of a high aggregate MPC based on solid microeconomic foundations, particularly if we wish to carry out a realistic study of the impact of macroeconomic policies (monetary, fiscal, etc.) that rely on multiplier effects linked to the distribution of MPCs.
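To see the order of magnitude behind this gap, recall the annuity logic of the standard permanent-income, representative-agent benchmark: a purely transitory windfall Δy raises consumption only by its annuity value, so the MPC is roughly r/(1+r), i.e. close to the real interest rate. With a real interest rate of 2%, this gives an MPC of about 0.02, an order of magnitude below the 15-25% range measured in household data.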

In recent years, an abundant and increasingly well-developed empirical literature has been dealing with issues relating to income inequality. Following the seminal article by Atkinson (1970) along with more recent developments[4], we now have long data series that measure income inequality before and after tax, along with wealth inequality, across the entire household distribution for a large number of countries. Finally, what are known as Distributional National Accounts make it possible to compare in great detail the predictions of macroeconomic models using heterogeneous agents with microeconomic data that are totally consistent with the framework of macroeconomic analysis.

Finally, the heterogeneous agent models themselves have evolved. The “first generation” models generally considered a single asset (physical capital, in other words, company shares) and prevented agents from taking on debt, which led them to save for precautionary reasons. These hypotheses were not able to explain why MPCs were high, and they failed to correctly replicate the observed distribution of income and, above all, of wealth. In reality, households have access to several assets (liquid savings, housing, equities), and the composition of their wealth differs greatly depending on the level of wealth: households generally start saving in liquid form, then invest their savings in property by taking out bank loans, and finally diversify their savings (only for those with the greatest wealth, above the 60th percentile of the wealth distribution) by buying shares (Auray, Eyquem, Goupille-Lebret and Garbinti, 2023). In doing so, a large proportion of the population ends up in debt in order to build up property wealth, which is thus not very liquid. Although some of these households have relatively high incomes, they consume almost all of their income, which reduces their capacity for self-insurance through savings. This increases their MPC (and therefore the aggregate MPC), in line with empirical observations (Kaplan, Violante and Weidner, 2014).
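The mechanism can be illustrated with a simple classification exercise in the spirit of Kaplan, Violante and Weidner (2014). The snippet below uses synthetic data and an illustrative cut-off (liquid wealth below half a month of income) to split households into poor hand-to-mouth, wealthy hand-to-mouth (little liquid wealth but positive illiquid wealth, typically housing) and unconstrained groups; none of the numbers are estimates for France.

```python
# Illustrative "hand-to-mouth" classification in the spirit of Kaplan, Violante
# and Weidner (2014). All data are synthetic and the threshold is a convention,
# not an estimate.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
monthly_income = rng.lognormal(mean=7.8, sigma=0.5, size=n)      # euros per month (synthetic)
liquid_wealth = rng.lognormal(mean=6.0, sigma=1.5, size=n)       # deposits, cash (synthetic)
owns_illiquid = rng.binomial(1, 0.6, size=n).astype(bool)        # housing / pension wealth holders
illiquid_wealth = np.where(owns_illiquid,
                           rng.lognormal(mean=11.0, sigma=1.0, size=n), 0.0)

# Convention used here: a household is hand-to-mouth if its liquid wealth cannot
# cover half a month of income, i.e. it cannot smooth even a small transitory shock.
htm = liquid_wealth < 0.5 * monthly_income
wealthy_htm = htm & (illiquid_wealth > 0)
poor_htm = htm & (illiquid_wealth == 0)

print(f"hand-to-mouth share: {htm.mean():.1%}")
print(f"  wealthy hand-to-mouth: {wealthy_htm.mean():.1%}")
print(f"  poor hand-to-mouth:    {poor_htm.mean():.1%}")
# Hand-to-mouth households consume most of any extra euro they receive, so the
# larger this group, the higher the aggregate MPC relative to the
# representative-agent benchmark.
```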

Macroeconomists can now fully integrate the analysis of inequalities in income, wealth and health into models based on more realistic microeconomic behaviour. They can re-examine the consensus reached on the conduct of monetary[5] or fiscal[6] policies and examine their redistributive effects. They are also in a position to quantify the aggregate and redistributive effects of trade or environmental policies, which are or will be at the heart of their political acceptability – giving rise to new horizons for less wrong, more useful models.

[1] See in particular Bewley (1977), Campbell and Mankiw (1991), Aiyagari (1994), Krusell and Smith (1998), Castaneda, Diaz-Gimenez and Rios-Rull (1998).

[2] See the work of Allais (1947) and Samuelson (1958), and among others De Nardi (2004).

[3] See Campbell and Mankiw (1989); Bilbiie and Straub (2004); Gali, Lopez-Salido and Valles (2007).

[4] See (2001, 2003), Piketty and Saez (2003, 2006), Atkinson, Piketty and Saez (2011), Piketty, Saez and Zucman (2018) and Alvaredo et al. (2020).

[5] Kaplan, Moll and Violante (2018); Auclert (2019); Le Grand, Martin-Baillon and Ragot (2023).

[6] Heathcote (2005); Le Grand and Ragot (2022); Bayer, Born and Luetticke (2020).   




War in Ukraine: What short-term effects on the French economy?

By Xavier Ragot, with contributions from Céline Antonin, Elliot Aurissergues, Christophe Blot, Eric Heyer, Paul Malliet, Mathieu Plane, Raoul Sampognaro, Xavier Timbeau, Grégory Verdugo.

The purpose of this analysis is to open up discussion about how the war in Ukraine will affect the French economy. Such an assessment is of course uncertain, as it requires a forecast of diplomatic and military developments and in particular involves critical assumptions about sanctions and economic policy responses.

If consequences that are deemed negative are identified, this should not be read as a criticism of these policy choices, but rather as a contribution to how best to limit their negative impacts.

This document is intended as a summary and refers to relevant work for further consideration. Ongoing study will clarify the analyses and the relevant calculations.

The war in Ukraine will affect the French economy through eleven different channels.



I – The economic shock: Short-term effects

1) The first effect is of course on France’s energy bill

Increases in the price of gas and oil will reduce the purchasing power of French households and raise production costs for business. The gas price is the first unknown. The average daily price in 2019 was €14.6/MWh, before falling to €9.6/MWh in 2020 due to the pandemic. The price per MWh reached €210 on 10 March 2022!  This high level will not last. A level of €100/MWh is a realistic assumption, which would constitute a six-fold increase in price from 2019. Second, the higher gas prices will not be passed on to households immediately, because many contracts have expired (Antonin, 2022) and the government will wind up bearing part of the energy bill through the regulation of gas prices. However, the price increase on imports will be paid by domestic agents.

France imported 632 TWh of gas in 2019 and 533 TWh in 2020, as the pandemic slowed activity. But what counts most are net imports, which are lower. The cost of net gas imports in 2019 was €8.6 billion. Imports in 2022 will be affected by a possible economic slowdown but also by the level of gas stocks. For 2022, a working hypothesis could start from the level of net imports in 2019. Applying an increase of €85/MWh, this results in an additional cost of around €40 billion if the increase were to last one year. If the higher price were to last longer, it would generate substitution effects in the medium term, as discussed below.

The price of oil is equally difficult to predict, as it depends on the behaviour of strategic players, such as OPEC. The price of a barrel of Brent crude fluctuated between USD 60 and USD 70 in 2019. It rose to USD 133 on 8 March, before falling back to USD 114 after OPEC announced a boost in production. The price of oil will, much like gas, depend on the sanctions on Russia; Russian crude represented around 10% of France’s purchases in 2020 and in 2019 constituted about 4.8% of the world’s known reserves. We could assume an average price of 110 dollars (or 100 euros, which is consistent with the EIA analysis). In 2019, France’s crude oil bill was €21.8 billion, to which must be added €13.3 billion of refined products. Assuming unchanged demand and using these same amounts, we end up with a total oil bill of 58.5 billion euros, i.e. an extra cost of 24 billion euros. The euro/dollar exchange rate could also fluctuate during the crisis, with a probable depreciation of the euro that is difficult to estimate at present. As a result, a constant exchange rate of 1.10 dollars per euro is assumed.
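As a rough check, these orders of magnitude can be reproduced from the figures quoted above; the only number backed out rather than quoted is the net gas import volume implied by the €40 billion estimate.

```python
# Back-of-the-envelope check of the extra energy bill, using only the figures
# quoted in the text. The net gas import volume is backed out from the EUR 40 bn
# estimate and is therefore an approximation, not an official statistic.

# Oil: the 2019 bill scaled by the assumed price increase, at constant volumes.
oil_bill_2019 = 21.8 + 13.3          # EUR bn: crude plus refined products in 2019
oil_price_2019 = 66.0                # USD/bbl; a reference point within the 60-70 range quoted
oil_price_assumed = 110.0            # USD/bbl working assumption (about EUR 100 at 1.10 $/EUR)
oil_bill_2022 = oil_bill_2019 * oil_price_assumed / oil_price_2019
print(f"oil bill 2022 ~ {oil_bill_2022:.1f} bn EUR "
      f"(extra cost ~ {oil_bill_2022 - oil_bill_2019:.1f} bn EUR)")   # ~58.5 and ~24 bn

# Gas: extra cost = net imports (TWh) x price increase (EUR/MWh) / 1000.
gas_price_increase = 85.0            # EUR/MWh: 100 assumed for 2022 vs ~15 in 2019
extra_gas_cost = 40.0                # EUR bn, the figure quoted in the text
implied_net_imports_twh = extra_gas_cost * 1e3 / gas_price_increase
print(f"implied net gas imports ~ {implied_net_imports_twh:.0f} TWh "
      f"(vs 632 TWh of gross imports in 2019)")                       # ~470 TWh
```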

This increase will necessarily generate moves towards import substitution and reduction. These effects have been studied for the German economy (with references to the relevant measures) by Bachmann et al. (2022), who focus only on substitution effects. Using the literature (Labandeira et al., 2017), they assume an elasticity of -0.2. In the case of a reduction in the quantity of gas and oil, how much residual capacity do firms have to produce? The answer to this question depends on assumptions about the extent to which energy can be substituted by other factors. Depending on these assumptions, all of which are realistic, the estimate for Germany ranges from 0.7 to 2.5 GDP points, or even more, from supply effects alone.
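To fix ideas, here is what a constant price elasticity of -0.2 implies mechanically for demand, ignoring income and general-equilibrium effects; the six-fold case corresponds to the gas price assumption used above.

```python
# Mechanical demand response implied by a constant price elasticity of -0.2
# (the value taken from Labandeira et al., 2017), applied log-linearly.
# This ignores income and general-equilibrium effects.
elasticity = -0.2

for price_ratio in (1.5, 2.0, 6.0):              # 6.0 ~ the assumed gas price vs its 2019 level
    quantity_ratio = price_ratio ** elasticity   # Q1/Q0 = (P1/P0) ** epsilon
    print(f"price multiplied by {price_ratio}: quantity falls by {1 - quantity_ratio:.0%}")
# A six-fold price increase cuts demand by roughly 30% under this (strong)
# constant-elasticity assumption.
```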

For France, a concrete example of substitution would be a reduction in heating: a 1°C reduction in heating leads to a 7% reduction in gas consumption, i.e. a reduction of gas consumption by 4.2 billion m3, whereas 14.7 billion m3 of Russian gas is consumed.

The following table summarises estimates of how much price increases will raise costs, using various assumptions.

The table shows the uncertainty of the estimate depending on the duration of the price rise and the assumption of partial short-term substitution. The figure of 64 billion euros is close to three GDP points, which would be a significant shock to the French economy. A duration of six months with substitution behaviour would lead to a shock of one GDP point. Here we see the critical importance of political uncertainty.

2) Macroeconomic effect of rising energy costs

The primary effects of higher energy prices would be a reduction in household purchasing power, an increase in business production costs and higher costs to the state due to regulating prices. The impact on growth would proceed through complex mechanisms. As mentioned above, it occurs through substitution effects but also through the diffusion of energy prices to production prices and wages.

The OFCE has estimated the macroeconomic impact of a rise in energy prices in three different ways. First, by using two macroeconomic models: the emod.fr model, also used in forecasting, and the ThreeME model, which breaks down energy consumption by sector (Antonin, Ducoudré, Péleraux, Rifflart, Saussay, 2015). Another strategy has been to use econometric approaches allowing for possible non-linearities (Heyer and Hubert, 2016 and Heyer and Hubert, 2020). Note that the latter work includes substitution possibilities measured by the elasticities mentioned above.

The results are as follows. In the model-based approach, a long-term oil price increase of 10 dollars leads to 0.1% to 0.15% less GDP growth and 0.6% inflation in the first year. With the econometric approach, a 10 dollar oil price increase reduces growth by 0.2% and leads to a 0.4% increase in inflation, with a relatively linear effect and a maximum impact after four quarters.

Because of the size of the shock, it is difficult to know whether to consider the high ranges because of the non-linearities or the low ranges because of a greater substitution effort and a fall in the savings rate. Furthermore, the estimate is made for oil and not for gas. For this reason, we will consider average effects, without seeking to maximise the fall in GDP. Thus, an increase of 40 dollars (compared to the situation in 2019), which is increased proportionally to take account of increases in the price of gas as well, leads to a fall in GDP of about 2.5 GDP points in the upper range and an increase in inflation of 3% to 4%. This amount corresponds to a multiplier for the negative shock on energy expenditure of -1. With unchanged business behaviour and unchanged public policy, this fall in GDP translates into a drop of the same order in market employment, so about 600,000 jobs (change compared with a non-war environment). In the low range (short duration and substitution), we obtain a fall in GDP five times smaller at 0.5 GDP points.
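For transparency, the sketch below simply scales the per-10-dollar impacts quoted above to a 40-dollar shock. It is linear and oil-only: the 2.5-GDP-point upper range and the 3% to 4% inflation figure in the text additionally fold in the rise in gas prices and possible non-linearities.

```python
# Linear, oil-only scaling of the per-USD-10 impacts quoted in the text to a
# USD 40 oil price increase. Purely illustrative: the text's headline figures
# also incorporate the gas price increase and possible non-linear effects.
shock_steps = 40 / 10                                  # the shock in USD-10 increments

per_10_dollar_impacts = {                              # (GDP growth, first-year inflation)
    "macroeconomic models (emod.fr / ThreeME)": (-0.125, 0.6),   # midpoint of -0.10/-0.15
    "econometric estimates":                    (-0.20,  0.4),
}
for source, (d_gdp, d_infl) in per_10_dollar_impacts.items():
    print(f"{source}: GDP {shock_steps * d_gdp:+.1f} pts, "
          f"inflation {shock_steps * d_infl:+.1f} pts")
```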

At this stage, this estimate does not take into account the effect of the conflict on other commodities, cereals or precious metals, which are of secondary importance compared to energy prices and are discussed by COFACE.

3) Uncertainty channel

Modelling the effect of the war in Ukraine depends heavily on the reaction of households and businesses to the uncertainty generated by the war. In an environment like this, the savings rate is expected to rise in the medium term (after purchases of basic necessities), which would aggravate the depth of a recession. However, after the Covid-19 crisis, households in France have an excess of savings of 12% of annual income (166 billion euros, OFCE Policy Brief no. 95), which they could dip into to pay the additional energy bill without changing their consumption habits. This attitude depends crucially on the perceived duration of the shock. A shock that is expected to last very long may lead to an additional increase in savings.

Companies’ wait-and-see attitude (before knowing which way markets are going) is leading to a downturn in investment. That said, during the pandemic, also a period of high uncertainty, business investment held up well, partly due to public support (OFCE Policy Brief no. 95).

The third effect of the uncertainty channel is an increase in precautionary savings and a search for secure savings. As a result, savings are more likely to be directed towards safe assets, including public debt, and the real interest rate on France’s public debt may fall. After the outbreak of the conflict, rates did indeed fall in Germany (0.20 points), the United States (0.15), France (0.20), Italy (0.35) and Spain (0.2). In the longer term, how rates change will depend on how the policy of the European Central Bank (ECB) is perceived, which is discussed below. The search for safe assets will also cause the stock markets to fall and lead to negative effects on financial wealth, which won’t modify consumption in France much.

4) Redistributive effects

Higher energy prices will affect households differently and will disproportionately hit the poorest households with the lowest savings rates (Malliet, 2020).

There is considerable heterogeneity in the structure of spending on energy products. According to data from the 2017 Budget des familles survey conducted by INSEE, 10% of the consumption expenditure of the households in the poorest decile goes on electricity, gas and other fuel for the home and on fuel for transport. At the other end of the scale of living standards, households in the richest decile spend less than 7% on these items. On the other hand, Malliet (2020) shows that there is still considerable heterogeneity in the structure of consumption of these products even within a given decile. There is a significant proportion of the population that is highly exposed to certain energy prices, which requires that targeted measures be adopted that take into account this extraordinary exposure to certain goods for which – unless the household makes a major investment – there are few readily available substitutes.

The anti-redistributive aspect of a rise in energy prices therefore leads to a marked drop in the consumption of households with the lowest savings rate. This effect, in addition to the uncertainty channel, leads to a drop in aggregate demand and activity. Compensating the loss of purchasing power induced by a 30% rise in the price of oil and gas would thus come to 20 billion euros in the high range.

5) Destabilising financial effects

In addition to the average effect on interest rates, the sanctions that exclude certain Russian banks from the Swift system are leading these banks to default on payments. Freezing the Russian central bank’s assets will generate difficulties that will probably lead to an explicit default on Russia’s public debt (a first since 1998) if the conflict continues for a few more weeks. According to the rating agencies, the risk of a sovereign default is imminent. A decree already allows for the repayment of the public debt to certain countries in roubles. The risk of a default on Russia’s debt is approaching one (as measured by CDS prices), and evaluations of the impact of sanctions on Russia point to a fall in its GDP of between 7.5% and 10% in 2022 (Coface). The risk on Turkish and South African debt is also mounting.

The exposure of French and European banks and investment funds to Russian risk (public and private) is difficult to estimate because of possible contagion effects. The amount of external public debt is, however, low, estimated at USD 60 billion. The ECB can be trusted to intervene in the event of heightened financial instability, but a tightening of credit remains a real risk.

The following graph shows the exposure to Russian risk by country, measured by residents’ consolidated position in Russian assets (Bank for International Settlements data).

We see that France’s exposure is high, at 22%, as is Italy’s. However, this exposure doesn’t include the possible contagion effects of financial crises.

II – Fiscal policy response

How the economy fares after such a shock will depend on the fiscal and monetary response.

6) Reception of refugees

First of all, while the primary purpose of taking in refugees obviously is not economic, this will generate expenditures that will probably be financed by debt and so will have an effect on activity. The experience of the last refugee crisis in 2016 leads to a first estimate. As Jean Pisani-Ferry notes, according to UNHCR analyses, Germany’s intake of 750,000 refugees in 2016 called for a budgetary effort of 9 billion euros, i.e. about 10 billion euros per million refugees. For an estimated 4 million refugees (given that currently the number is about 2.5 million), this leads to a temporary cost of 40 billion for Europe, which, on the scale of Europe, is not all that much but which for the countries hosting the most refugees, such as Poland, is huge.

The central question, however, is how to organise support for these millions of refugees. Gregory Verdugo discussed the challenges facing the European asylum system and the integration of refugees as far back as 2019. Note that the long-term impact of migration is positive, even if today’s refugees are mainly women and children. Of course these economic considerations are not central to how to support the refugees.

7) Support for the most vulnerable households

As noted, the rise in energy and food prices is strongly anti-redistributive and disproportionately affects the poorest households. For this reason, to offset the rise in inflation at the end of 2021, the French state has introduced an inflation allowance and exceptional support in the form of a €100 energy voucher, for a total estimated cost of €4.4 billion (€3.8 billion and €0.6 billion). The government has announced that it will spend €24 billion, or about 1 GDP point, to offset the rise in energy prices. This is the order of magnitude of the increase in the oil bill, without taking into account the increase in the price of gas. The OFCE Policy Brief on purchasing power, published on 17 March, deals with these issues.
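As a rough order-of-magnitude check of the “1 GDP point” figure (a back-of-the-envelope illustration in Python, assuming French GDP of roughly €2,500 billion, a value not stated in the text):

gdp_france = 2_500e9   # assumed French GDP in euros (rough 2021 order of magnitude)
support = 24e9         # announced support against higher energy prices, in euros
print(f"{support / gdp_france:.1%}")   # roughly 1.0%, i.e. about 1 GDP point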

This price increase will make the country poorer (a negative supply shock) due to domestic dependence on energy imports. Responding to the shock with a wage increase is not a good solution, as it leads to higher prices and induced inflation, since companies would in turn face higher production costs. Support for vulnerable households should therefore be fiscal and not wage-based. The low interest rates on France’s public debt open up some fiscal space that should be used temporarily.

8) Energy investment

Reducing dependence on Russian oil and gas (which will be compulsory if there is an embargo) will lead to additional investments. The recent IEA report on ending this dependence calls for “sobriety” measures but also for new investments, which are difficult to quantify for France at this time.

9) Military expenditure

Another consequence of the war in Ukraine will be higher military spending. This will lead to medium-term investments, the economic effect of which will depend on how it is financed (by debt or taxes). Germany has announced a package of 100 billion euros to be used in the short term. France, on the other hand, already has a higher level of military spending and at present is sticking with a policy of increasing military spending by 3 billion euros per year.

10) Europe and European fiscal rules

The war in Ukraine will most likely lead to the suspension of European fiscal rules for another year, until 2024. The establishment of a common European debt is under discussion, but the outcome remains uncertain.

III – European Central Bank and monetary policy

11) The ECB is in a difficult situation, as it faces rising energy prices, falling activity and high levels of public debt

One point needs to be clarified: the rise in energy prices will certainly push up the price index and therefore average prices, but this primarily involves domestic impoverishment. In other words, the ECB cannot fight this energy cost-driven price increase (which will also push European entities to find ways to reduce their energy dependence). This price increase will lead to inflation if wages and other prices start to rise continuously after this initial impulse. In other words, it is against possible second-round effects, not first-round effects, that the ECB needs to fight. In contrast to the shock of the 1970s, it is unlikely that the rise in energy prices will lead to an inflationary spiral, due to the de-indexation of wages. However, the way in which the SMIC, the French minimum wage, is indexed should push it higher. A fiscal effort on behalf of minimum-wage earners to compensate for higher energy costs would, however, make the SMIC increase induced by higher energy prices less necessary.

However, the current difficulty concerns the existence of some second-round effects upon exiting the Covid-19 crisis (quite apart from the impact of the war in Ukraine), as core inflation was already at 2.7% in February, above the 2% target. It is therefore important that the absorption of the energy price shock does not lead to self-sustaining price increases.

Second, the ECB will have to deal with a new wave of financial instability, with possible contagion in the financial system and rising interest rates in some countries.

Finally, the most likely outcome is that the ECB will take steps to support public policy. The point is not so much to stimulate demand, which would be inappropriate in this kind of environment, but rather to avoid interest rate hikes in some countries, as suggested by a reading of the ECB’s statements at its 10 March press conference. Indeed, the statement of Thursday 10 March and the reduction in the volume of securities purchases go hand in hand with a vigorous affirmation of the fight against the fragmentation of the euro zone, and therefore against the rise in interest rate spreads, which could destabilise highly indebted countries such as Italy. Our reading therefore is of an ECB policy of risk reduction without support for demand, which seems justified during the military conflict.

Conclusion

The war in Ukraine is a massive income shock that, without a public response, would lead to a fall in GDP of 2.5% and a rise in inflation of 3% to 4% in the high range of a lasting rise in prices, assuming no behavioural changes and not taking financial instability into account. In the low range of a short conflict, these effects are reduced by three-quarters, to a fall of less than 1 GDP point.

  • Rising energy prices lead to anti-redistributive effects, which should lead in turn to budgetary efforts on behalf of poorer people.
  • As a result, government support of at least 1 GDP point is likely, limiting the fall in GDP but pushing inflation into the high range.
  • Financial instability is possible, which would substantially increase these effects, without, of course, taking into account any extension of the war into Europe beyond Ukraine, which would completely change the method of estimation.



The essential, the useless and the harmful (part 3)

By Éloi Laurent

Is humanity a pest?
For the other beings of Nature who find it increasingly difficult to coexist
with humans on the planet, the answer is unambiguous: without a doubt.



Life on earth, 3.5
billion years old, can be estimated in different ways. One way is to assess the respective biomass of its components. It can then be seen that the total biomass on
Earth weighs around 550 Gt C (giga tonnes of carbon), of which 450 Gt C (or
80%) are plants, 70 Gt C (or 15%) are bacteria and only 0.3% are animals.
Within this last category, humans represent only 0.06 Gt C. And yet, the 7.6
billion people accounting for only 0.01% of life on the globe are on their own responsible
for the disappearance of more than 80% of all wild mammals and half of all plants.

This colossal biodiversity crisis caused by humanity, whose beginnings date back to the extermination of megafauna in the prehistoric (Pleistocene) age, started with the entry into the regime of industrial growth in the 1950s, with the onset of the “great acceleration”.

This is now well
documented: while nearly 2.5 million species (1.9 million animals and 400,000
plants) have been identified and named, convergent studies suggest that their
rate of extinction is currently 100 to 1,000 times faster than the rates observed on Earth during the last 500 million years. This could mean that, due to human
expansion, biodiversity is on the brink of a sixth mass extinction. Whether we
observe these dynamics in cross-section or longitudinally, at the level of certain key species in certain regions, or by turning to more or less convincing
hypotheses on the total
potential biodiversity sheltered by the Biosphere
(which could amount to 8 million species), the conclusion
is obvious: while humans are thriving, the other species are withering away,
with the exception of those that are directly useful to people.

But this destruction
of biodiversity is of course also an existential problem for humans themselves.
According to a causal chain formalized two decades ago by the Millennium Ecosystem Assessment, biodiversity underpins the proper functioning of
ecosystems, which provide humans with “ecosystem services” that support their
well-being (recent literature evokes in a broader and less instrumental way
“the contributions of Nature“). This logic naturally also holds in
reverse: when humans destroy biodiversity, as they are massively doing today
through their agricultural systems,
they degrade ecosystem services and, at the end of the chain, undermine their own
living conditions. The case of mangroves is one of the most telling: these
maritime ecosystems promote animal reproduction, store carbon and constitute
powerful natural barriers against tidal waves. By destroying them, human
communities are becoming poorer and weaker.

The start of the 2020
decade, the first three months of which were marked by huge fires in Australia
and the Covid-19 pandemic, is clearly showing that destroying Nature is beyond
our means. The most intuitive definition of the unsustainability of current
economic systems can therefore be summed up in just a few words: human
well-being destroys human well-being.

How do we get out of
this vicious spiral as quickly as possible? One common sense solution, known
since Malthus and constantly updated since then, is to do away with humanity, in
whole or in part. Some commentators are taking note of how much the Biosphere,
freed from the burden of humans, is doing better since they have been mostly
confined. If we turn off the source of human greenhouse gas emissions, it is of
course likely that they will fall sharply. Likewise, if the sources of local
pollution in urban spaces, for example in Paris, are turned off, the air there will be restored to a remarkable quality. It is also likely that we will see an improvement
in the lot of animal and plant species during this period, much as in areas like
the Chernobyl region that humans were forced to abandon. But what good is clean air when we are deprived
of the right to breathe it for more than a few moments a day?

In reality, even if
confinement has led to a constrained and temporary sobriety, its long-term
impact is working fully against the ecological transition. All the mechanisms
of social cooperation that are essential to transition policies are now at a
standstill, except for market transactions. To take simply the example of
climate policy, the very strategic COP 26 gathering has already been postponed
to 2021, the next IPCC Assessment Report has been slowed down, the full, comprehensive outcome of the efforts of
the Citizens’ Climate Convention has been compromised, and so on. And a heat wave under lockdown cannot be excluded!

The point is that it
is not a matter of neutralizing or even freezing social systems to
“save” natural systems, but of working over the long-term on their social-ecological articulation, which is still a blind spot in contemporary
economic analysis.

The fact remains that
the current social emergency is forcing governments around the world to work
here and now to protect their populations, particularly the most vulnerable,
from the colossal shock that is simultaneously hitting economic systems around
the world. The notion of essential well-being can rightly serve as a compass guiding
these efforts, which could focus on sectors vital to the whole population in
the months and years to come, subject to the imperative of not further
accelerating the ecological crisis. Essential well-being and non-harmful
well-being could converge to meet the present urgency and the needs of the
future. How, precisely?

Let us briefly return
to the different dimensions of essential well-being outlined in the first post
in this series. Public health and the care sector are clearly at the centre of
essential well-being, understood as human well-being which works for its
perpetuation rather than for its loss. The medical journal The Lancet
has highlighted in recent years the increasingly tangible links between health and
climate, health and various pollutants, health and biodiversity, and health and
ecosystems. Care for ecosystems and care for humanity are two sides of the same
coin. But the issue of environmental health must be fully integrated, including
here in France, with the new priority on health. Investing in public services
beyond the health system is also a guarantee that essential well-being is shared
most equitably.

This temporal coherence
is complicated by the necessary reinvestment in essential infrastructure. Food
supply systems in France and beyond, from agricultural production to retail
distribution, are today far too polluting and destructive to both human health
and ecosystems. Food systems already engaged in the ecological transition
should be given priority in order to promote their generalization. Likewise,
the energy required for infrastructure, particularly urban infrastructure
(water, electricity, waste, mobility, etc.) is still largely fossil-fuelled,
even though in just five years a global metropolis like Copenhagen has given
itself the means to obtain supplies from 100% renewable energy. We must
therefore accelerate the move for energy and carbon sobriety – we have all the means needed.
Finally, the issue of the growing ecological footprint of digital networks can
no longer be avoided, when essential infrastructures, such as heating networks and
waste collection, work very well in a “low-tech” mode.

The notion of
essential well-being can therefore be useful for the “end of the
crisis”, provided that we remain faithful to the motto of those to whom we
owe so much: first, do no harm.




The essential, the useless and the harmful (part 2)

By Eloi Laurent

How do we know what
we can do without while continuing to live well? To clarify this sensitive
issue, economic analysis offers a central criterion, that of the useful, which
itself refers to two related notions: use and utility.



First of all, and
faithfully to the etymology, what is useful is what actually serves people to
meet their needs. From the human point of view, then, something is useless that
doesn’t serve to meet people’s needs. Amazon announced on March 17 that its warehouses would now store only “essential
goods” until April 5, and defined these as follows in the context of the
Covid-19 crisis: “household staples, medical supplies and other high-demand
products”. The ambiguity of the criterion for the useful is tangible in this
definition, which conflates something of primary necessity and something that
emerges from the interplay of supply and demand. While giving the appearance of
civic behaviour, Amazon is also resolutely in line with a commercial
perspective.

Furthermore, this
first criterion of the useful leads into the oceanic variety of human
preferences that punctuate market movements. As Aristotle recalls in the first
chapter of the Nicomachean Ethics,
the founding text of the economics of happiness written almost two and a half
millennia ago, we find among individuals and groups a multiplicity of
conceptions of what constitutes a good life. But contrary to the thoughts of Aristotle,
who erected his own concept of happiness as well-being that is superior to
others, it is not legitimate to prioritize the different conceptions of a happy
life. Rather, a political regime based on liberty is about ensuring the
possibility that the greatest number of “pursuits of happiness” are conceivable
and attainable so long as none of them harms others.

But the Aristotelian
conception of happiness, which emphasizes study and the culture of books, is no
less worthy than any other. Are bookstores, as professionals in the sector
argued at the start of the lockdown in France, essential businesses in the same way as those selling earthly nourishment? For some, yes. Can they be considered useless at a time when
human existence is forced to retreat to its vital functions? Obviously not.

Hence the importance
of the second criterion, that of utility, which not only measures the use of
different goods and services but the satisfaction that individuals derive from
them. But this criterion turns out to be even more problematic than that of use
from the point of view of public policy.

Classical analysis,
as founded for example by John Stuart Mill following on from Jeremy Bentham,
supposes a social welfare function, aggregating all individual utilities, which
it is up to the public authorities to maximize in the name of collective
efficiency, understood here as the optimization of the sum of all utilities. Being
socially useful means maximizing the common well-being thus defined. But, as we
know, from the beginning of the 20th century, neoclassical analysis called into
question the validity of comparisons of interpersonal utility, favouring the
ordinal over the cardinal and rendering the measure of collective utility
largely ineffective, since, in the words of Lionel Robbins (1938), “every mind is inscrutable to every other mind and no common denominator of feelings is possible”.

This difficulty with
comparison, which necessitates the recourse to ethical judgment criteria to
aggregate preferences, in particular greatly weakens the use of the statistical
value of a human life (“value of statistical life”, or VSL) in efforts to base
collective choices on a cost-benefit monetary analysis, for example in the area
of environmental policy. Do we imagine that we could decently assess the “human
cost” of the Covid-19 crisis for the different countries affected by crossing the VSL values calculated, for example by the OECD,
with the mortality data compiled by Johns Hopkins University? The economic analysis of environmental issues
cannot in reality be limited to the criterion of efficiency, which is itself
based on that of utility, and must be able to be informed by considerations of justice.

Another substantial
problem with the utilitarian approach is its treatment of natural resources,
resources that have never been as greatly consumed by economic systems as they are today – far from the promise of the
dematerialization of the digital transition underway for at least the last
three decades.

The economic analysis
of natural resources provides of course various criteria that allow us to
understand the plurality of values of natural resources. But when it comes to decision-making, it is the instrumental value of these resources that prevails, because it is both more immediate in terms of human satisfaction and easier to calculate.
This myopia leads to monumental errors in economic choices.

This is particularly
the case for the trade in live animals in China, which was at the root of the
Covid-19 health crisis. The economic utility of the bat or the pangolin can
certainly be assessed through the prism of food consumption alone. But it turns
out both that bats serve as reservoirs of coronaviruses and that pangolins can
act as intermediary hosts between bats and humans. So the disutility of the
consumption of these animals (measured by the economic consequences of global
or regional pandemics caused by coronaviruses) is infinitely greater than the
utility provided by their ingestion. It is ironic that the bat is precisely the animal chosen by Thomas Nagel in a classic 1974 article aimed at tracing the human-animal boundary, which asked what it is like, from the bat’s own point of view, to be a bat.

Finally, there
appears, halfway between the useless and the harmful, a criterion other than
the useful: that of “artificial” human needs, recently highlighted by
the sociologist Razmig Keucheyan.
Artificial is understood here in the dual sense that these needs are created
from scratch (especially by the digital industry) rather than arising spontaneously,
and that they lead to the destruction of the natural world. They contrast with collectively
defined “authentic” needs, with a concern for preserving the human
habitat.

At the end of this
brief exploration, while it may seem rather difficult to settle the question
of useful (and useless) well-being, it nevertheless seems… essential to
better understand the issue of harmful well-being. This will be the subject of
the last post in this series.




The essential, the useless and the harmful (part 1)

Éloi Laurent

The Covid-19 crisis
is still in its infancy, but it seems difficult to imagine that it will lead to
a “return to normal” economically. In fact, confinement-fuelled reflections
are already multiplying about the new world that could emerge from the
unprecedented conjunction of a global pandemic, the lockdown of half of humanity, and the brutal drying up of global flows and of economic activity.
Among these reflections, many of which were initiated well before this crisis,
the need to define what is really essential to human well-being stands out:
what do we really need? What can we actually do without?



Let us first reason
by the absurd, as Saint-Simon invited us to do back in 1819. “Suppose that
France suddenly loses … the essential French producers, those who are
responsible for the most important products, those who direct the works most useful
to the nation and who render the sciences, the fine arts and the crafts
fruitful, they are really the flower of French society, they are of all the
French the most useful to their country, those who procure the most glory, who add
most to its civilization and its prosperity: the nation would become a lifeless
corpse as it lost them… It would require at least a generation for France to
repair this misfortune…”. It is in the mode of the parable that Saint-Simon
thus tried to explain the hierarchical reversal that the new world of the
industrial revolution implied for the country’s prosperity, which could
henceforth do without the monarchical classes, in his view, whereas
“Science and the arts and crafts” had become essential.

Adapting Saint-Simon’s
parable to the current situation amounts to recognizing that we cannot do
without those who provide the care, guarantee the food supply, maintain the
rule of law and the supply of public services in times of crisis, and operate
the infrastructure (water, electricity, digital networks). This implies that in
normal times all these professions must be valued in line with their vital
importance. The resulting definition of human well-being resembles the
dashboard formed by putting together the different boxes in the pandemic travel certificates that every French person must fill out in order to
be able to move out of their confinement.

But it is possible to
flesh out this basic reflection by using the numerous studies carried out over
the decades on the measurement of human well-being, work which has greatly accelerated in the last
ten years in the wake of the “great recession”. We can start by
considering what is essential in the eyes of those questioned about the sources
of their well-being. Two priorities have emerged: health and social connections. In this respect, the current situation offers a
striking “well-being paradox”: drastic measures of confinement are sometimes
being taken to preserve health, but they in turn lead to the deterioration of
social connections due to the imposed isolation.

But how better to
begin to positively identify the different factors in “essential
well-being” that should now be the focus of public policy? Measuring
poverty can help here in measuring wealth. The pioneering empirical work of
Amartya Sen and Mahbub ul Haq in the late 1980s resulted in a definition of
human development that the Human Development Index, first published by the United Nations in 1990, reflects only in part: “Human development is a
process of enlarging people’s choices. The most critical of these wide-ranging
choices are to live a long and healthy life, to be educated and to have access
to resources needed for a decent standard of living. Additional choices include
political freedom, guaranteed human rights and personal self-respect.”
More specifically, in the French case, the work undertaken in 2015 by the
National Observatory of Poverty and Social Exclusion (Onpes) on reference budgets, and extended in
particular by INSEE with its “indicator of poverty in living conditions”, has led to defining the essential
components of an “acceptable” life (we could also speak of “decency”).

But let’s suppose
that these measurement instruments contribute, upon recovery from the crisis,
to defining an essential well-being (which key workers would maintain in the crisis
situations that are sure to be repeated under the impact of ecological shocks);
expertise alone would not be enough to trace its contours. A citizens’
convention needs to take up the matter.

This is all the more
so as the definition of essential well-being naturally evokes two other
categories that are even more difficult to define, to which this blog will
return in the coming days: useless (or artificial) well-being, that which can
be dispensed with harmlessly; and harmful well-being, which we must do without
in the future because in addition to being ancillary it harms essential well-being,
in particular because it undermines the foundations for well-being by leading
to the worsening of ecosystems (this is the debate taking place in Europe on whether
it is necessary to save the airlines). The debate over essential well-being has
just begun…




The transmission of monetary policy: The constraints on real estate loans are significant!

By Fergus Cumming (Bank of England) and Paul Hubert (Sciences Po – OFCE, France)

Does the transmission
of monetary policy depend on the state of consumers’ debt? In this post, we
show that changes in interest rates have a greater impact when a large share of
households face financial constraints, i.e. when households are close to their
borrowing limits. We also find that the overall impact of monetary policy
depends in part on the dynamics of real estate prices and may not be
symmetrical for increases and decreases in interest rates.



From the micro to the macro

In a recent article, we use home loan
data from the United Kingdom to build a detailed measure of the proportion of
households that are close to their borrowing limits based on the ratio of mortgage
levels to incomes. This mortgage data allows us to obtain a clear picture of the
various factors that motivated people’s decisions about real estate loans
between 2005 and 2017. After eliminating effects due to regulation, bank
behaviour, geography and other macroeconomic developments, we estimate the
relative share of highly indebted households to build a measure that can be
compared over time. To do this, we combine the information gathered for 11
million mortgages into a single time series, thus allowing us to explore the
issue of the transmission of monetary policy.
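A minimal sketch of this kind of aggregation, assuming a toy loan-level dataset and a simple loan-to-income cut-off; the column names, the 4.5 threshold and the figures are illustrative, and the published measure additionally strips out the effects of regulation, bank behaviour, geography and macro developments:

import pandas as pd

# Toy mortgage-level data (illustrative, not the UK loan-level dataset)
loans = pd.DataFrame({
    "date":   pd.to_datetime(["2005-03-10", "2005-06-21", "2006-01-05", "2006-02-17"]),
    "loan":   [150_000, 210_000, 260_000, 120_000],
    "income": [50_000, 45_000, 55_000, 60_000],
})

loans["lti"] = loans["loan"] / loans["income"]      # loan-to-income ratio
loans["constrained"] = loans["lti"] >= 4.5          # hypothetical cut-off for "close to the borrowing limit"
loans["quarter"] = loans["date"].dt.to_period("Q")  # origination quarter

# Share of new borrowers who are close to their limit, as a quarterly time series
share_constrained = loans.groupby("quarter")["constrained"].mean()
print(share_constrained)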

We use the time
variation in this debt variable to explore whether and how the effects of
monetary policy depend on the share of people who are financially constrained. We
focus on the response of consumption in particular. Intuitively, we know that a
restrictive monetary policy leads to a decline in consumption in the short to
medium term, which is why central banks raise interest rates when the economy
is overheating. The point is to understand whether this result changes
according to the share of households that are financially constrained.

Monetary policy contingent on credit constraints

We find that monetary
policy is more effective when a large portion of households have taken on high levels
of debt. In the graph below, we show how the consumption of non-durable goods, durable
goods and total goods responds to raising the key interest rate by one
percentage point. The grey bands (or blue, respectively) represent the response
of consumption when there is a large (small) proportion of people close to
their borrowing limits. The differences between the blue and grey bands suggest
that monetary policy has greater strength when the share of heavily indebted households
is high.

It is likely that there are at least two mechanisms behind this differentiated effect: first, in an economy where the rates are partly variable[1], when the amount borrowed by households increases relative to their income, the mechanical effect of monetary policy on disposable income is amplified. People with large loans are penalized by the increase in their monthly loan payments in the event of a rate hike, which reduces their purchasing power and thus their consumption! As a result, the greater the share of heavily indebted agents, the greater the aggregate impact on consumption. Second, households close to their borrowing limits are likely to spend a greater proportion of their income (they have a higher marginal propensity to consume). Put another way, the greater the portion of your income you have to spend on paying down your debt, the more your consumption depends on your income. The change in income related to monetary policy will then have a greater impact on your consumption. Interestingly, we find that our results are due more to the distribution of highly indebted households than to an overall increase in borrowing.
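To make the first mechanism concrete, here is a purely illustrative calculation rather than the model estimated in the article: a borrower on a variable-rate repayment mortgage faces a one-percentage-point rate rise, and the hit to consumption is scaled by an assumed marginal propensity to consume (loan size, term, rates and MPC are all hypothetical):

def monthly_payment(principal, annual_rate, years):
    # Standard annuity payment for a repayment mortgage
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

principal, years = 200_000, 25                 # hypothetical outstanding loan and remaining term
before = monthly_payment(principal, 0.02, years)
after = monthly_payment(principal, 0.03, years)
extra = after - before                         # extra debt service per month after a 1 pp rate rise

mpc = 0.8                                      # assumed marginal propensity to consume of a constrained household
print(f"payment rises by {extra:.0f} euros a month; consumption falls by roughly {mpc * extra:.0f} euros a month")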

Our results also
indicate some asymmetry in the transmission of monetary policy. When the share
of constrained households is large, interest rate increases have a greater
impact (in absolute terms) than interest rate cuts. This is not completely surprising.
When your income comes very close to your spending, running out of money is very
different from receiving a small additional windfall.

Our results also
suggest that changes in real estate prices have significant effects. When house
prices rise, homeowners feel richer and are able to refinance their loans more
easily in order to free up funds for other spending. This may offset some of
the amortization effects of an interest rate rise. On the other hand, when
house prices fall, an interest rate hike exacerbates the contractionary impact on
the economy, rendering monetary policy very powerful.

Implications for economic policy

We show that the state
of consumers’ debt may account for some of the change in the effectiveness of
monetary policy during the economic cycle. However, it should be kept in mind
that macro-prudential policy makers can influence the distribution of debt in
the economy. Our results thus suggest that there is a strong interaction
between monetary policy and macro-prudential policy.


[1]
Which is the case in the United Kingdom.




Are our inequality indicators biased?

By Guillaume Allègre

The issue of
inequality is once again at the heart of economists’ concerns. Trends in
inequality and its causes and consequences are being amply discussed and debated.
Strangely, there seems to be a relative consensus about how to measure it [1]. Economists working on inequality use in
turn the Gini index of disposable income, the share of income held by the
richest 10%, the inter-decile ratio, and so on. All these measures are relative
in character: If the income of the population as a whole is multiplied by 10,
the indicator doesn’t change. What counts is the income ratio between the
better off and the less well off. But could inequality
and the way it changes be measured differently?



France’s inequality
monitoring body
is currently discussing not only trends in the
income ratio between the more and less well-off, but also changes in the income
gap: “In one year, the richest 10% receive on average about 57,000 euros, and
the poorest 10% 8,400 euros: a difference of 48,800 euros, equivalent to just
over 3.5 years of work paid at the minimum wage (Smic). This gap rose from 38,000 euros in 1996 to 53,000 euros in
2011, then fell to 48,800 euros in 2017.” Measuring changes in the income
gap does not seem relevant. Let’s take two people with incomes of 500 and 1,000
euros, then multiply their incomes by 10: the income ratio is stable, but the
income gap is multiplied by 10. Has inequality increased, is it stable or has
it decreased? Using the income gap as a measure, it has increased, but it is
stable according to the ratio. We believe it may have actually
decreased.
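The point can be restated in a couple of lines of Python, using the incomes from the example above:

low, high = 500, 1_000
for scale in (1, 10):                 # before and after multiplying both incomes by 10
    a, b = low * scale, high * scale
    print(f"ratio = {b / a:.1f}, gap = {b - a} euros")
# the ratio stays at 2.0 while the gap goes from 500 to 5,000 euros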

Indeed, in France
today, the differences in living conditions, lifestyles and well-being are perhaps
greater between someone with an income of 500 euros, which leaves them in dire poverty,
and someone with an income of 1,000 euros, which puts them at the poverty line,
than between a person with an income of 5,000 euros, who can be described as
well-off, and a person earning 10,000 euros, who can be described as very
well-off. These last two people share similar lifestyles, even if the latter probably
lives in a slightly larger and better-situated home, and frequents more
luxurious restaurants. In other words, subtracting 10% of income from a very
wealthy person probably has less impact than subtracting 10% from someone at the
poverty line. There is abundant literature on risk aversion showing that people
are willing to pay more than 10% of their income when it is high to protect
against a 10% drop in income when it is low. This is, moreover, one of the justifications for a progressive
tax: a greater percentage is taken from the better off, but the sacrifice is
supposed to be equal because, according to marginalist theory, contributive
capacity grows faster than income (or utility increases less than
proportionately compared to income).
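One way to see this is with a standard concave (CRRA) utility function, a textbook illustration rather than anything measured here: with logarithmic utility the loss from a 10% cut is the same at every income, while with stronger curvature it is larger for the poorer person (the incomes and the risk-aversion parameters below are arbitrary):

import math

def crra(c, gamma):
    # CRRA utility: logarithmic for gamma = 1, more concave for gamma > 1
    return math.log(c) if gamma == 1 else c ** (1 - gamma) / (1 - gamma)

for gamma in (1, 2):                       # illustrative degrees of risk aversion
    for income in (1_000, 10_000):         # near the poverty line vs well-off, euros per month
        loss = crra(income, gamma) - crra(0.9 * income, gamma)
        print(f"gamma={gamma}, income={income}: utility loss from a 10% cut = {loss:.6f}")
# with gamma = 1 the two losses are identical; with gamma = 2 the loss is ten times
# larger at 1,000 euros than at 10,000 euros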

If this argument
is accepted, we could conclude that at a constant level of relative inequality
(Gini index, income ratio between the richest and poorest), all other things being equal, a richer
society would in practice be more egalitarian, in the sense that its citizens share
a more comparable way of life or well-being. Intuition tells us that this is
true for large gaps in wealth (such as the 10-fold increase in earnings in the example
above). If this is true, then comparisons of relative inequality made over very
long periods of time or between developed and developing countries need to be kept
in perspective. When Thomas Piketty
shows that the richest 10% captured 50% of income between 1780 and 1910, we
could then conclude that inequality has decreased over that period!

Milanovic, as well as Milanovic, Lindert and Williamson, have developed concepts that take into account this wealth effect over a very
long-term historical perspective: the “inequality frontier” is the maximum
inequality possible in a society taking into account the fact that the society
must guarantee the livelihoods of its poorest members (the minimum income to
live): in an economy with very little surplus (where the average discretionary income
is low), the maximum possible inequality will be low [2]; in a very well-off economy, the maximum possible
Gini coefficient will be close to 100 percent [3]. The “extraction ratio” is the current
Gini divided by the maximum possible Gini. The wealthier a country is, the higher the maximum possible Gini coefficient, and the lower – at equal Ginis – the extraction ratio will be. One could also calculate a “discretionary income
Gini” (in the sense of disposable income minus the minimum subsistence
income) [4].
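A minimal sketch of these concepts, using the numerical examples given in footnotes [2] and [3] below; the “observed” Gini used for the extraction ratio is hypothetical:

def gini(incomes):
    # Gini coefficient via mean absolute differences (fine for small examples)
    n, mu = len(incomes), sum(incomes) / len(incomes)
    return sum(abs(x - y) for x in incomes for y in incomes) / (2 * n * n * mu)

subsistence, n_poor = 10, 99
poor_society = [subsistence] * n_poor + [1_050 - n_poor * subsistence]   # footnote [2]: total income 1,050
rich_society = [subsistence] * n_poor + [2_000 - n_poor * subsistence]   # footnote [3]: total income 2,000

max_gini_poor = gini(poor_society)   # ≈ 0.047: little surplus, low maximum feasible inequality
max_gini_rich = gini(rich_society)   # ≈ 0.495: more surplus, much higher maximum feasible inequality

observed_gini = 0.40                 # hypothetical measured Gini in the richer society
print(round(max_gini_poor, 3), round(max_gini_rich, 3), round(observed_gini / max_gini_rich, 2))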

It can be argued that
when comparing inequality in two societies at different levels of development,
the extraction ratio is a better indicator of inequality than the disposable income Gini [5] or other indicators of relative inequality.
One conclusion reached by Milanovic et al.: “Thus, although inequality in historic
preindustrial societies is equivalent
to that of industrial societies today, ancient inequality was much larger when
expressed in terms of maximum feasible inequality. Compared to the maximum feasible
inequality, current inequality is much lower than that in ancient societies”.
According to the authors, in the early 2000s, the maximum possible Gini was
55.7 in Nigeria and 98.2 in the US: the comparison of inequality between the
two countries will then be very different depending on whether the indicator
chosen is the income Gini or the extraction ratio. On the other hand, there
will be little difference between the United States and Sweden (maximum
achievable Gini of 97.3) despite an average income difference of 45%. The
effect is in fact saturated since the Swedish income is already 40 times the
subsistence minimum (400 dollars per year in purchasing power parity) and the
American, 58 times. In the authors’ approach, the subsistence minimum is set in
purchasing power parity and is fixed between countries and over time. But is
the subsistence minimum really 400 dollars a year in Sweden today? When
comparing inequality in the United States and Sweden today, is this subsistence
minimum relevant? Taking a significantly higher minimum level of subsistence
could change the comparison of inequality, even in developed countries (for a comparable
living standards Gini, is Switzerland really more egalitarian than France?).
The problem then is to establish a minimum subsistence income amount [6].
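For a large population the maximum feasible Gini can be approximated by 1 − s/μ, where s is the subsistence minimum and μ the mean income; a quick sketch reproduces the orders of magnitude quoted above, taking the mean-to-subsistence ratios of 40 and 58 as given:

def max_feasible_gini(mean_to_subsistence):
    # Approximate inequality possibility frontier, 1 - s/mu, for a large population
    return 1 - 1 / mean_to_subsistence

for country, ratio in [("Sweden", 40), ("United States", 58)]:
    print(country, round(100 * max_feasible_gini(ratio), 1))
# roughly 97.5 for Sweden and 98.3 for the United States, close to the 97.3 and 98.2 quoted above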

The choice of an
inequality indicator depends on the objective pursued. If the idea is to
compare inequalities in living conditions across time or between countries, the
discretionary income Gini might be relevant. On the other hand, if there is concern
that excessively high incomes present a danger for democracy (a position
developed in particular by Stiglitz in The Price
of Inequality
), the measure of relative inequality as calculated by
the share of income captured by the wealthiest 1% seems more relevant.

When comparing countries
that are closely related in terms of development, there are other, perhaps more
important, limitations to comparing living standard Ginis. Given the same
income inequality, a country where public spending on health, housing, education,
culture, etc. is higher will (probably) be more egalitarian (unless public
spending goes disproportionally to the better off). The issue of housing is
also important, as it weighs heavily in household budgets: all other things being
equal, high rents due to a constrained housing supply will increase inequality
(tenants are poorer on average today). But it is difficult to take into account
this effect in comparisons or trends, because the price of housing may reflect an
improvement in quality or better amenities. In addition, inequality between
landlords and tenants is not taken into account in the usual calculation of the
standard of living: with equal income, an owner who has finished repaying the
mortgage is better off than a tenant, but the imputed rent that the owner receives
does not enter the calculation of their standard of living. Finally, and
without being exhaustive, the issue of hours of work and household production
also complicates the equation: a difference in income can be linked to a
difference in working hours, especially if one of the spouses in a couple (most
often the woman) is inactive or works part-time. However, the inactive spouse
can engage in household production (including childcare) that is not taken into
account in statistics: the difference in standard of living compared with a dual-earner couple is less than what is implied by the difference in incomes. Statistics do
not usually take this effect into account because it is difficult to assign a
value to household production.

It can be seen that
the measurement of income and the standard of living, and therefore inequality,
is imperfect. The wealth effect (at an equal standard of living Gini, a richer
society is probably more egalitarian, all things being equal) is a limit, among
others, some of which are probably more important when comparing developed
economies. On the other hand, this wealth effect could be relatively significant
if one wants to compare inequalities in living conditions between the France of
1780 and that of 1910 and a fortiori of today.


[1] Whereas it was prominent from the early 1970s to the end of the
1990s: see in particular the work of Atkinson, Bourguignon, Fleurbaey and Sen.

[2] Milanovic et al.
give the following example: consider a society of 100 individuals, 99 of whom are
in the lower class. The subsistence minimum in this society is 10 units and the
total income 1,050 units. The sole member of the upper class receives 60 units.
The Gini coefficient associated with this distribution (the maximum possible Gini)
is only 4.7 percent.

[3] In fact, the
maximum possible Gini rises quickly: if, in the previous society, total income
increases to 2,000 units and the dictator extracts all the surplus (1,010
units), the Gini leaps to 49.5.

[4] The discretionary income Gini, or the extraction ratio, shares some of the characteristics of the Atkinson index, including the idea of differentiating between the wealthiest and the poorest. Nevertheless, the Atkinson index remains a relative indicator of inequality: if all incomes are multiplied by 10, the indicator remains constant. The index satisfies mean independence (it is unchanged when all incomes are scaled), which is generally sought in inequality indicators, but which we seek to go beyond here.

[5] The two indicators
do not measure the same concepts. First, it may be interesting to use several
indicators, but multiplying the number of indicators raises the problem of
readability, so one must choose. The choice of an indicator is based on a
normative judgment since, at least implicitly, the idea is to reduce inequality
according to the measure chosen (there is a consensus among economists that,
all else being equal, less inequality is preferable).

[6] Especially since
this income must be consistent over time or between countries if the objective
is to capture a trend or make a comparison.