He who sows austerity reaps recession

By the Department of Analysis and Forecasting, headed by X. Timbeau

This article summarizes OFCE Note no. 16, which presents the outlook for the global economy in 2012-2013.

The sovereign debt crisis has passed its peak. Greece’s public debt has been restructured and, at the cost of a default, will fall from 160% of GDP to 120%. This restructuring has permitted the release of financial support from the Troika to Greece, which for the time being solves the problem of financing the renewal of the country’s public debt. The contagion that hit most euro zone countries, and which was reflected in higher sovereign rates, has been stopped. Tension has eased considerably since the beginning of 2012, and the risk that the euro zone will break up has been greatly reduced, at least in the short term. Nevertheless, the temporary relief afforded by the resolution of the Greek crisis has not interrupted the process by which the Great Recession that began in 2008 is being transformed into a very Great Recession.
First, the global economy, and especially the euro zone, remains a high-risk zone where a systemic crisis is looming once again. Second, the strategy adopted by Europe, namely the rapid reduction of public debt (which involves cutting public deficits and maintaining them below the level needed to stabilize debt), is jeopardizing the stated objective. However, since the credibility of this strategy is perceived, rightly or wrongly, as a necessary step in the euro zone to reassure the financial markets and make it possible to finance the public debt at acceptable rates (between 10% and 20% of this debt is refinanced each year), the difficulty of reaching the goal is demanding ever greater rigor. The euro zone seems to be pursuing a strategy for which it does not hold the reins, which can only fuel speculation and uncertainty.
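
As a reminder of the standard debt arithmetic at work here (our notation, not the note's): with a debt ratio b, a nominal interest rate r and nominal growth g, stabilizing the debt requires a primary surplus s* such that

```latex
% debt dynamics: b_t = debt/GDP, s_t = primary surplus/GDP
b_t = \frac{1+r}{1+g}\, b_{t-1} - s_t
\qquad\Longrightarrow\qquad
s^{*} = \frac{r-g}{1+g}\, b
```

Any rise in the sovereign rate r demanded by the markets thus raises the surplus required to stabilize the debt, which is why credibility is perceived as so decisive.
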
Our forecast for the euro zone points to a contraction of 0.4% in 2012 and growth of 0.3% in 2013 (Table 1). GDP per capita in the euro zone should decline in 2012 and stabilize in 2013. The UK will escape recession in 2012, but in both 2012 and 2013 annual GDP growth will remain below 1%. In the US, GDP growth will accelerate from 1.7% in 2011 to 2.3% in 2012. Although this growth rate is higher than in the euro zone, it is barely enough to trigger an increase in GDP per capita and will not lead to any significant fall in unemployment.
The epicenter of the crisis is thus shifting to the Old Continent and undermining the recovery in the developed countries. The United States and the United Kingdom, which face even greater deterioration in their fiscal positions, and thus more rapidly mounting debt, than the euro zone, are worried about the sustainability of their public debt. But because growth matters just as much as the deficit for the stability of the debt, the euro zone budget cuts that are weighing on activity there only add to the difficulties of the US and UK.
By emphasizing the rapid reduction of deficits and public debt, euro zone policymakers are showing that they anticipate a worst-case scenario for the future. Relying on so-called market discipline to rein in countries whose public finances have deteriorated only aggravates the sustainability problem by pushing interest rates up. Through the interplay of the fiscal multiplier, which is systematically underestimated when strategies and forecasts are drawn up, fiscal adjustment policies are reducing activity, which validates resignation to a worse “new normal”. Ultimately, this is a self-fulfilling process.
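
The self-defeating mechanism can be illustrated in one line (illustrative figures, not those of our forecast): a consolidation of size F reduces GDP by kF, where k is the fiscal multiplier, and hence reduces tax revenue by τkF, where τ is the tax-to-GDP ratio, so that

```latex
\Delta\text{deficit} = -F\,(1 - \tau k)
```

With k = 1.5 and τ = 0.5, a consolidation of 1 GDP point cuts the deficit by only 0.25 point while GDP falls by 1.5 points; if k was assumed to be smaller ex ante, ever more rigor is needed to hit the same target.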

A carbon tax at Europe’s borders: Fasten your seat belts!

By Éloi Laurent and Jacques Le Cacheux

How can the current deadlock in international climate negotiations be resolved? By an optimal mix of incentives and constraints. In the dispute that currently pits the European Union against the international air carriers, the EU is legitimately bringing this winning combination to bear by imposing what amounts to a carbon tax at its borders. It is brandishing a constraint, the threat of financial penalties, to encourage a long-overdue industry-wide agreement among the airlines to reduce their greenhouse gas (GHG) emissions.

The ongoing face-off with the carriers of several major countries, which, with the more or less open support of their governments, are contesting the application of these new regulations on GHG emissions from planes flying into or out of the EU, is from this perspective a crucial test. The issue carries considerable symbolic value, as it represents a first: all the airlines serving airports in the EU are subject to the new measure, regardless of their nationality. On 9 March, European officials reaffirmed their determination to maintain this regulation as long as no satisfactory solution has been proposed by the International Civil Aviation Organization (ICAO). However, 26 of the 36 member states of the ICAO Council, including China, the United States and Russia, have expressed their opposition to the new European requirement, advising their airlines not to comply. And the Chinese government is now threatening to block or outright cancel orders for 45 Airbus aircraft, including 10 A380 super-jumbos, if the European measure is not repealed.

Air emissions up sharply

GHG emissions attributable to air transport account for only about 3% of global and European emissions (about 12% of total transport emissions in the EU). But despite the progress made by aircraft manufacturers on energy intensity, these emissions, while still modest compared to road transport, have grown explosively over the last 20 years, much faster than those of any other sector, including shipping (see chart). They must be brought under control.

In addition, in most countries, in particular in the EU, airline fuel is not subject to the usual taxation applied to oil products, which obviously distorts competition with other modes of transport.

A robust legal framework

The new European regulations, which took effect on 1 January 2012, require all airlines serving any EU airport to acquire emission permits in an amount corresponding to 15% of the CO2 emissions generated by each trip to or from that airport. The measure is non-discriminatory, since it affects all airlines flying into or out of European air space, whatever their nationality or legal residence. This requirement, which is grounded in environmental protection, is therefore fully consistent with the Charter of the World Trade Organization (WTO).

The measure also complies with the European treaties and with the various provisions of international law in the field of civil aviation, as the Court of Justice of the European Union reiterated in its judgment of 21 December 2011, in a case brought by several US carriers challenging the measure’s legality. The legal framework for the new provision is thus robust.

Towards the death of air transportation?

The airlines, and the governments of the major greenhouse-gas-emitting countries hostile to the measure, justify their outright opposition by pointing to its poor timing, given the current economic climate of low growth and rising fuel costs, and to its excessive cost: the resulting rise in passenger air fares, they argue, would be likely to further depress an already fragile industry.

In reality, the measure is largely symbolic and the cost is almost insignificant. Judge for yourself: according to the Air France calculator approved by the French environmental agency, the ADEME, emissions per passenger amount to just over one tonne of CO2 for a Paris-New York return trip, and approximately 1.4 tonnes for Paris-Beijing. The current price of a tonne of carbon on the ETS, the European carbon market on which companies must buy emission permits, is just under 8 euros. The additional cost per ticket thus amounts, respectively, to about 1.2 euros for Paris-New York and 1.7 euros for Paris-Beijing! (Estimates using the ICAO calculator are even lower.)
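
A back-of-the-envelope check of these figures, using only the numbers quoted above (roughly 1.05 and 1.4 tonnes of CO2 per passenger, permits required for 15% of emissions, a carbon price of about 8 euros per tonne):

```python
# Back-of-the-envelope check of the ticket surcharge quoted in the text.
CARBON_PRICE = 8.0    # euros per tonne of CO2 on the ETS (approximate)
PERMIT_SHARE = 0.15   # share of emissions to be covered by permits

def ticket_surcharge(emissions_tonnes: float) -> float:
    """Extra cost per passenger implied by the EU scheme."""
    return emissions_tonnes * PERMIT_SHARE * CARBON_PRICE

for route, tonnes in [("Paris-New York (return)", 1.05),
                      ("Paris-Beijing (return)", 1.4)]:
    print(f"{route}: ~{ticket_surcharge(tonnes):.2f} euros")
# prints ~1.26 euros for Paris-New York and ~1.68 euros for Paris-Beijing
```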

Towards a trade war?

Given the current state of the legislation, the threats to cancel Airbus orders or similar retaliatory trade measures are obviously out of proportion to the economic impact of the tax on the European skies. To fear that this might trigger a “trade war” is also to forget that such a war has already been declared in industry, particularly in the aviation sector (with the multiplication of more or less disguised subsidies, including in Europe, and with the use of exchange rates as a veritable weapon of industrial policy). Furthermore, agreements or cancellations of orders in this sector are in any case very often influenced by the political context, sometimes for dubious reasons (as in the case of diplomatic reconciliation with relatively distasteful regimes). In this case the cause, the defence of the integrity of Europe’s climate policy, is legitimate.

The various threats and blackmail attempts taken up by the pressure groups targeted, in this case air passengers, are intended to sway governments for short-sighted gains. They target particular countries, foremost among them Germany and Poland, which are currently dragging their feet over the EU Commission’s proposal to accelerate the pace of European emissions reduction by raising the 2020 reduction target from 20% to 30% (compared to 1990 levels). As is their right, Germany and Poland have been following approaches to the climate issue that accord, respectively, with a growth strategy based on exports and an energy strategy based on coal. In both cases these are national decisions that should not take precedence over the European approach. From the perspective of Europe’s interests, there is therefore no valid reason to yield to these pressures, even if some member states become involved.

By confirming its determination, the EU can provide proof that leadership by example on the climate can go beyond moral example-setting and lead to actual changes in economic behaviour. The EU can show everyone that, despite the impasse at the global level, a regional climate strategy can still be effective. If its approach is upheld, the success of the European strategy, which consists of encouraging cooperative behaviour under the threat of credible sanctions, would point to a way of breaking the deadlock in climate negotiations.

The European Union will, in the coming weeks, be passing through a zone of turbulence (yet another) on the issue of its border carbon tax. It would be legally absurd and politically very costly to make a U-turn now: instead, let’s fasten our seat belts and wait calmly for the stop light to change.

Economic policy-making tools for pre- and post-crisis periods

by Zakaria Babutsidze and Mauro Napoletano

The worldwide financial crisis has called into question the relevance of the economic models currently used by central bankers and macro analysts. Recent economic events seem better described by models featuring boundedly rational, heterogeneous agents in which markets do not necessarily clear at all times. Agent-Based Models (ABMs) are a class of models that embed all of these features, and therefore qualify as a promising alternative to conventional models.

An economic crisis, such as the current one, marks a clear divide between what comes before and what comes after. Economic policies, for instance, can be split into two groups: pre-crisis and post-crisis policies. While the latter aim at helping the economy move out of the crisis to a more favourable state, the former concentrate on averting it.

Currently popular economic models can, to an extent, address post-crisis policies. These models view economies as closed systems that move along one of a few balanced equilibria. A modeller can introduce a large external shock into the system, interpret it as the crisis, and then discuss policies to help the system move back to the previous (or even a better) equilibrium. However, there is a problem with these policies. The main assumption of modern mainstream economics is hyper-rationality: economic agents (including households) possess complete information about the future of the economy, and by acting rationally on this information they bring about the very future they foresee.

Modellers argue that this is reasonable even though we know that people do not optimize. The argument is that, due to market selection, only the best-performing agents survive. As optimization guarantees the best response to the current situation, every agent present at the equilibrium has to behave “as if” she were optimizing. Notice that this argument rests on the notion of equilibrium and says nothing about how that equilibrium is reached. Now recall that modellers had to assume a large shock knocking the system out of equilibrium in order to discuss the crisis. The hyper-rationality approximation therefore cannot properly describe agent behaviour after a crisis.

Concerning pre-crisis policies, the problems are even greater. Current mainstream models exclude the possibility of generating crises endogenously. Yet it is well known that modern economic crises are rarely triggered by external shocks. They are generated endogenously by the system. They emerge from factors (such as non-price interactions, localized learning processes, outrageous banking and investment practices, etc.) that are simply assumed away in mainstream modelling. These models are therefore inherently inadequate for discussing policies aimed at crisis prevention.

We believe that an economic tool that is to succeed in designing policies to avert economic crises requires three characteristics. First, it has to take account of individual behaviour. Second, it has to model that behaviour in a way that is consistent not only with equilibrium but also with non-equilibrium states. Finally, it has to allow for the possibility of endogenously generated crises.

Currently popular policy-making tools fail in at least one of these three respects. Take, for example, Dynamic Stochastic General Equilibrium (DSGE) models, the workhorse of modern monetary policy. This modelling strategy conforms to the first requirement listed above: DSGE is a micro-founded modelling strategy that replaced earlier techniques which abstracted from individual agent behaviour and were thus vulnerable to the Lucas (1976) critique.[1]

Alas, DSGE fails in the two other respects. Microeconomic behaviour is based on perfect foresight, which requires the hyper-rational agents mentioned above and therefore, as argued, does not describe agent behaviour well during out-of-equilibrium dynamics. In addition, the stochasticity of the system allows only for small perturbations, so large shocks (such as crises) have to be injected exogenously. Perhaps these failures are the cause of the difficulties DSGE modellers have had in predicting and managing the current crisis, as acknowledged by some central bankers (Trichet, 2010; Kocherlakota, 2010).

It is true that DSGE models take into account micro-behaviour as well as institutions (see for example Smets and Wouters 2003, the model widely used by the European Central Bank). However, what they fail to take into account is the possibility of the endogenous (co-)evolution of these structures, and the heterogeneity and non-price interactions among economic agents that can lead the system to break down without external interference.

One promising tool for economic policy design goes under the name of Agent-Based Modelling (ABM). The characteristics of this approach are discussed at greater length in a recent OFCE briefing paper by Napoletano, Gaffard and Babutsidze (2012). In contrast to mainstream economics (such as DSGE), ABM can flexibly model the relevant processes as dynamical systems of heterogeneous agents who interact through price and non-price channels. The approach treats time as a key variable, in contrast to orthodox models. Take the crisis again. In mainstream modelling, at the moment of crisis the new equilibrium instantly becomes known to everyone, and perfectly rational individuals adjust their choices accordingly; this drives the system to the new equilibrium. In an ABM, individuals receive no information about the equilibrium to which the system is supposed to converge, and each has to navigate in its own way. This feature allows the plethora of learning processes (which, according to Howitt 2012, are extremely scarce in modern macroeconomic theory) to be taken on board as well.
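
To make this concrete, here is a minimal agent-based sketch in the spirit of Kirman's classic “ants” herding model (our choice of illustration, not the model of the briefing paper): boundedly rational agents switch between two market opinions through random pairwise meetings, and aggregate sentiment flips between extremes without any external shock.

```python
# Minimal ABM sketch in the spirit of Kirman's herding model (illustrative).
# N agents hold one of two opinions; opinion changes spread through random
# meetings. The aggregate swings endogenously: no external shock is needed.
import random

random.seed(42)

N = 100          # number of agents
EPS = 0.002      # probability of an autonomous change of opinion
T = 50_000       # number of simulated meetings

k = N // 2       # number of agents currently holding opinion "A"
path = []
for t in range(T):
    if random.random() < EPS:
        # a randomly picked agent switches opinion spontaneously
        k += 1 if random.random() < (N - k) / N else -1
    else:
        # two agents meet at random; one may convert the other
        i_is_A = random.random() < k / N
        j_is_A = random.random() < k / N
        if i_is_A and not j_is_A:
            k -= 1           # the A-agent is converted to B
        elif j_is_A and not i_is_A:
            k += 1           # the B-agent is converted to A
    k = min(max(k, 0), N)
    path.append(k / N)

# The share of A-opinion spends long spells near 0 or 1 and flips abruptly:
# endogenous "regime changes" emerging from micro-level interactions.
print("share holding opinion A at the end:", path[-1])
```

Changing the behavioural rule, the meeting structure or the degree of heterogeneity is a matter of editing a few lines, which is precisely the flexibility invoked below.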

ABM concentrates on open-ended dynamics and treats equilibrium (defined as an ergodic state of the system) as an emergent and optional outcome (Leijonhufvud 2011). While current mainstream modelling is based on a centralized information-processing structure that is fed all the information available in the system, ABM takes a bottom-up approach: it starts from realistic micro-foundations (in contrast to DSGE) and analyses the resulting behaviour of the model at higher levels. The dynamics of aggregate variables result from a complex, continuously (and endogenously) changing micro-structure. This yields substantial advantages in modelling policy at the macro level (LeBaron and Tesfatsion 2008), as well as at the industry (Chang 2009) and market (Duffy and Unver 2008) levels.

Using agent-based tools, a modeller can specify the agents’ micro behaviour and understand how the dynamics of the system lead to a critical state and a subsequent breakdown (an endogenously generated crisis). This is a common occurrence in physical systems, and agent-based approaches are routinely used for their analysis. Using such a model, one can discuss policies to steer the path of the economy away from the critical state. From this perspective, ABM has a clear advantage over orthodox approaches in discussing pre-crisis policies.

Another substantial advantage of the methodology is the ease with which it can be implemented in a computational environment. Behavioural rules can be handed to the agents in computer simulations and the resulting outcomes observed. This is important for two reasons. First, it makes models easily understandable for policy-makers who are not necessarily proficient in the mathematics on which current orthodox methods heavily rely (Uri Wilensky, the developer of NetLogo, the most popular computational environment for ABM, repeatedly makes this point). Second, behavioural rules (and other settings) can easily be adjusted to fit the problem at hand. Due to their focus on equilibrium, mainstream models are less flexible and consequently less appropriate for policy-making.

However, the approach has disadvantages. A detailed discussion of its shortcomings is presented in the above-mentioned OFCE briefing paper. Here we concentrate on the one shared by all non-equilibrium approaches: ABM does not (and cannot) provide a comprehensive analysis of all the paths the model allows for. Once you leave equilibrium, the number of paths an economic system can take becomes infinite. Therefore, in most cases, comprehensive analysis is not feasible.

While this criticism is relevant in the face of commonly accepted practice in economic science, it is irrelevant to ABM’s power as a policy-making tool. Policy makers are not concerned with all possible scenarios in all possible types of economies. They have a very specific problem at hand: they operate in a specific country or region, they are given a very specific initial condition (the one currently prevailing in the economy), and they want to achieve a well-defined goal with a specific policy tool. Agent-Based Modelling gives them the opportunity to fine-tune the model to their specific situation and then analyse the effects of a specific policy instrument. The policy instrument controls one (or very few) parameters of the model. Given a specific market or economy and specific initial conditions, an exhaustive analysis of this policy tool can be performed and a welfare-improving (if not optimal) policy can be designed, as the sketch below illustrates.
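
A sketch of this policy exercise (the model and the policy parameter are placeholders, not a calibrated economy): fix the initial conditions, sweep the single instrument over a grid, and average each setting over many stochastic runs.

```python
# Hypothetical policy sweep: one instrument, Monte Carlo over random seeds.
import random
import statistics

def simulate(instrument: float, seed: int, T: int = 500) -> float:
    """Toy ABM run: average activity over T periods (placeholder rule)."""
    rng = random.Random(seed)
    activity, total = 1.0, 0.0
    for _ in range(T):
        shock = rng.gauss(0.0, 0.05)
        # activity drifts with demand shocks, damped by the instrument
        activity = max(activity + shock - instrument * (activity - 1.0), 0.0)
        total += activity
    return total / T

grid = [0.05, 0.10, 0.20, 0.40]          # candidate settings
for setting in grid:
    runs = [simulate(setting, seed) for seed in range(50)]
    print(f"instrument={setting:.2f}  "
          f"mean activity={statistics.mean(runs):.3f}  "
          f"dispersion={statistics.stdev(runs):.3f}")
```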

The merits of every modelling approach can be debated. But allowing diversity in approaches is bound to make policy discussions more stimulating, and is likely to help the discipline avert the crisis that is now seen as a crisis of the discipline itself (Kirman 2010).

References

R. Lucas (1976) Econometric Policy Evaluation: A Critique. In K. Brunner and A. Meltzer (eds.) The Phillips Curve and Labor Market. Carnegie-Rochester Conference Series on Public Policy, 1:19–46.

J.-C. Trichet (2010) Reflections on the nature of monetary policy non-standard measures and finance theory. Opening address at the ECB Central Banking Conference.

N. Kocherlakota (2010) Modern Macroeconomic Models as Tools for Economic Policy. Banking and Policy Issues Magazine, Federal Reserve Bank of Minneapolis.

F. Smets and R. Wouters (2003) An Estimated Dynamic Stochastic General Equilibrium Model of the Euro Area. Journal of the European Economic Association, 1:1123-1175.

M. Napoletano, J.-L. Gaffard and Z. Babutsidze (2012) Agent Based Models: A New Tool for Economic and Policy Analysis. OFCE Briefing Paper No. 3, March 15.

P. Howitt (2012) What have the central bankers learned from modern macroeconomic theory? Journal of Macroeconomics, 34:11-22.

A. Leijonhufvud (2011) Nature of the Economy. CEPR Policy Insight No. 53.

B. LeBaron and L. Tesfatsion (2008) Modeling macroeconomies as open-ended dynamic systems of interacting agents. American Economic Review: Papers & Proceedings, 98:246-250.

M. -H. Chang (2009) Industry Dynamics with Knowledge-Based Competition: A Computational Study of Entry and Exit Patterns. Journal of Economic Interaction and Coordination, 4:73-114.

J. Duffy and U. Unver (2008) Internet Auctions with Artificial Adaptive Agents: A Study on Market Design. Journal of Economic Behavior and Organization, 67:394-417.

A. Kirman (2010) The economic crisis is a crisis for economic theory. CESifo Economic Studies, 56:498-535.


[1] However, DSGE models downplay the possibility of multiple equilibria. Thus, their ability to overcome the Lucas critique by introducing micro-foundations presents only a limited advantage.




Yes, the national accounts will be revised after the election

By Hervé Péléraux and Lionel Persyn[1]

In a Europe heading ever more clearly towards recession, in mid-February the INSEE reported a 0.2% rise in France’s GDP. This fourth-quarter performance was surprising, as it contrasts sharply with the deterioration in the economic climate since summer 2011, which had pointed to less favourable GDP growth than announced.

The current figures from the national accounts are, however, not set in stone. An OFCE note describes the procedure that begins with the release of the provisional results, the starting point in the process of revising the accounts. This revision is spread over several years, involving first the alignment of the quarterly accounts with the annual accounts, then the revision of the annual accounts themselves (the final version for 2011 will be published in May 2014). The last changes come with the changeover of the national accounts base, which provides an opportunity to introduce methodological innovations aimed at more accurate estimates of the past.

The enigma of the fourth quarter of 2011 may be resolved as future revisions are worked out. It is useful to refer to past experience to try to identify the profile of the coming adjustments and to draw the likely implications for the current period. Since 1987, revisions to the accounts appear to have been pro-cyclical: the preliminary figures are mostly revised upwards in periods of recovery or rapid growth, and downwards in cyclical downswings. In some major cyclical episodes, the average revisions are significant and could affect the economic diagnosis.

This is what happened in 2008. After the INSEE announced a negative result of -0.3% for the second quarter, the initial estimate for the third quarter was a positive 0.1%, which for a while pushed aside the prospect that the French economy was entering a recession. Subsequent assessments gave a more dramatic turn to GDP’s trajectory, the current estimates for the two quarters being -0.7% and -0.3% respectively. Had these figures been known at the time, they would probably have pushed forecasts downwards by fully revealing the severity of the impact of the financial crisis on the real economy.

[1] At the time this note was written, Lionel Persyn was an intern at OFCE and a doctoral candidate at the University of Nice at Sophia Antipolis.

Positions of French and German banks in the European interbank lending network

by Zakaria Babutsidze

Recent desperate cries for help from French and other European banks raise the question of exactly what kind of trouble they have managed to get themselves into, and how much. The question can be approached from many angles. Here I try to gain insight by analyzing the cross-border interbank lending network, the network that moves much-needed liquidity across sovereign borders within the Eurozone. Due to high interconnectedness, banks in each country affect, and are affected by, the banks in all other countries, directly or indirectly. The banks of different countries play different roles in this vital network: some are net creditors, others net debtors. In this post I take on the challenge of contrasting the behavior of the two largest creditors in the system, the banking sectors of France and Germany, which are often blamed for reckless lending practices.

Inspired by The New York Times’ visualization of the network, I use data on Consolidated Banking Statistics issued in December 2011 by the Bank for International Settlements. The data comprise the claims of banks in a given country vis-à-vis banks in other countries as of June 2011. The numbers do not include holdings of sovereign debt. The data are available for only 10 of the 17 Eurozone countries: France, Germany, Italy, Spain, the Netherlands, Austria, Ireland, Belgium, Portugal and Greece. As I am interested in the role of national financial systems in the European network, I netted out the counter-claims across borders and worked with the volume of the net claims of each European country’s banking sector vis-à-vis the others.

The resulting network connects each of the 10 countries to the other nine. Each connection has a direction that reflects the current debt balance of one country’s banks vis-à-vis another country’s banks. I apply simple weighted network analysis to the data in order to dissect the European interbank lending network, using the volume of the mismatch between counterpart claims to weight the links. To make the methodology clearer, consider a hypothetical example. Banks of country A owe 100 euros to the banks of country B; at the same time, banks of country B owe 40 euros to the banks of country A. The mismatch between the countries then amounts to 60 euros, which country A owes to country B. This determines the direction of each link in the network, i.e. who is the creditor and who is the debtor. The value of the mismatch is taken into account as follows: if country C owes country D 30 euros, the link between A and B discussed above is twice as strong as that between C and D.
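
The netting step can be written down directly (a sketch with the toy figures above, not the BIS data):

```python
# Net bilateral gross claims into a directed, weighted network in which
# each edge points from the net debtor to the net creditor.
import networkx as nx

# gross_claims[(x, y)] = claims held by banks of x on banks of y
gross_claims = {("B", "A"): 100.0,   # A's banks owe B's banks 100
                ("A", "B"): 40.0,    # B's banks owe A's banks 40
                ("D", "C"): 30.0}    # C's banks owe D's banks 30

G = nx.DiGraph()
for (creditor, debtor), claim in gross_claims.items():
    net = claim - gross_claims.get((debtor, creditor), 0.0)
    if net > 0:                      # keep only the positive net position
        G.add_edge(debtor, creditor, weight=net)

for u, v, d in G.edges(data=True):
    print(f"{u} owes {v} {d['weight']:.0f} (net)")
# A owes B 60 (net), C owes D 30 (net): the A-B link is twice as strong.
```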

A quick glance at the network visualization in Figure 1 is enough to notice the special role French and German banks play in the system. Banks in these two countries are the most exposed to problems in the other European countries.

Recognizing that the European cross-border interbank lending network is tightly embedded in the global interbank lending network, I augment the data with the three largest global players: the United Kingdom, the United States and Japan. In what follows I report two sets of results: one for the European interbank lending network in isolation (which I call the closed network), the other for the extended (open) network that includes the three large international players. In the latter case, the non-Eurozone countries enter the calculations but are excluded from the rankings presented.

There are a few important characteristics of the network we can look at. I concentrate on country rankings with respect to statistics describing each country’s banks’ access to interbank loans, their importance in facilitating the flow of interbank liquidity, and their overall role as lenders or borrowers.

The measure that allows us to rank the countries by their access to loans is closeness centrality. This statistic measures the distance from a country’s banks to the banks of all the other countries in the network. Higher centrality implies shorter distance, which in turn means that banks do not have to go far in search of financial resources. Panel A of Table 1 presents the ranking of countries by closeness centrality. When the European network is considered in isolation from the rest of the world, it is Germany that has the easiest access to liquidity, while France does not even appear in the first half of the list. However, when the European network is regarded as embedded in the global interbank lending network, France tops the list, with Germany a close second. This suggests that French banks go mainly outside the Eurozone to borrow money, while German banks balance their borrowing between European and non-European banks.

Panel B of Table 1 presents rankings by betweenness centrality, which measures how much control a country’s banks have over the flow of liquidity through the network. This statistic calculates the frequency with which a country appears on the routes that money has to travel from each country to every other country. Higher centrality means that the country’s banking system lies on a large number of routes between pairs of other countries. In this respect the closed European network is largely independent of the influence of France and Germany: banks in the system can reach each other without necessarily going through Germany, or even France. The major brokers within the Eurozone appear to be the Dutch banks. Once extra-European links are considered, French banks lead the board, while Germany does not appear in the top five. France’s top position in the open network implies that it plays the role of broker between European and non-European banks.

The next measure is a country’s in-degree in the weighted network. This statistic essentially measures how important a creditor a given country is for the other members of the network. As the system’s largest creditors, France and Germany swap places as we move from the closed to the open network. From this we can conclude that Germany, although a larger creditor than France, has a heavier non-European presence. This is clearly good for German banks in such turbulent times for Europe; French banks, by contrast, are more exposed to European risk.

Finally, eigenvector centrality measures the importance of a country’s banks in the system more accurately: it takes into account not only creditor and debtor positions in the network but also the identity of the countries with which a given country has ties. According to this measure, French banks play an absolutely central role in the network under discussion; Germany comes second once we consider the open network. The difference between France and Germany is driven by differences in their European/non-European credit ratios as well as by differences in the composition of their European credit. The most notable difference is France’s extreme exposure to troubled Italy.
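
For concreteness, here is how these four measures can be computed on a toy net-claims network (country codes and figures are illustrative, not the BIS data; PageRank is used as a numerically robust stand-in for eigenvector centrality):

```python
import networkx as nx

G = nx.DiGraph()
# edges point from net debtor to net creditor; weight = net claim volume
G.add_weighted_edges_from([
    ("IT", "FR", 60), ("ES", "FR", 40), ("GR", "FR", 20),
    ("IT", "DE", 30), ("NL", "DE", 25), ("FR", "DE", 10),
    ("PT", "ES", 15), ("GR", "PT", 5),
])

# Closeness and betweenness read weights as path costs, so invert the
# claim volumes: a strong financial link should count as a short distance.
for u, v, d in G.edges(data=True):
    d["dist"] = 1.0 / d["weight"]

measures = {
    "closeness":   nx.closeness_centrality(G, distance="dist"),
    "betweenness": nx.betweenness_centrality(G, weight="dist"),
    "in-degree":   dict(G.in_degree(weight="weight")),  # credit extended
    "pagerank":    nx.pagerank(G, weight="weight"),
}
for name, stat in measures.items():
    ranking = sorted(stat, key=stat.get, reverse=True)
    print(f"{name:>11}: {ranking[:3]}")
```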

A broader look at Table 1 allows us to draw an additional conclusion about the behavior of the French and German banking systems. The table makes it apparent that going from the closed to the open network (which adds the American, British and Japanese banking systems to the picture) affects France’s positions much more than Germany’s. This implies that German banks keep their activity balanced between European and non-European partners: they diversify their risk more efficiently. French banks, meanwhile, put all their eggs in one basket, Europe, which might not be the best strategy to pursue.

All in all, the present analysis shows that the prize for reckless lending goes to French rather than German banks. French banks are central in the network by virtually any measure. In the visualization in Figure 1, French credit can, directly or indirectly, reach all countries except Germany and the Netherlands, while German credit extends to only four countries. And, importantly, that list of four does include Italy.




On the taxation of household income and capital

By Henri Sterdyniak

The idea is widespread that in France unearned income benefits from an especially low level of taxation and that the French system could be made fairer simply by raising it. In an OFCE Note, we compare the taxation of capital income with that of labour income, and show that most capital income is taxed just as heavily. The reforms adopted in 2012 increase the taxation of capital income further, so there is little room for manoeuvre. However, there are tax loopholes and a few exceptions, the most notable being the current non-taxation of imputed rent (which benefits households that own their own home).

The table below compares the marginal tax rates for different types of income. The effective economic tax rates (including the “IS” corporate income tax, non-contributory social charges, the CSG social levy and social security taxes) are well above the headline rates. Interest, rental income, dividends and capital gains, where actually taxed, are taxed at approximately the same level as the highest salaries. It is therefore wrong to claim that capital income is taxed at reduced rates: when it is actually taxed, it is taxed at high levels.

The headline tax rate on capital income increased from 29% in 2008 to 31.3% in 2011, due to a 1.1 percentage point increase in social levies to finance the RSA benefit, a 1 point increase in the flat-rate withholding tax and a 0.2 point increase to fund pensions. The government has thus financed the expansion of social policy by taxing capital income. This rate rises to 39.5% for interest and to 36.5% for dividends on 2012 income.

Should we advocate a radical reform, namely subjecting all capital income to the personal income tax schedule? This might be justified for public perception (to show clearly that all income is taxed alike), but not on purely economic grounds.

With respect to interest income, this would mean ignoring inflation. The 41% bracket would correspond to a levy of 108% on the real income of an investment yielding 4% with an inflation rate of 2%. For dividends, one must not forget that the income in question has already borne the IS tax; the 41% bracket (with the 40% allowance eliminated) would correspond to a total tax of about 70%. We must make a policy choice between two principles: a single economic tax rate for all income (which paradoxically would lead to preserving a special tax regime for capital income), or higher taxation of capital income, since it goes mostly to the better-off and is not the fruit of effort (which paradoxically would lead to subjecting it to the same schedule as labour income, while forgetting the IS tax and inflation).
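
These two figures can be reconstructed as follows (our assumptions: roughly 13 points of social levies on top of the 41% bracket, i.e. a total marginal rate of about 54%, and the standard 33 1/3% corporate tax):

```latex
% Interest: nominal return i = 4\%, inflation \pi = 2\%, so real income
% is i - \pi = 2\%; a levy of t = 54\% on the nominal return represents
t \cdot \frac{i}{i - \pi} = 0.54 \times \frac{0.04}{0.02} = 108\%
% of the real income.

% Dividends: of 100 of corporate profit, the IS takes 33.3; taxing the
% 66.7 distributed at t = 54\% with no 40\% allowance gives in total
33.3 + 66.7 \times 0.54 \approx 69 \;\approx\; 70\%
```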

The problem lies above all in schemes that allow tax avoidance. For many years, the banks and insurance companies managed to convince the public authorities that it was necessary to exempt income from household financial capital. Two arguments were advanced: to prevent the wealthy from moving their capital abroad, and to promote long-term and high-risk savings. Exemptions were thus made for PEA funds, PEP funds and UCITS mutual funds. Governments are gradually pulling back from these exemptions. Two principles should be reaffirmed: first, all capital income should be subject to taxation, and tax evasion should be combated by European agreements on harmonizing tax systems; second, it is the responsibility of issuers to convince investors of the value of the investments they offer, and the State should not fiscally favour any particular type of investment.

There remains the possibility that wealthy families will avoid taxes on capital gains through donations to children (during their lifetime or upon their death) or by moving abroad before the taxation takes place. A wealthy shareholder can, for instance, hold his securities in an ad hoc company that receives his dividends, and use the company’s securities as collateral for bank loans providing him with the money he needs to live on. The shareholder thus declares no income, and by later passing the company’s securities on to his children ensures that the dividends and capital gains he has received are never subject to income tax.

The other black hole in the tax system lies in the non-taxation of imputed rent. It is not fair that two families with the same income pay the same tax if one has inherited an apartment while the other must pay rent: their ability to pay is very different.

Two measures thus appear desirable. The first is to eliminate all schemes that help people avoid the taxation of capital gains, and in particular to ensure the payment of tax on any unrealized capital gains upon transmission by inheritance or donation or when moving abroad. The second is gradually to introduce a tax on imputed rent, for example by charging the CSG/CRDS taxes and social security contributions to homeowners.

Having done this, a policy choice would be needed:

– Either eliminate the ISF wealth tax, as all income from financial and property capital would then clearly be taxed at 60%.

– Or consider that it is normal for large estates to contribute as such to the running costs of society, regardless of the income they provide. With this in mind, the ISF would be retained, without comparing its amount to the income from the estate, since the very purpose of the ISF would be to demand a contribution from the assets themselves.

Women’s Day

On the occasion of 8 March, we would like to remind our readers that, together with Sciences-Po, the OFCE has developed the specialist Research Programme for Teaching and Knowledge on Gender Issues (PRESAGE).

A number of posts on this blog have taken up the subject of occupational equality between men and women.

Is our health system in danger? Reorienting the reform of health management (4/4)

By Gérard Cornilleau

Health is one of the key concerns of the French. Yet it has not been a major topic of political debate, probably due to the highly technical nature of the problems involved in the financing and management of the health care system. An OFCE note presents four issues that we believe are crucial in the current context of a general economic crisis. The last of these concerns hospital financing, which underwent radical change in 2005 with the launch of the T2A activity-based pricing system, reintroducing a direct financial relationship between the hospitals’ activity and their resources. It has reinforced the importance and power of the “managers”, which could give the impression that hospitals were henceforth to be regarded as businesses subject to the dictates of profitability.

The reality is more complex, as the T2A system aims less at making hospitals “profitable” than at rationalizing the distribution of expenditure among hospitals by linking their revenue to their activity, measured by the number of patients cared for, weighted by the average cost of treating each patient. Paradoxically, the risk of this type of financing is that it could drive spending up by encouraging the multiplication of treatments and procedures. Indeed, the HCAAM report for 2011 (op. cit.) notes that the 2.8% growth in hospital fee-for-service expenditure in 2010 breaks down into a 1.7% increase attributable to the number of stays and a 1.1% increase attributable to a “structural effect” linked to a shift in activity towards better-reimbursed treatments [1].

This development is worrying, as it could lead to a rise in hospital costs for no reason other than budget needs. The convergence of fees between private clinics and public and non-profit hospitals is no guarantee against this tendency, as the incentives for private clinics are no different. Here we reach the limits of management by competition, even in a notional form: its flaws are too numerous for it to be the only means of regulation and management.

Public hospitals also receive lump-sum allocations to carry out the general-interest and training missions assigned to them. This lump-sum envelope represented approximately 14% of their actual budget in 2010 [2]. It funds teaching and research in the hospitals, participation in public health actions, and the care of specific populations such as patients in difficult situations. Unlike reimbursements tied to the application of the fee schedule, the corresponding budget amounts are capped and easy to adjust.

Consequently, budget adjustments are often made by setting aside a portion of these allocations and revising the amounts allocated in line with changes in total hospital expenditure. In 2010, for instance, the overrun of the hospitals’ spending target for that year, estimated at 567 million euros, resulted in a 343 million euro reduction in the budget allocated to the general-interest missions, an adjustment of about -4.2% from the original budget (HCAAM, 2011).

Regulation of hospital expenditure has thus tended to focus on the smallest budget share, which is also the easiest for the central authorities to control. While it is possible to revise the reimbursement rates of the T2A fee schedule, this takes time to affect the budget, and the targets are harder to hit. The system for managing hospital budgets is thus imperfect: it runs the dual risk of uncontrolled slippage in the expenditures governed by the T2A system and a drying up of the envelopes that finance expenditures which generate no billing. There is no magic bullet for this problem. Returning to the previous system of a global budget financing total expenditure would obviously not be satisfactory when the T2A system has improved the link between hospital activity and financing; nor is it acceptable to keep placing the burden of budget adjustments solely on the envelopes for the general-interest missions and investment, especially in a period of austerity. The general trend is to minimize the scope of the lump-sum envelope (Jégou, 2011) and to maximize the scope of fee-for-service charging.

Pricing is not, however, always well suited to the management of complex chronic conditions. One could therefore ask whether, conversely, a mixed system of reimbursement, with both a fixed and a proportional component, would not be more effective, while facilitating overall regulation of the system through a larger lump-sum envelope. The fixed part could, for example, be determined on the basis of the population covered (as in the old global-budget system); a minimal formalization is sketched below. This development would also have the advantage of curbing the obsessive managerial spirit that seems to have contributed significantly to the deterioration of the working atmosphere in hospitals.
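
One minimal way to write such a mixed scheme (our notation, purely illustrative):

```latex
% B_h = budget of hospital h; N_h = population covered; T_s = GHS tariff
% of stay s; \alpha and \beta set the fixed/proportional balance.
B_h = \alpha\, N_h + \beta \sum_{s \in h} T_s
% \beta = 1, \alpha = 0 is pure T2A; \beta = 0 recovers the old global
% budget; intermediate values damp the incentive to multiply treatments
% while preserving a link between activity and resources.
```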

[1] The patients treated by the hospital are classified into a Groupe Homogène de Malade (GHM, a diagnosis-related group) based on the diagnosis. For each stay of a given patient, the hospital is paid on the basis of a fee set in the Groupe Homogène de Séjours (GHS, a stay-related group), which refers to the patient’s GHM and to the treatment that they receive. In theory this system can associate an “objective” price with the patient treated. In practice, the classification into a GHM and GHS is very complex, particularly when multiple pathologies are involved, and the classification process can be manipulated. As a result, it is impossible to determine precisely whether the shift towards more expensive GHS classifications reflects a worsening of cases, the manipulation of the classifications, or the selection of patients who are “more profitable”.

[2] The credits, called “MIGAC” (for general interest missions and aid to contracting), came to 7.8 billion euros in 2010 out of total hospital expenditure in the “MCO” field (Medicine, Surgery, Obstetrics, Dentistry) of 52.7 billion; see HCAAM, 2011.

Is our health system in danger? Reforming the reimbursement of care (3/4)

By Gérard Cornilleau

Health is one of the key concerns of the French. Yet it has not been a major topic of political debate, probably due to the highly technical nature of the problems involved in the financing and management of the health care system. An OFCE note presents four issues that we believe are crucial in the current context of a general economic crisis: the third issue, presented here, concerns the reimbursement of health care, in particular long-term care, and the rise in physician surcharges.

The reimbursement of care by the French Social Security system currently varies with the severity of the illness: long-term care, which corresponds to more serious conditions, is fully reimbursed, whereas the reimbursement of routine care is tending to shrink under a variety of non-reimbursed fixed fees that keep rising. On top of this structural trend comes the rise in non-reimbursed doctor surcharges, which reduces the share of expenditure financed by Social Security. As a result, health insurance covers only 56.2% of routine care, while the reimbursement rate for patients with long-term illnesses (“ALD” illnesses in French) is 84.8% for primary care [1]. This situation has a number of negative consequences. It can lead people to forego certain routine care, undermining the prevention of more serious conditions. It increases the cost of supplementary “mutual” insurance, which is paradoxically taxed to help compulsory insurance on the grounds that public coverage of long-term illness is high. Finally, it focuses attention on how the scope of long-term illness is defined, which is complicated: drawing up the list of conditions entitling patients to full reimbursement requires considering both the “degree” of severity and the cost of treatment. The issue of multiple conditions covered simultaneously under both routine care and long-term illness is a bureaucratic nightmare that generates uncertainty and spending on relatively ineffective management and controls.

This is why some suggest replacing the ALD system with a “health shield” that would provide full reimbursement of all spending above a fixed annual threshold. Beyond a certain level of out-of-pocket expenses after reimbursement by compulsory health insurance (e.g. corresponding to the current co-payment, which averaged about 500 euros per year in 2008 [2]), Social Security would assume full coverage. Such a system would automatically cover the bulk of the expenses associated with serious diseases without going through the ALD classification.
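
The shield rule itself is simple enough to state in a few lines (the 500-euro threshold is the 2008 average co-payment cited above; the parameterization is ours):

```python
# "Health shield": the patient's annual out-of-pocket spending is capped
# at a fixed threshold; Social Security covers everything beyond it.
def out_of_pocket(annual_copay: float, threshold: float = 500.0) -> float:
    """Patient's annual payment under the shield, capped at `threshold`."""
    return min(annual_copay, threshold)

# A serious illness generating 4,000 euros of co-payments costs the
# patient only the cap; routine care below the cap is paid in full.
for copay in (120.0, 500.0, 4_000.0):
    print(f"co-payments {copay:7.0f} -> patient pays {out_of_pocket(copay):5.0f}")
```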

One could consider modulating the threshold of out-of-pocket expenses according to income (Briet and Fragonard, 2007), or the reimbursement rate, or both. This possibility is typically invoked to limit the rise in reimbursed expenses. It raises the usual problem of maintaining the support of the better-off for social insurance when it would be in their interest to pool health risks through private insurance with premiums proportional to risk rather than to income.

The establishment of a health shield also raises the issue of the role of supplementary insurance. Historically, the mutual insurance funds “completed” public coverage by reimbursing, in full or nearly so, items in the basket of care not covered by basic health insurance (dental prostheses, eyeglass frames, sophisticated optical care, private hospital rooms, etc.). Today these funds function increasingly as “supplementary” insurance that tops up public reimbursement of health expenses across the board (coverage of the patient co-payment, partial refund of doctor surcharges). The transition to a health shield would limit their scope to expenses below the fixed threshold. It is often assumed that if mutual insurers abandoned their current role of blind co-payment of care expenditures, they could play an active role in promoting prevention, for example by offering differentiated premiums based on the behaviour of the insured [3]. But where would their interest lie if the shield confined them to expenses below the threshold, with everything above it covered by public insurance? Even if a substantial co-payment persisted beyond the threshold, because of doctor surcharges for example, they would undoubtedly remain relatively passive, and little would change from today’s situation, which keeps them away from the bulk of the coverage of serious and expensive diseases.

A system in which public insurance alone covers a clearly defined basket of care is surely better. This would require that the health shield threshold increase with income, with the poorest households receiving full coverage from the first euro. If affluent households decided to self-insure for expenses below the threshold (which is likely if the latter is under 1,000 euros per year), the mutual insurance funds might withdraw almost entirely from the reimbursement of routine care. They could instead concentrate on expenditures outside the field of public health insurance, which in practice means dental prostheses and corrective optics. They could intervene in these fields more actively than now to structure the delivery and supply of care, and their role as principal payer there would justify delegating to them the responsibility for dealing with the professions involved. However, this solution implies a system of public coverage giving the poorest strata access to the care not covered by public insurance (in a form close to France’s current CMU universal coverage, which should however be extended and made more progressive). There is thus no simple solution to the question of the relationship between public insurance and supplementary private insurance.

The merger of the two systems should also be considered, which in practice means the absorption of the private by the public. This would have the advantage of simplifying the system as a whole, but would leave partially unresolved the question of defining the basket of care covered. It is quite likely that supplementary insurance would relocate to the margins of the system to support incidental expenses not covered by the public system because they are deemed nonessential. The reimbursement of health costs should certainly remain mixed, but it is urgent to reconsider the boundaries between private and public, otherwise the trend towards declining public coverage will gain strength at the expense of streamlining the system and of equity in the coverage of health expenditures.

[1] In 2008. This level of coverage excludes optical care. Taking optical care into account, the rate of coverage by health insurance falls to 51.3% (Haut Conseil pour l’Avenir de l’Assurance Maladie [High Council for the Future of Health Insurance], December 2011).

[2] HCAAM, 2011 (ibid).

[3] It is not easy to take into account the behaviour of the insured. Beyond the use of preventive examinations, which can be measured relatively easily, other preventive behaviours are difficult to verify. Another risk inherent in private insurance is that insurers “skim” the population: to attract “good” clients, coverage is provided of expenditures that are typical of lower-risk populations (for example, the use of “alternative” medicines), while using detailed medical questionnaires to reject expenditures for greater risks.