Posts Tagged Microfoundations

The Crisis & Economics, Part 5: “Shhh! We’re Working On It”

This is part 5 in my series on how the financial crisis is relevant for economics (parts 1, 2, 3 & 4 are here). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post explores the possibility that macroeconomics, even if it failed before the crisis, has responded to its critics and is moving forward.

Argument #5: “We got this one wrong, sure, but we’ve made (or are making) progress in macroeconomics, so there’s no need for a fundamental rethink.”

Many macroeconomists deserve credit for their mea culpa and subsequent refocus following the financial crisis. Nevertheless, the nature of the rethink, particularly the unwillingness to abandon certain modelling techniques and ideas, leads me to question whether progress can be made without a more fundamental upheaval. To see why, it will help to have a brief overview of how macro models work.

In macroeconomic models, the optimisation of agents means that economic outcomes such as prices, quantities, wages and rents adjust to the conditions imposed by input parameters such as preferences, technology and demographics. A consequence of this is that sustained inefficiency, unemployment and other disorderly outcomes usually occur only when something ‘gets in the way’ of this adjustment. Hence economists introduce ad hoc modifications such as sticky prices, shocks and transaction costs to generate sub-optimal behaviour: for example, if a firm’s cost of changing prices exceeds the benefit, prices will not be changed and the outcome will not be Pareto efficient. Since there are countless ways in which the world ‘deviates’ from the perfectly competitive baseline, it is mathematically troublesome (or impossible) to include every possible friction. The result is that macroeconomists tend to decide which frictions are important based on real-world experience: since the crisis, the focus has been on finance. On the surface this sounds fine – who isn’t for informing our models with experience? However, it is my contention that this approach does not offer us any more understanding than experience alone would.
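To make the sticky-price example concrete, here is a minimal sketch of the menu-cost condition (the notation – profit function π, optimal price p*, menu cost κ – is mine, purely for illustration):

```latex
% A firm resets its price only when the profit gain exceeds the menu cost:
\pi(p^*) - \pi(p_{\mathrm{old}}) > \kappa
  \quad\Longrightarrow\quad \text{the price changes;}
\pi(p^*) - \pi(p_{\mathrm{old}}) \leq \kappa
  \quad\Longrightarrow\quad \text{the price stays put, and the outcome is not Pareto efficient.}
```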

Perhaps an analogy will illustrate this better. I was once walking past a field of cows as it began to rain, and I noticed some of them start to sit down. It occurred to me that there was no use in them doing this after the storm had started; they are supposed to give us adequate warning by sitting down before it happens. Sitting down during a storm just tells us what we already know. Similarly, although the models used by economists and policymakers did not predict and could not account for the crisis before it happened, economists have since built models that try to do so. They generally do this by attributing the crisis to frictions that revealed themselves to be important during the crisis. Ex post, a friction can always be found to make models behave a certain way, but the models do not make identifying the source of problems before they happen any easier, and they don’t add much afterwards, either – we certainly didn’t need economists to tell us finance was important following 2008. In other words, when a storm comes, macroeconomists promptly sit down and declare that they’ve solved the problem of understanding storms. It becomes difficult to escape the circularity of defining the relevant friction by its outcome, which strips the idea of ‘frictions’ of predictive power or falsifiability.

There is also the open question of whether understanding the impact of a ‘friction’ relative to a perfectly competitive baseline entails understanding its impact in the real world. As theorists from Joe Stiglitz to Yanis Varoufakis have argued, neoclassical economics is trapped in a permanent fight against indeterminacy: the quest to understand things relative to a perfectly competitive, microfounded baseline leads to aggregation problems and intractable complexities that, if included, result in “anything goes” conclusions. To put it another way, the real world is so complex and full of frictions that whatever mechanics would drive the perfectly competitive model are swamped. The actions of individual agents are so intertwined that their aggregate behaviour cannot be predicted from their individual ‘objective functions’. Consequently, our knowledge of the real world must be informed either by models which use different methodologies or, more crucially, by historical experience.
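As a toy illustration of that aggregation point – my own construction, in the spirit of Granovetter-style threshold models, not anything from Stiglitz or Varoufakis – consider two populations whose individual decision rules are almost identical, yet whose aggregate outcomes are completely different:

```python
import numpy as np

def cascade(thresholds):
    """Iterate a simple threshold model: an agent acts once the fraction
    of agents already acting reaches its personal threshold. Returns the
    final fraction of the population acting."""
    n = len(thresholds)
    acting = 0
    while True:
        new_acting = np.sum(thresholds <= acting / n)
        if new_acting == acting:
            return acting / n
        acting = new_acting

# Population a: thresholds 0.00, 0.01, ..., 0.99 - a full cascade occurs.
a = np.arange(100) / 100.0
# Population b: identical except one agent's threshold moves from 0.01 to 0.02.
b = a.copy()
b[1] = 0.02

print(cascade(a))  # 1.0  - everyone ends up acting
print(cascade(b))  # 0.01 - the cascade stalls at the first agent
```

A change to a single agent’s threshold stops a full cascade dead: the aggregate outcome is an emergent property of the interaction, and cannot be read off any individual’s ‘objective function’.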

Finally, the ad hoc approach also contradicts another key aspect of contemporary macroeconomics: microfoundations. The typical justification for these is that, to use the words of the ECB, they impose “theoretical discipline” and are “less subject to the Lucas critique” than a simple VAR, Old Keynesian model or another more aggregative framework. Yet even if we take those propositions to be true, the modifications and frictions that are so crucial to making the models more realistic are often not microfounded, sometimes taking the form of entirely arbitrary, exogenous constraints. Even worse is when the mechanism is profoundly unrealistic, such as prices being sticky because firms are randomly unable to change them for some reason. In other words, macroeconomics starts by sacrificing realism in the name of rigour, but reality forces it in the opposite direction, and the end result is that it has neither.
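The device being alluded to here is presumably Calvo pricing, under which each firm is permitted to reset its price in any given period only with some fixed probability. A minimal sketch of the mechanics, with parameter values that are purely my own illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)

theta = 0.75                    # probability a firm is stuck with its old price each period
n_firms, n_periods = 1000, 50
desired = 1.0                   # the price every firm would set if it could (normalised)

prices = np.full(n_firms, 0.9)  # all firms start 10% below their desired price
avg_price = []
for t in range(n_periods):
    can_reset = rng.random(n_firms) > theta  # each firm resets with probability 1 - theta
    prices[can_reset] = desired
    avg_price.append(prices.mean())

print([round(p, 3) for p in avg_price[:5]])  # the aggregate price level adjusts only gradually
```

Note that nothing in the firms’ own optimisation problem explains why they are stuck: the stickiness is imposed from outside, which is precisely the charge being made above.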

Macroeconomists may well defend their approach as just a ‘story-telling’ approach, from which they can draw lessons but which isn’t meant to hold in the same manner as engineering theory. Perhaps this is defensible in itself, but (a) personally, I’d hope for better and (b) in practice, this seems to mean each economist can pick and choose whichever story they want to tell based on their prior political beliefs. If macroeconomists are content conversing in mathematical fables, they should keep these conversations to themselves and refrain from forecasting or using them to inform policy. Until then, I’ll rely on macroeconomic frameworks which are less mathematically ‘sophisticated’, but which generate ex ante predictions that cover a wide range of observations, and which do not rely on the invocation of special frictions to explain persistent deviations from those predictions.



Debunking Economics, Part VIII: Macroeconomics, or Applied Microeconomics?

Chapter 10 of Steve Keen’s Debunking Economics explores the reduction of macroeconomics to ‘applied microeconomics’: representative agents, macroeconomic supply/demand (IS-LM), Say’s Law and more. The chapter is aptly titled ‘Why They Didn’t See It Coming’ – the reason, of course, being that the very premises of their models assumed away major episodes of instability.

Say’s Law

Say’s Law is the proposition – first put forward by its namesake, Jean-Baptiste Say – that:

Every producer asks for money in exchange for his products, only for the purpose of employing that money again immediately in the purchase of another product.

In other words: money is neutral, and the economy operates as if people are directly bartering goods between one another. Whilst individual markets may not clear, there cannot be a net deficiency of demand in all markets, and employment is largely voluntary, save perhaps that induced by ‘frictions’ as markets adjust. Say’s Law is rarely referenced explicitly by modern neoclassical economics, but it still lives on at the heart of many models – for example, the ‘equilibrium’ in Dynamic Stochastic General Equilibrium (DSGE) models assumes that all markets clear.

Keen notes that Keynes’ own formulation and refutation of Say’s Law were clumsy and turgid. Instead, he opts for Marx’s critique, which was far more concise and lucid. Keynes actually included Marx’s critique in his 1933 draft of The General Theory, but eventually eliminated it, probably for political reasons.

Say’s Law relies on a simple claim: the structure of a market economy is Commodity-Money-Commodity (C-M-C), where people primarily desire commodities and only hold money for want of another commodity. But Marx pointed out that, under capitalism, there is a group of people who quite clearly do not fit this formulation. These people are called capitalists.

Capitalist production does not take the form of C-M-C, but of M-C-M: a capitalist will invest money in production in the hope of accumulating more. As Marx put it, “[the capitalist’s] aim is not to equalise his supply and demand, but to make the inequality between them as great as possible.” Say’s Law could be said to apply in a productionless economy, but capitalism is characterised by the value or quantity of produced goods and services exceeding the value or quantity of the inputs. Hence, there will always be a surplus of money needed to satisfy capitalist accumulation, and the economy will continually be characterised by excess demand for money, and hence insufficient demand for commodities.

Keen continues by noting the obvious accounting reality that, in order for the economy to expand, credit must fill this gap, and quotes both Schumpeter and Minsky saying the same. This, along with the logic of capital accumulation, is a major spanner in the works for Say’s Law. However, I will not explore the ‘credit gap’ any further here, as Keen goes into far more detail in later chapters.
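A stylised piece of one-period accounting makes the gap explicit (the notation and the single-markup setup are my own simplification, not Keen’s, Marx’s or Minsky’s):

```latex
% Capitalists advance M as wages and input costs, and seek sales revenue
% M' = (1 + m)M, where m > 0 is the aggregate markup:
M' = (1 + m)M \quad\Longrightarrow\quad M' - M = mM > 0
% Incomes paid out during production total only M, so realising M' in sales
% requires an extra mM of spending power from outside the circuit, e.g.
% newly created credit:
\Delta\,\text{credit} \geq mM
```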

IS/LM

IS/LM is a diagram that looks a lot like supply and demand, and proposes that the interest rate and level of output in an economy are determined by two schedules: the combinations of output and interest at which investment equals saving (IS), and those at which the money supply equals the desire to hold money, or liquidity preference (LM). It was originally proposed as an interpretation of Keynes’ General Theory by Hicks in his 1937 review of the book, entitled ‘Mr. Keynes and the “Classics”’.
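For concreteness, here is a minimal sketch of the standard textbook linear version of the system (the functional forms and parameter values are my own illustrative choices, not Hicks’ or Keen’s):

```python
import numpy as np

# Illustrative parameters:
c, b = 0.8, 20.0     # marginal propensity to consume; interest sensitivity of investment
k, h = 0.5, 40.0     # income and interest sensitivity of money demand
C0, I0, G, T = 100.0, 150.0, 200.0, 150.0
M_over_P = 300.0     # real money supply

# IS:  (1 - c)*Y + b*r = C0 - c*T + I0 + G   (investment equals saving)
# LM:  k*Y - h*r = M/P                       (money demand equals money supply)
A = np.array([[1 - c,  b],
              [k,     -h]])
rhs = np.array([C0 - c * T + I0 + G, M_over_P])
Y, r = np.linalg.solve(A, rhs)
print(f"Y = {Y:.1f}, r = {r:.2f}")  # Y = 1066.7, r = 5.83
```

In this linear setup, the ‘special case’ discussed below corresponds to the interest sensitivity of money demand, h, becoming very large: the LM curve flattens, and changes in G feed almost entirely into output.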

There are many problems with IS/LM. The model was a complete misinterpretation of Keynes, and basically an attempt to pass off Hicks’ own model – which was developed independently of Keynes* – as Keynes’ model. Hicks himself pointed out many of the substantive problems in his 1980 ‘explanation’ (Keen suggests it is really an apology).

The major problems are uncertainty and changing expectations. Hicks’ formulation of IS/LM uses a period of about a week, during which it is reasonable to suppose that expectations are constant. But if expectations are constant and therefore not uncertain, there is no room for liquidity preference, which Keynes justified as “a barometer of the degree of our distrust of our own calculations and conventions concerning the future.” So we must extend the time period.

Keynes’ original intent for the time period of his analysis was the ‘Marshallian’ definition of a short period – about a year. The problem is that at this point equilibrium analysis falls apart. Both curves are partially derived from expectations, and once these start changing, the curves are constantly shifting. Not only this, but since they both depend on expectations, a movement in one will affect the other, and they can no longer move independently.**

Ultimately, IS/LM reduced Keynes to a call for fiscal stimulus in the ‘special case’ that the LM curve was flat or close to flat (the interest elasticity of the demand for money is ‘very high’ or infinite). ‘Later Hicks’ argued that the model should really not be intended as anything other than a “classroom gadget” – we might consider it a heuristic assumption, later to be replaced by something else (it is, in fact, replaced by DSGE past the undergraduate level). Personally, I’m not sure that a model with internal inconsistencies should be used as a heuristic (and neither is Keen), and to be honest students have enough trouble understanding IS/LM that I don’t even think it qualifies as a potent tool for communication. In any case, we certainly don’t want to be referring to it in policy discussions.

Macroeconomics after IS/LM

The neoclassical economists didn’t like IS/LM either, but that was because it was not built up from the point of view of optimising microeconomic agents. Keen catalogues the ‘Rational Expectations’ overthrowing of IS/LM and the ‘Keynesians’, making the obvious observation that the idea that people can, on average, predict the future is stupid, and ironically fails completely to take into account Keynes’ concept of uncertainty. He notes that, while the broad thrust of the Lucas Critique is correct, it does not justify the idea that a policy change will be completely neutralised by changes in behaviour, and neither does it justify reductionism – microeconomic models are as ‘vulnerable’ to the critique as macroeconomic ones.

Keen then documents the first attempt to model macroeconomics based on the ‘revelations’ of what he calls the ‘rational expectations mafia’: Real Business Cycle models. Again, the economist whose model they were built upon – Bob Solow, of the eponymous growth model – later repudiated them. In his words:

What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages. How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up?

This is obviously ridiculous. There have, of course, been developments since the core RBC model was invented – the New Keynesian DSGE models include elements such as sticky prices, bounded rationality, imperfect (though, as far as I know, not asymmetric) information and oligopolistic market structures. However, all of these preserve the neoclassical core of preference-driven individualism, assume equilibrium, and keep one or two representative agents (any more and many of the core assumptions fall apart, in a clear and ironic example of emergent properties). The models also suppose that the economy has ‘underlying’ tendencies towards stability, masked only by the pesky aforementioned real-world ‘imperfections’.

Despite all these developments, neoclassical economists continued to be led to absurd conclusions, such as the ideas that unemployment during the Great Depression was voluntary (Prescott), that the recession predated the collapse of the housing bubble (Fama), and that business cycles are caused by the Fed suddenly deviating from its previous mandate for no reason (Taylor, Sumner, Friedman). And, of course, none of them foresaw the crisis, and they can only model it with some serious post-hoc ad-hocery.

It’s worth noting that representative agents, in and of themselves, are not a problem – the problem is that neoclassicism must stick to a small number of them to preserve its assumptions, and for some reason refuses to use class as a distinction. Keen’s own models could be said to use representative agents, but the fact that he doesn’t build his model up from microfoundations means that he is far less hamstrung when adding new aspects and dynamics. The idea that the macroeconomy cannot be studied separately from the microeconomy is deeply ascientific – the kind of thing the real sciences learned to abandon long ago. It’s time economists caught up.

*Actually it was developed largely in opposition to Keynes – by Hicks, Dennis Robertson and others.

**Readers might notice a similarity between this ‘small scale versus large scale’ critique of IS/LM and Piero Sraffa’s argument against diminishing marginal returns.

