Posts Tagged Criticisms of neoclassicism

How Not to Do Macroeconomics

A recurring frustration for critics of ‘mainstream’ economics is the assertion that they are criticising the economics of bygone days: that the phenomena they say economists ignore are, in fact, at the forefront of economics research, and that the critics’ ignorance demonstrates that they are out of touch with modern economics – and therefore not fit to criticise it at all.

Nowhere is this more apparent than with macroeconomics. Macroeconomists are commonly accused of failing to incorporate financial-sector dynamics such as debt, bubbles and even banks themselves, but while this was true pre-crisis, many contemporary macroeconomic models do attempt to include such things. Renowned economist Thomas Sargent charged that such criticisms “reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished.” So what has it accomplished? One attempt to model the ongoing crisis using modern macro is this recent paper by Gauti Eggertsson & Neil Mehrotra, which tries to understand secular stagnation within a typical ‘overlapping generations’ framework. It’s quite a simple model, deliberately so, but it helps to illustrate the troubles faced by contemporary macroeconomics.

The model

The model has only three types of agents: young, middle-aged and old. The young borrow from the middle-aged, who receive an income, some of which they save for old age. Predictably, the model employs all the standard techniques that heterodox economists love to hate, such as utility maximisation and perfect foresight. However, the interesting mechanics here are not in these; instead, what concerns me is the way ‘secular stagnation’ itself is introduced. In the model, the limit on how much young agents are allowed to borrow is exogenously imposed, and deleveraging/a financial crisis begins when this limit falls for unspecified reasons. In other words, in order to analyse deleveraging, Eggertsson & Mehrotra simply assume that it happens, without asking why. As David Beckworth noted on Twitter, this is simply assuming what you want to prove. (They go on to show that similar effects can occur due to a fall in population growth or an increase in inequality, but again, these changes are modelled as exogenous.)
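To make the point concrete, the borrowing constraint can be sketched roughly as follows (a stylised rendering, not the paper’s exact notation): the young may borrow B only up to an exogenous debt limit D, and the ‘crisis’ is nothing more than a fall in D.

\[
(1 + r_t)\,B^{y}_{t} \;\le\; D_t, \qquad \text{deleveraging: } D_t \text{ falls exogenously from } D^{\text{high}} \text{ to } D^{\text{low}}.
\]

Everything that follows in the paper – the fall in the natural rate of interest and the resulting slump in demand – is downstream of that assumed fall in D.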

It gets worse. Recall that the idea of secular stagnation is, at heart, a story about how, over the last few decades, we have not been able to create enough demand with ‘real’ investment, and have subsequently relied on speculative bubbles to push demand to an acceptable level. This was certainly the angle from which Larry Summers and subsequent commentators approached the issue. It’s therefore surprising – ridiculous, in fact – that this model of secular stagnation doesn’t include banks, and has only one financial instrument: a riskless bond that agents use to transfer wealth between generations. What’s more, as the authors state, “no aggregate savings is possible (i.e. there is no capital)”. Yes, you read that right. How on earth can this model help us understand why there is not enough ‘traditional’ investment (i.e. capital formation), and why we need bubbles to fill that gap, if it contains neither investment nor bubbles?

Naturally, none of these shortcomings stop Eggertsson & Mehrotra from proceeding, and ending the paper in economists’ favourite way…policy prescriptions! Yes, despite the fact that this model is not only unrealistic but quite clearly unfit for purpose on its own terms, and despite the fact that it has yielded no falsifiable predictions, the authors go on to give policy advice about redistribution, monetary and fiscal policy. Considering this paper is incomprehensible to most of the public, one is forced to wonder to whom this policy advice is accountable. Note that I am not implying policymakers are puppets on the strings of macroeconomists, but things like this definitely contribute to debate – after all, secular stagnation was referenced by the Chancellor in the UK parliament (though admittedly he did reject it). Furthermore, when you have economists with a platform like Paul Krugman endorsing the model, it’s hard to argue that it couldn’t have at least some degree of influence on policymakers.

Now, I don’t want to make general comments solely on the basis of this paper: after all, the authors themselves admit it is only a starting point. However, some of the problems I’ve highlighted here are not uncommon in macro: a small number of agents on whom some rather arbitrary assumptions are imposed to create loosely realistic mechanics, and an unexplained ‘shock’ used to create a crisis. This is true of the earlier, similar paper by Eggertsson & Krugman, which tries to model debt-deflation using two types of agents: ‘patient’ agents, who save, and ‘impatient’ agents, who borrow. Once more, deleveraging begins when the exogenously imposed constraint on the impatient agents’ borrowing falls For Some Reason, and differences in the agents’ respective consumption levels reduce aggregate demand as the debt is paid back. Again, there are no banks, no investment and no real financial sector. Similarly, even the far more sophisticated model by Markus K. Brunnermeier & Yuliy Sannikov – which actually includes investment and a financial sector – still has only two agents, and relies on exogenous shocks to drive the economy away from its steady state.

Whither macroeconomics?

Why do so many models seem to share these characteristics? Well, perhaps thanks to the Lucas Critique, macroeconomic models must be built up from optimising agents. Since modelling human behaviour is inconceivably complex, mathematical tractability forces economists to make important parameters exogenous, and to limit the number (or number of types) of agents in the model, as well as these agents’ goals & motivations. Complicated utility functions which allow for fairly common properties, such as relative status effects or different levels of risk aversion at different incomes, may be possible to explore in isolation, but they cannot be generalised to every case without the models becoming impossible to solve or indeterminate. The result is that a model which tries to explore something like secular stagnation can end up so highly stylised that it misses the most important mechanics altogether. It will also be unable to incorporate other well-known developments from elsewhere in the field.

This is why I’d prefer something like Stock-Flow Consistent models, which focus on accounting relations and flows of funds, to be the norm in macroeconomics. As economists know all too well, all models abstract from some things, and when we are talking about big, systemic problems, it’s not particularly important whether Maria’s level of consumption satisfies a utility function. What’s important is how money and resources move around: where they come from, and how they are split – on aggregate – between investment, consumption, financial speculation and so forth. This type of methodology can help us understand how the financial sector might create bubbles; or why deficits grow and shrink; or how government expenditure impacts investment. What’s more, it will help us understand all of these aspects of the economy at the same time. We will not have an overwhelming number of models, each highlighting one particular mechanic, with no ex ante way of selecting between them, but one or a small number of generalisable models which can account for a large number of important phenomena.

Finally, to return to the opening paragraph, this paper may help to illustrate a lesson for both economists and their critics. The problem is not that economists are not aware of or never try to model issue x, y or z. Instead, it’s that when they do consider x, y or z, they do so in an inappropriate way, shoehorning problems into a reductionist, marginalist framework, and likely making some of the most important working parts exogenous. For example, while critics might charge that economists ignore mark-up pricing, the real problem is that when economists do include mark-up pricing, the mark-up is over marginal rather than average cost, which is not what firms actually do. While critics might charge that economists pay insufficient attention to institutions, a more accurate critique is that when economists include institutions, they are generally considered as exogenous costs or constraints, without any two-way interaction between agents and institutions. While it’s unfair to say economists have not done work that relaxes rational expectations, the way they do so still leaves agents pretty damn rational by most people’s standards. And so on.

However, the specific examples are not important. It seems increasingly clear that economists’ methodology, while it is at least superficially capable of including everything from behavioural economics to culture to finance, severely limits their ability to engage with certain types of questions. If you want to understand the impact of a small labour market reform, or how auctions work, or design a new market, existing economic theory (and econometrics) is the place to go. On the other hand, if you want to understand development, historical analysis has a lot more to offer than abstract theory. If you want to understand how firms work, you’re better off with survey evidence and case studies (in fairness, economists themselves have been moving some way in this direction with Industrial Organisation, although if you ask me oligopoly theory has many of the same problems as macro) than marginalism. And if you want to understand macroeconomics and finance, you have to abandon the obsession with individual agents and zoom out to look at the bigger picture. Otherwise you’ll just end up with an extremely narrow model that proves little except its own existence.

 


Yes, The Cambridge Capital Controversies Matter

I rarely (never) post based solely on a quick thought or quote, but this just struck me as too good not to highlight. It’s from a book called ‘Capital as Power’ by Jonathan Nitzan and Shimshon Bichler, which challenges both the neoclassical and Marxian conceptions of capital, and is freely available online. The passage in question pertains to the way neoclassical economics has dealt with the problems highlighted during the well documented Cambridge Capital Controversies:

The first and most common solution has been to gloss the problem over – or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists – teachers and students – blissfully unaware of the whole debacle.

A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.

The second point is something I independently noticed: appealing to practical relevance when it suits the modeller, while insisting realism doesn’t matter elsewhere. If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational, etc. etc. If we do that, pretty soon the assumptions all fall away and not much is left.

However, it’s the authors’ third point that really hits home:

The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. The latter models try to describe – conceptually, that is – every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.

General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention. Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.

In essence, neoclassical economics dealt with its inability to model capital by…eschewing any analysis of capital. However, the theoretical importance of capital for understanding capitalism (duh) means that this has turned neoclassical ‘theory’ into a highly inadequate tool for doing what theory is supposed to do, which is to further our understanding.

Apparently, if you keep evading logical, methodological and empirical problems, it catches up with you! Who knew?


18 Signs Economists Haven’t the Foggiest

I’d like to thank Chris Auld for giving me a format for outlining the major ways in which economists can be completely out of touch: with their public image, with how they should do “science”, and with why their discipline is so ripe for criticism (criticism most of them are unaware of). So, here are 18 common failings I encounter time and time again in my discussions with mainstream economists:

1. They defer to the idea that “all models are simplifications” as if this somehow creates a fireguard against any criticism of methodology, internal inconsistency or empirical relevance.

2. They argue that the financial crisis is irrelevant to their discipline (bonus: also that predicting such events is impossible).

3. They think that behavioural, new institutional and even ‘Keynesian’ economics show the discipline is pluralistic, not neoclassical.

4. They think that the fact most economic papers are “empirical” shows economists are engaging in the scientific method.

5. They think ‘neoclassical economics’ doesn’t exist and is just a swear word used by their opponents.

6. When pushed, they collapse their theories and assumptions into ridiculously weak, virtually unfalsifiable claims (such as revealed preference, the Efficient Markets Hypothesis, or rationality).

7. They dismiss ideas from the past or comprehensive study of previous thinkers and texts as “not science”.

8. They think positive and normative economics are 100% separable, and their discipline is “value free”.

9. They simply cannot think of any other approach to ‘economics’ than theirs.

10. They believe in an erroneous history that sits well with their pet theories, such as the myths of barter and free trade.

11. They think that microfoundations are a necessary and sufficient modelling technique for dealing with the Lucas Critique.

12. They think economics is separable from politics, and that the political role and application of economic ideas in the real world is irrelevant for academic discussion (examples: Friedman and Pinochet, central bank independence).

13. They think their discipline is going through a calm, fruitful period (based on their self-absorbed bubble).

14. They think that endorsing cap & trade or carbon taxes is “dealing with the environment”.

15. They think making an unrealistic model consistent with one or two observed phenomena makes it sound or worthwhile (DSGE and other models are characterised by this “frictions” approach).

16. They think their discipline is an adequate, even superior, method for analysing problems in other social sciences such as politics, history and sociology.

17. They think that the world behaves as if their assumptions are true (or close enough).

18. They think that their discipline’s use of mathematics shows that it is “rigorous” and scientific.

Every link above that is not written by an economist is recommended. Furthermore, here are some related recommendations: seven principles for arguing with economists; my FAQ for mainstream economists; I Could Be Arguing In My Spare Time (footnotes!); What’s Wrong With Economics? Also try both my and Matthijs Krul’s posts on how not to criticise neoclassical economics. As I say to Auld in the comments, I actually agree with some of his points about the mistakes critics make. But I think these critics are still criticising economics for good reasons, and that economists need to improve on the above if they want anyone other than each other to continue taking them seriously.

PS If you think I haven’t backed up any of my claims about what economists say, try cross referencing, as some of the links fall into more than one trap. Also follow through to who I’m criticising in the links to my previous posts. And no, I don’t think all economists believe everything here. However, I do think many economists believe some combination of these things.

Addendum: I have received predictable complaints that my examples are straw men, or at least uncommon. Obviously I provided links for each specific claim – if you’d like to charge that said link is not relevant, please explain why, and if you want more, I’m happy to provide them. However, my general claim is simply that a given article trying to expound or defend mainstream economics will commit a handful of these errors, perhaps excluding the more specific ones such as history or carbon taxes. Here are some examples to show how pervasive this mindset is:

Auld’s original article commits 2, 3, 4, 5 & 12.

This recent, popular defense of economics as a science in the NYT commits 2, 4, 8 & 13 (NB: I forgot “makes annoying and inappropriate comparisons to other sciences”, although both sides do this).

Greg Mankiw’s response to the econ101 walkout commits 8, 9, 12 & 13.

This recent ‘critique’ of Debunking Economics commits 9, 11 & 15 (though, to its credit, it avoids 2).

Stephen Williamson manages 2, 6, 7, 8, 9, 11, 12, 13, 15 & 16 in his reviews of John Quiggin’s Zombie Economics (in fact, Williamson is a fantastic source of this stuff in general).

Paul Krugman committed 1, 7, 9 & 15 in his debate with Steve Keen.

Dani Rodrik, who is probably the most reasonable mainstream economist in the world, committed 3, 4, 13 & 15 in his discussion of economics.

and so on…

(Note that, in the interest of fairness, I have left out the most ridiculous things I’ve seen since the crisis.)


How Economics Sees Reality

Something has been bothering me about the way evidence is (sometimes) used in economics and econometrics: theories are assumed throughout the interpretation of the data. The result is that it’s hard to end up questioning the model being used.

Let me give some examples. The delightful fellas at econjobrumours once disputed my argument that supply curves are flat or slope downward by noting that, yes, Virginia, in conditions where firms have market power (high demand, drought pricing) prices tend to go up. Apparently this “simple, empirical point” suffices to refute the idea that supply curves do anything but slope upward. But this is not true. After all, “supply curves slope downward/upward/wiggle around all over the place” is not an empirical statement. It is an interpretation of empirical evidence that also hinges on the relevance of the theoretical concept of the supply curve itself. In fact, the evidence, taken as a whole, actually suggests that the demand-supply framework is at best incomplete.

This is because we have two major pieces of evidence on this matter: higher demand/more market power increases price, and firms face constant or increasing returns to scale. These are contradictory when interpreted within the demand-supply framework, as they imply that the supply curve slopes in different directions. However, if we used a different model – say, one with a third term for ‘market power’, or a Kaleckian cost-plus model where the mark-up is a function of the “degree of monopoly” – that would no longer be the case. The rising supply curve rests on the idea that increasing prices reflect increasing costs, and therefore cannot incorporate these possibilities.
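For illustration, a cost-plus rule of the broad Kaleckian type (my own schematic, not a quotation from Kalecki) sets the price as a mark-up over unit costs, with the mark-up driven by market power rather than by rising marginal cost:

\[
p = (1+\mu)\,c, \qquad \mu = f(\text{degree of monopoly}),\ f' > 0,
\]

where c is unit (average) variable cost. In a set-up like this, stronger demand or greater market power raises p through the mark-up μ, while constant returns keep c flat – so the two pieces of evidence above stop contradicting one another.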

Similarly, many empirical econometric papers use the neoclassical production function (a recent one here), which states that output is derived from labour and capital, plus a few parameters attached to the variables, as a way to interpret the data. However, this again requires that we assume capital and labour, and the parameters attached to them, are meaningful, and that the data reflect their properties rather than something else. For example, the volume of labour employed moving a certain way only implies something about the ‘elasticity of substitution’ (the rate at which firms substitute between labour and capital) if you assume that there is an elasticity of substitution. However, the real-world ‘lumpiness‘ of production may mean this is not the case, at least not in the smooth, differentiable way assumed by neoclassical theory.

Assuming such concepts when looking at data means that economics can become a game of ‘label the residual‘, despite the various problems associated with the variables, concepts and parameters used. Indeed, Anwar Shaikh once pointed out that the seeming consistency between the Cobb-Douglas production function and the data was essentially tautological, and so using the function to interpret any data, even the word “humbug” on a graph, would seem to confirm the propositions of the theory, simply because they follow directly from the way it is set up.
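Shaikh’s point can be sketched roughly as follows (a compressed rendering of the argument, so treat the notation as mine rather than his). Start from the income accounting identity and assume only that factor shares are roughly constant over time:

\[
Y \equiv wL + rK, \qquad \alpha \equiv \frac{wL}{Y}\ \text{(roughly constant in the data)}
\]
\[
\Rightarrow\quad \frac{\dot Y}{Y} = \alpha\left(\frac{\dot w}{w} + \frac{\dot L}{L}\right) + (1-\alpha)\left(\frac{\dot r}{r} + \frac{\dot K}{K}\right)
\quad\Rightarrow\quad Y = B\,L^{\alpha}K^{1-\alpha}, \qquad B \propto w^{\alpha}r^{1-\alpha}.
\]

Any data satisfying the identity with stable shares will therefore ‘fit’ a Cobb-Douglas function well, whatever the underlying technology looks like – which is why even the ‘humbug’ series produced a good fit.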

Joan Robinson made this basic point, albeit more strongly, concerning utility functions: we assume people are optimising utility, then fit whatever behaviour we observe into said utility function. In other words, we risk making the entire exercise “impregnably circular” (unless we extract some falsifiable propositions from it, that is). Frances Woolley’s admittedly self-indulgent playing around with utility functions and the concept of paternalism seems to demonstrate this point nicely.

Now, this problem is, to a certain extent, observed in all sciences – we must assume ‘mass’ is a meaningful concept to use Newton’s Laws, and so forth. However, in economics, properties are much harder to pin down, and so it seems to me that we must be more careful when making statements about them. Plus, in the murky world of statistics, we can lose sight of the fact that we are merely making tautological statements or running into problems of causality.

The economist might now ask how we would even begin to interpret the medley of data at our disposal without theory. Well, to make another tired science analogy, the advancement of science has often resulted not from superior ‘predictions’, but from identifying a closer representation of how the world works: the go-to example of this is Ptolemaic astronomy, which made superior predictions to its rival but was still wrong. My answer is therefore the same as it has always been: economists need to make better use of case studies and experiments. If we find out what’s actually going on underneath the data, we can use this to establish causal connections before interpreting it. This way, we can avoid problems of circularity, tautologies, and of trapping ourselves within a particular model.


On Pieria: What’s Wrong With Economics?

My latest article, trying to sum up the problems with economists’ approach – in three words, “it’s too narrow”:

The question of whether mainstream (neoclassical) economics as a discipline is fit for purpose is well-trodden ground…

….[I think] economic theory is flawed, not necessarily because it is simply ‘wrong’, but because it is based on quite a rigid core framework that can restrict economists and blind them to certain problems. In my opinion, neoclassical economics has useful insights and appropriate applications, but it is not the only worthwhile framework out there, and economist’s toolkit is massively incomplete as long as they shy away from alternative economic theories, as well as relevant political and moral questions.

As Yanis Varoufakis noted, it is strange how remarkably resilient the neoclassical framework is in the presence of many coherent alternatives and a large number of empirical/logical problems. However, I actually think this is quite normal in science – after all, it is done by humans, not robots. Hopefully things will change eventually and economics will become more comprehensive/pluralistic, as I call for in the article.

It’s good to sum up my overall position, but I think I’ll probably lean more (though not entirely) towards positive approaches from now on, some of which I mention in the article. Though I strongly disagree with Jonathan Catalan that heterodox economists are “more often wrong than right”, I agree with his sentiment that it’s probably better to “sell [one's] ideas” than to endlessly repeat oneself about methodology and so forth. So maybe expect a shift from general criticisms of economics to more positive and targeted approaches!

PS Having said that, my next post definitely doesn’t fit this description.


Milton Friedman’s Distortions, Part II

I have previously noted that Milton Friedman’s debating techniques and attitude towards facts were, erm, slippery to say the least. However, I focused primarily on his public face, where it seemed he might merely have adopted a more accessible narrative to get his point across, losing some nuance along the way. It could be argued that most public figures are guilty of this, and that it need not reflect on Friedman’s stature as an academic.

Sadly, this is probably not the case. Commenter Jan quotes Edward S. Herman on Friedman’s academic record, giving us reason to believe that Friedman’s approach extended through to his academic work. It appears the man was prepared to conjure ‘facts’ from nowhere, massage data and simply lie to support his theories. With thanks to Jan, I’ll channel some of what Herman says, using it to discuss Friedman’s major academic contributions in general, and how his record seems to be rife with instances of him torturing the facts to fit his theories.

The Permanent Income Hypothesis

The Permanent Income Hypothesis (PIH) states that a consumer’s consumption is not only a function of their current income, but of their lifetime income. Since people tend to earn more as they get older, this means that younger generations will tend to borrow and older generations will tend to save. The PIH has been a key tenet of economic theory since its inception, and it was among the contributions cited when Friedman won the Nobel Memorial Prize in 1976.
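In its simplest textbook form (my paraphrase, with a constant interest rate r, rather than Friedman’s own notation), the PIH says consumption tracks the annuity value of lifetime resources rather than current income:

\[
c_t \;=\; \frac{r}{1+r}\left[A_t + \sum_{k=0}^{\infty}\frac{E_t\,[\,y_{t+k}\,]}{(1+r)^k}\right],
\]

where A_t is the consumer’s financial wealth and y their income. A purely transitory change in y_t barely moves c_t, which is what gives the hypothesis its bite.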

When discussing Friedman’s in-depth empirical treatment of the PIH, Paul Diesing (as quoted by Herman) found it wanting. He listed six ways Friedman manipulated the data:

1. If raw or adjusted data are consistent with PI, he reports them as confirmation of PI
2. If the fit with expectations is moderate, he exaggerates the fit.
3. If particular data points or groups differ from the predicted regression, he invents ad hoc explanations for the divergence.
4. If a whole set of data disagree with predictions, adjust them until they do agree.
5. If no plausible adjustment suggests itself, reject the data as unreliable.
6. If data adjustment or rejection are not feasible, express puzzlement. ‘I have not been able to construct any plausible explanation for the discrepancy’…

It does not surprise me that Friedman had to treat the data this way to get the results he wanted. For the interesting thing about the PIH is that it displaced a model that was far more plausible and empirically relevant: the Relative Income Hypothesis (RIH). The RIH argues that individual consumption patterns are in large part determined by the consumption patterns of those around them, so people consume to “keep up with the Joneses“. It was developed by James Duesenberry in his 1949 book Income, Saving and the Theory of Consumer Behaviour.

In his discussion of this apparent scientific regression, Robert Frank lists 3 major ‘stylised facts’ any theory of consumption must be consistent with:

  • The rich save at higher rates than the poor;
  • National savings rates remain roughly constant as income grows;
  • National consumption is more stable than national income over short periods.

The PIH can easily explain the last two phenomena, as it posits that consumption depends on permanent rather than current income. However, this same proposition required Friedman to dismiss the first phenomenon outright. He therefore suggested that the high savings rates of the rich resulted from windfall gains rather than income. A neat hypothesis, but unsubstantiated by the evidence: savings rates also rise with increases in lifetime income.

Conversely, Duesenberry’s theory is well equipped to explain all three of the listed phenomena. The RIH implies that the poor consume a higher percentage of their income in order to keep up with the consumption of the rich. As society as a whole becomes richer, this phenomenon will not disappear, as the poor will still be relatively poor. Thus, the apparently contradictory first two points in Frank’s list are reconciled. It is also worth noting that the third point, that consumption is less volatile than income over short periods, can be explained by the RIH because people are used to their current standard of living, and so will sustain it even through hard economic times.

So why, despite fitting the facts without manipulating them, did the RIH fall out of favour? Presumably, it made economists (particularly those of Friedman’s ilk) uncomfortable because of its implications that much consumption was unnecessary and wasteful, that redistributing income might spur consumption and therefore growth, and because it did not rest on innate individual preferences but on the behavior of society as a whole. The idea of a consumer rationally making inter-temporal consumption decisions in a vacuum was just, well, it was real economics. The result is that Friedman’s poorly supported hypothesis shot to fame, while Duesenberry’s well supported hypothesis was forgotten.

The NAIRU

NAIRU stands for ‘Non-Accelerating Inflation Rate of Unemployment’, and it implies that below a certain level of unemployment, workers will be able to demand wages so high that they will create a wage-price spiral. Hence, policy should aim for a ‘natural’ rate of unemployment, estimated empirically by economists, in order to prevent the possibility of 1970s-style stagflation.
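The standard textbook way of writing this down (a conventional formulation, not Friedman’s exact one) is an expectations-augmented Phillips curve in which expected inflation adapts to past inflation:

\[
\pi_t = \pi_{t-1} - \beta\,(u_t - u^{*}), \qquad \beta > 0,
\]

so inflation keeps accelerating whenever unemployment u_t sits below the ‘natural’ rate u*, and the policy task becomes one of estimating u*.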

My first problem with the NAIRU is the way it is commonly seen as ‘overthrowing’ the naive post-war Keynesians who insisted on a simplistic trade-off between inflation and unemployment. As I have previously noted, the originator of the curve, William Phillips, did not believe this; nor did Keynes; nor were the post-war Keynesians unaware of the possibility of a wage-price spiral. Furthermore, the NAIRU idea was really just a formalisation of a long-standing conservative notion that we should keep some percentage of people unemployed for some reason. In this sense, the launch of the NAIRU was more a counter-revolution of old ideas than a novel approach.

However, the real issue is whether the NAIRU is empirically relevant, and it seems it is not. First, as Jamie Galbraith has detailed, there is little evidence that unemployment has an accelerating effect on inflation at any level. Furthermore, empirical estimates of the NAIRU seem to move around so much, depending on the current rate of unemployment, that the idea has little in the way of predictive implications. The data simply do not generate a picture consistent with a clear value of unemployment at which inflation starts to accelerate: we are far better off pursuing full employment while keeping numerous inflation-controlling mechanisms in place.

“OK,” you say. “Perhaps the NAIRU does not exist. But what was disingenuous about this on Friedman’s part?” Well, the notion that the interplay between workers and employers is a key determinant of the rate of inflation flatly contradicts Friedman’s oft-repeated exclamation that “inflation is always and everywhere a monetary phenomenon”. If inflation is purely monetary, then the level of unemployment should not affect it at all! However, for whatever reason, Friedman was prepared to endorse both the NAIRU and his position on inflation simultaneously.

The Great Depression

Friedman’s Great Depression narrative was probably his biggest attempt to rehabilitate capitalism in a period where unregulated markets had fallen out of favour. He blamed the crash on the Federal Reserve for contracting the money supply in the face of a failing economy. This always struck me as strange – he was, in effect, arguing that the Great Depression was the fault of ‘the government’ because they failed to intervene sufficiently. This implies that the real source of the Great Depression came from somewhere other than the Federal Reserve, and therefore its sin was more one of omission than commission. Even if we accept the idea that the Great Depression was worsened by the action (or inaction) of central banks, Friedman is being disingenuous when he says that the Great Depression was “produced” by the government.

However, even Friedman’s own figures fail to support his hypothesis: according to Nicholas Kaldor, the figures show that the stock of high powered (base) money increased by 10% between 1929 and 1931. Peter Temin came to a similar conclusion: using the same time period as Kaldor, real money balances increased by 1-18% depending on which metric you use, and the overall money supply increased by 5%. Though base money contracted by about 2% at the onset of the crash, a contraction this small is a relatively common occurrence and not generally associated with depressions.

There is then the issue of causality. In many ways Friedman assumed what he wanted to prove: that the money supply is controlled by the central bank. Yet there are good reasons to doubt this, and to believe that movements in income instead create a decrease in the money supply, which would absolve the central bank of responsibility. When economists such as Nicholas Kaldor pointed out this possibility, Friedman reached a new level of disingenuousness (the first paragraph is Friedman’s comment; Kaldor responds in the second):

The reader can judge the weight of the casual empirical evidence for Britain since the second world war that Professor Kaldor offers in rebuttal by asking himself how Professor Kaldor would explain the existence of essentially the same relation between money and income for the U.K. after the second world war as before the first world war, for the U.K. as for the U.S., Yugoslavia, Greece, Israel, India, Japan, Korea, Chile and Brazil?

The simple answer to this is that Friedman’s assertions lack any factual foundation whatsoever. They have no basis in fact, and he seems to me to have invented them on the spur of the moment. I had the relevant figures extracted from the IMF statistics for 1958 and for each of the years 1968 to 1979, for every country mentioned by Friedman and a few others besides… Though there are some countries (among which the US is conspicuous) where in terms of the M3 the ratio has been fairly stable over the period of observation, this was not true of the majority of others.

Bottom line? Friedman had to assume his conclusion – that the money supply was under the control of the Federal Reserve – in order to reach it. Yet, based on his own numbers, his conclusion was still false, as the money supply increased over the ‘crash’ period from 1929 to 1931. When Friedman was pushed on these matters, he simply made things up. However, lying hasn’t helped him escape the fact that his theory of the Great Depression is false.

Conclusion

Milton Friedman’s academic contributions do not stand up to scrutiny. Friedman seemed to be prepared to conjure up neat, ad hoc explanations for certain phenomena, simply asserting facts and leaving it for others to see if they were true or not, which they usually weren’t. He selectively interpreted his own data, exaggerating or plain misrepresenting it in order to make his point. Furthermore, his methods should be unsurprising given his incoherent methodology, which allowed him to dodge empirical evidence on the grounds of an ill-defined ‘predictive success’, something which sadly never materialised. In almost any other discipline, Friedman’s attempts at ‘science’ would have been laughed out of the room. Serious economists should distance themselves from both him and his contributions.


The Myth of Neutral Money

Conventional economic theory holds that money is neutral: that is, changes in the money supply do not affect the ‘real’ economy (patterns of trade, production and consumption). Instead, the only interaction between the monetary and the real economy is thought to be through the determination of nominal quantities such as prices, wages, exchange rates and so forth. Though a change in the quantity of money may create short-term disruptions, the economy will eventually settle at the same long-term equilibrium as before.

An extreme interpretation of the neutrality of money would lead to absurd conclusions, such as the idea that the ‘real’ economy would operate the same whether it had a gold standard or hyperinflation. I’d therefore interpret the ‘neutral money’ view as the claim that, at ‘normal’ levels of money, a change in the money supply will not alter the long term economic equilibrium. This viewpoint was described well by Milton Friedman in his famous ‘helicopter drop’ story, which I will use as the basis for my critique.

Friedman began his story by imagining a community in economic equilibrium:

Let us suppose, then, that one day a helicopter flies over our hypothetical long-stationary community and drops additional money from the sky equal to the amount already in circulation-say, $2,000 per representative individual who earns $20,000 a year in income. The money will, of course, be hastily collected by members of the community. Let us suppose further that everyone is convinced this event is unique and will never be repeated….

…People’s attempts to spend more than they receive will be frustrated, but in the process these attempts will bid up the nominal value of goods and services. The additional pieces of paper do not alter the basic conditions of the community. They make no additional productive capacity available. They alter no tastes….the final equilibrium will be a nominal income of $40,000 per representative individual instead of $20,000, with precisely the same flow of real goods and services as before.

It is first worth noting that the ‘real’ benchmark for Friedman’s equilibrium is somewhat hard to define – after all, the real economy is an artificial construct with no real world counterpart. Economic agents must necessarily negotiate with and act on the nominal: as John Maynard Keynes pointed out, workers do not have control over the general price level, and hence can only impact their nominal wages. Clearly, nobody has control over the ‘general price level’ (itself surely a problematic concept), so Keynes’ argument also applies to prices, exchange rates and other variables (sorry, economists, no ‘as if‘ arguments allowed). Nominal variables are actually observable, while proponents of money neutrality have no moneyless baseline by which they can judge real activity, despite repeatedly appealing to the idea.

More generally, I find that the idea expressed by Friedman – that the economy will tend toward a stable, long-term equilibrium, perhaps oscillating in the short term – is often used by economists, but is rarely fully justified. It is merely assumed that the economy will behave this way, and any erratic behaviour – such as money illusion and sticky wages/prices – can be dismissed as short-term ‘noise’. However, it seems to me that such an idea can only be sustained by sweeping potential problems under the rug. Indeed, this supposed ‘noise’ (a) could be more relevant to understanding the system than the equilibrium and (b) could have a permanent impact on the economy and therefore on the equilibrium itself.

It is entirely possible – common, even – for a system’s behaviour to differ markedly from its equilibrium value(s). This is true even if the system has some tendency toward the equilibrium**. In examples such as Friedman’s, a monetary disturbance will surely alter people’s perceptions (something Friedman acknowledges), and they will engage in economic activity based on these altered perceptions, continually adjusting as they overshoot or undershoot their plans. Hence, a monetary shock could push the economy out of equilibrium and into a long term trajectory that has little relation to its initial position. Furthermore, if there are constant changes in the money supply, any tendency toward equilibrium will be continually thwarted. As Irving Fisher put it, “equilibrium is seldom reached and never long maintained”.

In fact, monetary disruptions can have even more fundamental effects than this. Due to path dependence, a monetary disturbance could change not only the immediate behaviour of the system but also the long-term equilibrium itself. If money is invested based on people’s altered perceptions, long-term capital goods can be created that otherwise would not have been. This phenomenon is all the more pronounced if new money is not evenly distributed but injected at specific points, something known as Cantillon effects. Friedman considers this possibility, but dismisses it without much justification (“during the transition some people will consume more, others less. But the ultimate position will be the same”. Erm, why?). The fact is that a company, individual or government that finds itself with a relatively higher income due to a monetary injection could make important investments, altering long-term patterns of production and consumption.

The role of finance

All of these effects would be important even in Friedman’s imaginary world. But it only becomes clear quite how important they are when we consider the nature of the modern banking system, something entirely absent from Friedman’s example. This is because in neoclassical theory, banks are generally assumed to be ‘intermediaries’ who take money from Peter and loan it to Paul. The result is that banks only really ‘smooth things out’ by matching borrowers and lenders, and hence can be assumed away, perhaps save for one or two ‘frictions’ (transaction costs, interest rate mark-ups). Effectively, we model the economy as if Peter is lending directly to Paul, and from there we suppose that the nominal amount of money lent & borrowed is arbitrary, having no impact on Paul’s ‘real’ activity.

However, as has been comprehensively discussed in the blogosphere, this is not how banks work in the real world. Rather than taking money from Peter and loaning it to Paul, banks simply create a loan for Paul out of nothing. The ‘other side’ of the loan is not Peter’s deposit; it is a deposit that belongs to Paul, created at the same time as his loan, at an amount equal to the loan itself. At the moment the loan is issued, the money supply expands, and when the loan is repaid, the money supply will contract. Hence, the real economic activity Paul engages in is inextricably intertwined with the change in the money supply. The goods and services Paul buys, or the business he starts, or the assets whose price he bids up are a direct consequence of the same decision that expands the money supply. We cannot say that only prices will be affected, as the loan has a clear impact on production and consumption patterns in the real economy.
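A minimal bookkeeping sketch of that point (hypothetical numbers, purely illustrative): the loan and the deposit are created together on the bank’s balance sheet, so the deposit measure of the money supply expands at the moment of lending and contracts on repayment.

```python
# Stylised balance-sheet bookkeeping for 'loans create deposits'.
# Hypothetical figures; the only point is that deposits (money) expand
# when the loan is made and contract when it is repaid.

bank = {"loans": 0.0, "deposits": 0.0}   # assets, liabilities

def money_supply(bank):
    """Treat bank deposits as the measure of broad money."""
    return bank["deposits"]

def make_loan(bank, amount):
    bank["loans"] += amount      # new asset: Paul's loan...
    bank["deposits"] += amount   # ...and a matching deposit for Paul, created simultaneously

def repay_loan(bank, amount):
    bank["loans"] -= amount      # loan extinguished...
    bank["deposits"] -= amount   # ...and the matching deposit destroyed

print(money_supply(bank))        # 0.0
make_loan(bank, 1000)
print(money_supply(bank))        # 1000.0 -- money supply expands at the moment of lending
repay_loan(bank, 1000)
print(money_supply(bank))        # 0.0    -- and contracts as the loan is repaid
```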

Furthermore, the constant extending and repaying of credit means the money supply is always expanding and contracting, with no discernible regularity. This is in stark contrast to the idea that the quantity of money simply moves from one long-term quantity to another, or increases at a constant rate. The idea of an underlying ‘real’ equilibrium simply becomes irrelevant when the nominal economy is constantly shifting like this, as irrelevant as discussing a surf board on calm waters if we want to understand its motion when it’s riding a wave.

Lastly, I previously noted that nominal variables are the variables which are actually observed and used in the real world, and nowhere is this more important than in the financial sector*. It is clear that by doubling the quantity of money in circulation, the real value of debts and assets would halve, which would have a big impact on the economy – imagine waking up to find your savings and mortgage were now worth half as much! Plans would be thwarted; firms, households and the government would find themselves in dramatically different financial situations: better or worse depending on whether they were a debtor or creditor. Bankruptcies and spending sprees would surely ensue. Likely, it would be a highly chaotic situation.

The constant interaction between the real and the nominal – whether due to people’s perceptions, the financial sector, Cantillon effects, or what have you – means that they are impossible to separate. This leads me to question how useful the idea of real variables is, and whether theories should use nominal variables instead. This is especially important when trying to understand the role of assets and the financial sector – in fact, economists’ ‘real’ benchmark, and their adherence to the neutrality of money, which allowed them to gloss over the role of money and finance, surely helped blind them to the financial crisis. Perhaps further acknowledging the importance of money and the nominal could be a positive step forward for economic theory.

*I have seen people suggest that such variables should be made real to ‘correct’ the problem. Well, this was tried in Iceland, and it didn’t work. You simply cannot force the world to behave like theories; you have to do things the other way round.

**This is easy to show using difference or differential equations. Try, for example, plugging values into y(t+1) = y(t)*(1 – a*(y(t) – y’)), where 0 < y < 1, a is some constant, and y’ is the equilibrium value of y. There is a negative feedback loop, yet depending on the value of a and the initial values, the average can be far from y’ for long periods of time.
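A quick simulation of that footnote’s equation makes the point (parameter values are arbitrary illustrations):

```python
# Simulate y(t+1) = y(t) * (1 - a*(y(t) - y_star)): a negative feedback
# towards the equilibrium y_star which nonetheless need not settle there.
y_star = 0.5     # equilibrium value (illustrative)
a = 5.6          # feedback strength: try 1.0 (smooth convergence) vs 5.6
y = 0.3          # initial value, kept inside (0, 1)

path = [y]
for _ in range(1000):
    y = y * (1 - a * (y - y_star))
    path.append(y)

print("equilibrium:", y_star)
print("final value:", path[-1])
print("average of the path:", sum(path) / len(path))
# With a = 1.0 the path converges to y_star; with a = 5.6 it never settles,
# oscillating irregularly, and its long-run average need not equal y_star.
```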


In Praise of Econometrics

Economists often express incredulity toward people who target their criticisms at an amorphous entity called ‘economics’ (perhaps prefixed with ‘neoclassical’ or ‘mainstream’), instead of targeting specific areas of the discipline. They point out that, contrary to the popular view of economists as a group who are excessively concerned with theory, a majority of economic papers are empirical. Sometimes, even the discipline’s most vehement defenders are happy to disown the theoretical areas – such as macroeconomics – which attract the most criticism, whilst still insisting that, broadly speaking, economists are a scientifically minded bunch.

Perhaps surprisingly, I agree somewhat with this perspective. I think there is a disconnect within economics: between the core theories (neoclassical economics, or marginalism) and econometrics.* I believe the former to be logically, empirically and methodologically unsound. However, I believe the latter – though not without its problems – has all the hallmarks of a much better way to do ‘science’. There are several reasons to believe this:

First, econometrics has a far more careful approach to assumptions than marginalism. To start with, you are simply made more aware of the assumptions you use, whereas I find many are implicit in marginalist theory. Furthermore, there is extensive discussion of each individual assumption’s impact, of what happens when each assumption is relaxed, and of what we can do about it. For example: if your time-series data are not weakly stationary (loosely speaking, this means the data oscillates around the same average, with the size of the oscillations also staying, on average, roughly the same, like this) you simply cannot use Ordinary Least Squares (OLS) regression. There is no suggestion that, even though the assumption is false, we can use it as an approximation, or to highlight a key aspect of the problem, or other such hand waving. The method is simply invalidated, and we must use another method, or different data. Such an approach is refreshing and completely at odds with marginalist theory, whose proponents insist on clinging to models – and even applying them broadly – despite a wealth of absurdly unrealistic assumptions.
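To give a flavour of what that discipline looks like in practice, here is a sketch of a routine pre-OLS check (illustrative code on synthetic data; the augmented Dickey-Fuller test is one standard way of testing for a unit root):

```python
# Illustrative stationarity check before using OLS on time-series data,
# using the augmented Dickey-Fuller (ADF) test on synthetic data.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
random_walk = np.cumsum(rng.normal(size=500))   # non-stationary by construction

stat, pvalue, *_ = adfuller(random_walk)
print(f"p-value in levels: {pvalue:.3f}")
if pvalue > 0.05:
    # Cannot reject a unit root: treat the series as non-stationary, so
    # OLS in levels is off the table; difference the data and test again.
    stat_d, pvalue_d, *_ = adfuller(np.diff(random_walk))
    print(f"p-value in first differences: {pvalue_d:.3f}")
```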

Second, econometrics has dealt with criticisms far better and more fundamentally than its theoretical counterpart. The broadest and most pertinent criticism of econometrics was delivered by Edward Leamer in his classic paper ‘Let’s Take the Con Out of Econometrics’. Leamer highlighted the ‘identification problem’ inevitably faced by econometricians. Since econometricians try to isolate causal links, but can rarely do controlled experiments, they must pick and choose which variables they want to include in their model. Yet there are so many variables in the real world that we cannot discern, a priori, which ones are really the ‘key culprits’ in our purported causal chain, so inevitably this choice is something of a judgment call.

The result is that two different econometricians can use econometrics to paint two very different pictures, based on their choice of model. For example, David Hendry famously showed that the link between inflation and rainfall – whichever way it ran – was quite robust. Unfortunately, such absurdity can be much harder to detect in the murky waters of economic data, making purported causal links highly suspect. Leamer chastised his colleagues (and himself) for basing their choice of included variables and key assumptions on “whimsy”, making inference results highly subject to change based on the biases of the author, and which direction they (consciously or unconsciously) have pointed the data in. He pointed out that data on what exactly impacts murder rates could give wildly disparate results based on a few key decisions made by the practitioner.

However, the discipline has, in my opinion, taken the challenge seriously. In 2010, Joshua Angrist & Jörn-Steffen Pischke responded to Leamer, summing up some key changes in the way econometricians use and interpret data. I’ll briefly highlight a few of them:

(1) An increase in the use of data from quasi-randomised trials, whether intended or by ‘natural experiment’. Econometricians have increased the use of the former where they can, but real experiments are hard to come by in social sciences, so they are generally stuck with the latter. One way of exploiting such natural experiments is the ‘differences-in-differences’ approach, which uses natural boundaries such as nation states to estimate whether certain variables are key causal factors. If the murder rate follows roughly the same trend in both the US and Canada, then the trend is surely not attributable to changes in policy in one of them. Such quasi-experiments attack the problem even more fundamentally than Leamer imagined possible, by vastly improving the raw data.
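As a toy illustration of the logic (all figures made up): the difference-in-differences estimate compares the change in the ‘treated’ country with the change in the untreated one, so any trend common to both drops out.

```python
# Toy difference-in-differences calculation with made-up murder-rate figures.
# 'us' enacts a policy between the two periods; 'canada' does not.
us_before, us_after = 6.0, 5.0          # hypothetical rates per 100,000
canada_before, canada_after = 2.0, 1.8  # hypothetical rates per 100,000

did = (us_after - us_before) - (canada_after - canada_before)
print(f"difference-in-differences estimate of the policy effect: {did:+.2f}")
# The common trend cancels out; what remains is the extra change in the US,
# attributed (under the parallel-trends assumption) to the policy.
```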

(2) More common, careful use of methods intended to isolate causality, such as the use of Instrumental Variables (IV). The basic idea here is this: if we have an independent variable x, and a dependent variable y, correlation between them does not imply causation from x to y. So one way we can support the hypothesis of a causal link is by using another variable z, which influences x directly but affects y only through its influence on x. If we then find a correlation between z and y, this is consistent with the idea of a causal link from x to y.

To borrow an example from Wikipedia, consider smoking and health outcomes. We may find a correlation between smoking rates and worse health outcomes, and intuitively suppose that the causation runs from smoking. But ultimately, intuition isn’t enough. So we could use tobacco taxes – which surely affect health outcomes only because they influence smoking rates – as an instrument, and see if they are correlated with worse health outcomes. If they are, then this supports our initial hypothesis; if not, it may be an issue of reverse causation, or some third cause which impacts both smoking and health outcomes. IV and other methods like it are not conclusive, but they certainly bring us closer to the truth, which is surely what science is about.
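A bare-bones version of the two-stage least squares logic behind IV, run on simulated data so the moving parts are visible (all names and numbers are illustrative):

```python
# Two-stage least squares (2SLS) sketch on simulated data.
# z (the instrument) shifts x but affects y only through x; u is an
# unobserved confounder that biases a naive OLS regression of y on x.
import numpy as np

rng = np.random.default_rng(1)
n = 5000
u = rng.normal(size=n)                       # unobserved confounder
z = rng.normal(size=n)                       # instrument (e.g. a tax that shifts x)
x = 1.0 * z + 1.0 * u + rng.normal(size=n)
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect of x on y is 2

def ols_coefs(regressor, outcome):
    X = np.column_stack([np.ones(len(regressor)), regressor])
    return np.linalg.lstsq(X, outcome, rcond=None)[0]   # (intercept, slope)

naive = ols_coefs(x, y)[1]        # biased upwards by the confounder u

# Stage 1: regress x on z and keep the fitted values (the part of x driven by z).
a0, a1 = ols_coefs(z, x)
x_hat = a0 + a1 * z
# Stage 2: regress y on those fitted values.
iv = ols_coefs(x_hat, y)[1]

print(f"naive OLS estimate: {naive:.2f} (biased); 2SLS estimate: {iv:.2f} (close to 2)")
```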

(3) More transparency in, and discussion of, research designs, so that results can be verified and others can (try to) replicate them. It is worth noting that, though Reinhart and Rogoff’s 90% threshold was junk science, they were exposed relatively soon after their data were made available.

The result of all these efforts is that econometrics is much more credible than it was when Leamer wrote his article in 1983 (at which time everyone seemed to agree it was fairly worthless). Hopefully it will continue to improve on this front.

A final, albeit less fundamental, reason I prefer econometrics and econometricians is that the nature of the field, with its numerous uncertainties, naturally demands a more modest interpretation of results. The rigid and hard-to-master framework of neoclassical theory often seems to give those who’ve mastered it the idea that they have been burdened with secret truths about the economy, which they are all too happy to parade on the op-ed pages of widely read papers. In contrast, you are unlikely to find Card & Krueger blithely asserting that the minimum wage has a positive effect on employment, and that anyone who disagrees with them just doesn’t understand econometrics. Perhaps this is just due to differences in the types of people that do theory versus those who do evidence, but I’d be willing to bet it is symptomatic of the generally more measured approach taken by econometricians.

The way forward?

I believe it would be a positive step for economists to opt for theoretical methods more resembling the econometric approach, preferring observed empirical regularities and basic statistical relationships to ‘rigorous’ theory. In fact, I have previously seen Steve Keen’s model referred to as ‘econometrics’, and perhaps this is broadly right in a sense. But it’s more of a compliment than an insult: ditching the straitjacket of marginalism, with its various restrictive assumptions (coupled with insistence that we simply can’t do it any other way), and heading for simple stock-flow relationships between various economic entities could well be a step forwards. It will of course seem like a step backwards to most economists, but then, highly complex models are not correct just because they are highly complex.

As for the Lucas Critique, well, statistical regularities that may collapse upon exploitation can be taken on a case-by-case basis: it’s actually not that difficult to foresee, and even the ‘Bastard-Keynesians’ saw it in the Phillips Curve (as did Keynes). Ironically, it seems economists themselves, blindly believing that they have ‘solved’ this problem, are least aware of it, having only a shallow interpretation of its implications (seemingly, as a gun that fires left). A more dynamic awareness of the relationship between policy and the economy would be a more progressive approach than being shackled by microfoundations.

I am half expecting my regular readers to point out 26723 problems with econometrics that I have not considered. To be sure, econometrics has problems: inferring causality will forever be an issue, as will the cumulative effects of the inevitable judgment calls involved in dealing with data. No doubt, econometrics is prone to misuse. However, it seems to me that most of the problems with econometrics are simply those experienced in all areas of statistics. This is at least a start: I would love, one day, to be able to say that the problems with economic theory were merely those experienced by all social sciences.

*Indeed, this blog would be more accurately titled ‘Unlearning Marginalism’, but obviously that wouldn’t be as catchy or irritatingly provocative.


Helping Economists Escape Economics

There are plenty of economists who will happily admit the limits of their discipline, and be nominally open to the idea of other theories. However, I find that when pushed on this, they reveal that they simply cannot think any other way than roughly along the lines of neoclassical economics. My hypothesis is that this is because economists’ approach has a ‘neat and tidy’ feel to it: people are ‘well-behaved’; markets tend to clear; people are, on average, right about things; and so forth. Therefore, economists’ immediate reaction to criticisms is “if not our approach, then what? It would be modelling anarchy!”

One such example of this argument is Chris Dillow, in his discussion of rationality in economics:

Now, economists have conventionally assumed rational behaviour. There’s a reason for this. Such an assumption generates testable predictions, whereas if we assume people are mad then anything goes.

However, as I and others have pointed out, people do not have two mindsets: ‘rational’, where they maximise utility, and ‘irrational’, where they go completely insane and chuck cats at people. People can behave somewhat predictably without being strictly ‘rational’ in economists’ sense of the word, and falsifiable predictions and clear policy prescriptions can be made based on this behaviour.

One example of this is Daniel Kahneman’s ‘Type 1’ versus ‘Type 2’ thinking. Type 1 thinking covers the things you do without thinking: making a cup of tea, walking, breathing. People use a lot of mental shortcuts and heuristics with Type 1 thinking, helping to avoid lengthy calculations for everyday actions. Type 2 thinking, on the other hand, is the type of thinking one does when learning something new or solving a problem. It is far slower, more careful and more time-consuming. Hence, it is saved only for things that are new and/or important.

So what are the implications of this? Well, there are many, but a major thing it helps to explain is implied contracts. Most purchases do not require one to sign a contract, and even when one is signed, who really has the time or expertise to read through the whole thing? So studying how people think – or don’t – when engaging in everyday transactions can help courts decide what exactly they have agreed to. In fact, the Type 1/Type 2 disparity highlights an opportunity for exploitation: a company with a large legal department, which can draft the terms of doing business with it using ‘Type 2’ thinking, has an obvious advantage over a customer who wants to get in and out and has many other things to think about. Such considerations could be highly relevant when deciding whether or not someone ‘agreed’ to certain add-ons when taking out a credit card.

So the discussion of rationality versus irrationality is something of a red herring. Yet I expect economists will still question how we can model people’s economic behaviour if we don’t appeal to some semi-rational ordering of preferences. This mentality was reflected in my comments by an occasional sparring partner of mine, Luis Enrique:

Even if you tried to discard utility…you would end up implicitly appealing to some thing very similar to utility (which is just a convenient means of representing preferences) if you want to say anything about what people buy at what prices.

The issue here is that economists are predisposed to believe that we need to appeal to individual preferences to understand consumption. They will then assert that all utility really requires is that people have preferences and that they don’t order them nonsensically, and ask what exactly the problem with utility is.

However, as I have previously argued, utility does not only require that preferences are complete, transitive and so forth, but also that they are fixed: that is, individuals have a set of preferences that remains the same at least long enough to be useful for analysis (and in some neoclassical models, preferences are the same for an agent’s entire lifespan). Yet evidence suggests that individual preferences are highly volatile, varying across time and depending heavily on the situation. How exactly could something so hard to pin down be useful?

The truth is that most preferences are shaped by social conventions, by situations and by how options are presented to the consumer. What’s more, these things tend to stick around longer than individual preferences. Fashion is the most obvious example here: ultimately, this season’s trends are determined by a relatively small group of people in key companies, and consumers simply copy everyone else. In many ways the individual preference does not exist; it is created by circumstance and copied. If we want to understand fashion choices there is little to be gained from building a model around a utility-maximising individual in a vacuum: we can simply look at trends and assume a certain proportion of people will follow them. In other words, like all of economics, the micro level needs macrofoundations.
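
As a rough illustration of this trend-following point, here is a toy sketch in which consumers simply copy this season’s trend with some probability, rather than maximising a fixed utility function. The adoption rate, the number of styles and the population size are all arbitrary assumptions of mine, chosen only to show that aggregate behaviour can be predicted without any appeal to individual utility.

```python
import random

# A toy illustration (my own, purely hypothetical): consumers copy the
# prevailing trend with some probability rather than maximising a fixed
# utility function. 'adoption_rate', the number of styles and the population
# size are arbitrary assumptions.

def fashion_season(population=1000, adoption_rate=0.6, n_styles=5, seed=0):
    rng = random.Random(seed)
    trend = rng.randrange(n_styles)  # this season's trend, set by a few key firms
    choices = []
    for _ in range(population):
        if rng.random() < adoption_rate:
            choices.append(trend)                    # copy the trend
        else:
            choices.append(rng.randrange(n_styles))  # pick idiosyncratically
    share = choices.count(trend) / population
    return trend, share

trend, share = fashion_season()
print(f"trend style {trend} adopted by {share:.0%} of consumers")
```

Knowing the adoption rate – an empirical regularity – tells us roughly what aggregate demand for the trend will look like, without a single indifference curve.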

This mentality extends to the highest echelons of economic modelling, and culminates in the ‘DSGE or die’ approach, described well on Noah Smith’s blog by Roger Farmer:

If one takes the more normal use of disequilibrium to mean agents trading at non-Walrasian prices, … I do not think we should revisit that agenda. Just as in classical and new-Keynesian models where there is a unique equilibrium, the concept of disequilibrium in multiple equilibrium models is an irrelevant distraction.

This spurred a puzzled rebuttal from J W Mason:

The thing about the equilibrium approach, as Farmer presents it, isn’t just that it rules out the possibility of people being systematically wrong; it rules out the possibility that they disagree. This strikes me as a strong and importantly empirically false proposition.

When questioned about his approach, Farmer would probably ask: if we do not assume that markets tend to clear, and that agents are, on average, correct, then what exactly do we assume? A harsh evaluation would be to suggest this is really an argument from personal incredulity. There is simply no need to assume markets tend to clear to build a theory – John Maynard Keynes showed us as much in The General Theory, a book economists seem to have a hard time understanding precisely because it doesn’t fit their approach. Furthermore, the physical sciences have shown us that systems can be chaotic but modellable, and even follow recognisable paths.

A great, simple and testable disequilibrium theory was given to us by Hyman Minsky with his Financial Instability Hypothesis. His idea was that in relatively stable times, investors and firms will make good returns on their various ventures. Seeing these good returns, they will decide that in the next period they will invest a little more, take a little more risk, borrow a little more money. As long as they generate returns, this process will continue and the average level of risk-taking will increase. Eventually – and inevitably – some investors will overextend themselves into debt-fuelled speculation, creating bubbles and crashes. Once the dust has settled, everyone will be far more cautious and the whole thing will start again. Clearly, constructing a disequilibrium scenario is not intellectual anarchy: in fact, the actors in this scenario are behaving pretty rationally.
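
To show that such a process is perfectly modellable, here is a crude numerical caricature of the mechanism just described: risk appetite ratchets up while returns are good, until leverage passes a fragility threshold and collapses, after which caution returns and the cycle begins again. This is my own toy illustration, not Minsky’s model, and every threshold and rate in it is an arbitrary assumption.

```python
# A toy numerical sketch of the mechanism described above (my own illustration,
# not Minsky's or anyone's published model). All thresholds and rates are
# arbitrary assumptions chosen only to generate a recognisable cycle.

def minsky_cycle(periods=40, leverage=1.0, ratchet=0.15,
                 fragility_threshold=3.0, post_crash_leverage=0.8):
    path = []
    for t in range(periods):
        if leverage > fragility_threshold:
            # debt-fuelled positions unwind: a crash, followed by renewed caution
            leverage = post_crash_leverage
            path.append((t, round(leverage, 2), "crash"))
        else:
            # good returns encourage a little more risk and borrowing next period
            leverage *= 1 + ratchet
            path.append((t, round(leverage, 2), "expansion"))
    return path

for t, lev, phase in minsky_cycle():
    print(f"t={t:2d} leverage={lev:4.2f} {phase}")
```

Even this caricature generates recurring booms and busts without assuming that markets clear or that agents are, on average, correct.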

Ultimately, the only thing stopping economists exploring new ideas is economists. There is a wide breadth of non-equilibrium, non-market-clearing and non-rational modelling going on. Economists have a stock of reasons that these are wrong: the Lucas Critique, Milton Friedman’s methodology, the ‘as if’ argument and so forth. Yet they often fail to listen to the counterarguments to these points and simply invoke them to justify sticking with their preferred approach. If economists really want to broaden the scope of the discipline, rather than merely tweaking it around the edges, they must be prepared to understand how alternative approaches work, and why they can be valid. Otherwise they will continue to give the impression – right or wrong – of ivory tower intellectuals, completely out of touch with reality and closed off from new ideas.

Economists and the ‘As If’ Argument

Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.

To illustrate this, consider Milton Friedman’s famous exposition of the as if argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but who behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game using such equations, even though this wouldn’t strictly describe how the player actually decides their shots.

There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what makes the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for the social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness, making perfectly fungible trade-offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what makes the study of economics worthwhile.

I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to adjust their spending optimally after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from the predictions of economic theory. That example alone completely invalidates Ricardian Equivalence, which holds that households save a debt-financed tax cut in full to cover the future taxes it implies, so that the cut has no effect on demand. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.
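
Here is a back-of-the-envelope version of Krugman’s point, using purely illustrative numbers of my own: under strict Ricardian Equivalence a debt-financed tax cut is saved in full, so consumption does not move; if even a modest share of households simply spends the cut instead, the equivalence result breaks immediately.

```python
# A back-of-the-envelope illustration (my own numbers, purely hypothetical):
# optimising households save a debt-financed tax cut in full, so it has no
# effect on consumption; 'rule-of-thumb' households just spend most of it.
# 'spend_rate' and the household shares are arbitrary assumptions.

def consumption_response(tax_cut, rule_of_thumb_share, spend_rate=0.9):
    """Change in aggregate consumption for a given tax cut."""
    ricardian_part = (1 - rule_of_thumb_share) * 0.0         # optimisers save it all
    rule_of_thumb_part = rule_of_thumb_share * spend_rate * tax_cut
    return ricardian_part + rule_of_thumb_part

for share in (0.0, 0.1, 0.3):
    print(f"{share:.0%} rule-of-thumb households -> "
          f"consumption rises by {consumption_response(100, share):.0f} per 100 of tax cut")
```

The deviation from full rationality is tiny; the deviation from the theory’s prediction is not.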

If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Deviations such as how the banking system actually works; how firms actually set prices; behavioural quirks; the fact that marginal products cannot be well defined; the fact that capital can move across borders; and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)

I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is only done one at a time, preserving the other assumptions. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.

I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to the emergent properties of individually optimal behaviour. In any case, economists should be wary of the as if argument even without accepting my crazy heterodox position.

The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Insofar as ‘as if’ reasoning is ever justified, it is only when we don’t know how the system actually works. Once we do know how the system works – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.
