Posts Tagged Criticisms of neoclassicism

The Crisis & Economics, Part 5: “Shhh! We’re Working On It”

This is part 5 in my series on how the financial crisis is relevant for economics (parts 1, 2, 3 & 4 are here). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post explores the possibility that macroeconomics, even if it failed before the crisis, has responded to its critics and is moving forward.

Argument #5: “We got this one wrong, sure, but we’ve made (or are making) progress in macroeconomics, so there’s no need for a fundamental rethink.”

Many macroeconomists deserve credit for their mea culpa and subsequent refocus following the financial crisis. Nevertheless, the nature of the rethink, particularly the unwillingness to abandon certain modelling techniques and ideas, leads me to question whether progress can be made without a more fundamental upheaval. To see why, it will help to have a brief overview of how macro models work.

In macroeconomic models, the optimisation of agents means that economic outcomes such as prices, quantities, wages and rents adjust to the conditions imposed by input parameters such as preferences, technology and demographics. A consequence of this is that sustained inefficiency, unemployment and other disorderly outcomes usually occur only when something ‘gets in the way’ of this adjustment. Hence economists introduce ad hoc modifications such as sticky prices, shocks and transaction costs to generate sub-optimal behaviour: for example, if a firm’s cost of changing prices exceeds the benefit, prices will not be changed and the outcome will not be Pareto efficient. Since there are countless ways in which the world ‘deviates’ from the perfectly competitive baseline, it’s mathematically troublesome (or impossible) to include every possible friction. The result is that macroeconomists tend to decide which frictions are important based on real-world experience: since the crisis, the focus has been on finance. On the surface this sounds fine – who isn’t for informing our models with experience? However, it is my contention that this approach does not offer us any more understanding than experience alone.
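
To make this concrete, here is a minimal sketch of the menu-cost story just described (the quadratic loss function and all numbers are my own invention, not taken from any particular model): a firm changes its posted price only when the gain from adjusting exceeds the fixed cost of doing so.

```python
# A minimal menu-cost sketch of the friction described above.
# The quadratic loss and all numbers are invented for illustration.

MENU_COST = 1.0          # fixed cost of changing the posted price

def profit_loss(posted, optimal):
    """Profit forgone by being away from the optimal price."""
    return (posted - optimal) ** 2

def new_price(posted, optimal):
    # Adjust only if the gain from adjusting beats the menu cost.
    if profit_loss(posted, optimal) - MENU_COST > 0:
        return optimal   # pay the cost, jump to the optimum
    return posted        # stay put: the price is 'sticky'

price = 10.0
for optimal in [10.2, 10.5, 11.5, 11.6]:   # the optimal price drifts up
    price = new_price(price, optimal)
    print(f"optimal={optimal:.1f}  posted={price:.1f}")
```

Prices sit still through small shocks and jump after large ones, so the outcome deviates from the frictionless, Pareto-efficient benchmark in exactly the way described above.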

Perhaps an analogy will illustrate this better. I was once walking past a field of cows as it began to rain, and I noticed some of them start to sit down. It occurred to me that there was no use in them doing this after the storm had started; they are supposed to give us adequate warning by sitting down before it happens. Sitting down during a storm just tells us what we already know. Similarly, although the models used by economists and policy makers did not predict and could not account for the crisis before it happened, they have since built models that try to do so. They generally do this by attributing the crisis to frictions that revealed themselves to be important during the crisis. Ex post, a friction can always be found to make models behave a certain way, but the models do not make identifying the source of problems before they happen any easier, and they don’t add much afterwards, either – we certainly didn’t need economists to tell us finance was important following 2008. In other words, when a storm comes, macroeconomists promptly sit down and declare that they’ve solved the problem of understanding storms. It becomes difficult to escape the circularity of defining the relevant friction by its outcome, which strips the idea of ‘frictions’ of predictive power or falsifiability.

There is also the open question of whether understanding the impact of a ‘friction’ relative to a perfectly competitive baseline entails understanding its impact in the real world. As theorists from Joe Stiglitz to Yanis Varoufakis have argued, neoclassical economics is trapped in a permanent fight against indeterminacy: the quest to understand things relative to a perfectly competitive, microfounded baseline leads to aggregation problems and intractable complexities that, if included, result in “anything goes” conclusions. To put it another way, the real world is so complex and full of frictions that whichever mechanics would be driving the perfectly competitive model are swamped. The actions of individual agents are so intertwined that their aggregate behaviour cannot be predicted from each of their ‘objective functions’. Consequently, our knowledge of the real world must be informed either by models which use different methodologies or, more crucially, by historical experience.

Finally, the ad hoc approach also contradicts another key aspect of contemporary macroeconomics: microfoundations. The typical justification for these is that, to use the words of the ECB, they impose “theoretical discipline” and are “less subject to the Lucas critique” than a simple VAR, Old Keynesian model or another more aggregative framework. Yet even if we take those propositions to be true, the modifications and frictions that are so crucial to making the models more realistic are often not microfounded, sometimes taking the form of entirely arbitrary, exogenous constraints. Even worse is when the mechanism is profoundly unrealistic, such as prices being sticky because firms are randomly unable to change them for some reason. In other words, macroeconomics starts by sacrificing realism in the name of rigour, but reality forces it in the opposite direction, and the end result is that it has neither.
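
The ‘randomly unable to change prices’ device has a name, Calvo pricing, and a quick sketch shows how stark the assumption is (parameter values are mine, for illustration):

```python
# Calvo-style stickiness: each period a firm gets permission to
# reset its price with fixed probability 1 - theta, regardless of
# how badly it needs to. Parameters are illustrative.
import random

random.seed(1)
theta = 0.75                  # P(stuck) each period, however wrong the price
price, optimal = 10.0, 10.0

for t in range(12):
    optimal *= 1.05           # steady 5% drift in the optimal price
    if random.random() > theta:
        price = optimal       # the 'Calvo fairy' visits
    print(f"t={t:2d}  optimal={optimal:6.2f}  posted={price:6.2f}")
```

Nothing about the firm's circumstances determines when it adjusts; a lottery does. That is the microfoundation.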

Macroeconomists may well defend their approach as just a ‘story telling’ approach, from which they can draw lessons but which isn’t meant to hold in the same manner as engineering theory. Perhaps this is defensible in itself, but (a) personally, I’d hope for better and (b) in practice, this seems to mean each economist can pick and choose whichever story they want to tell based on their prior political beliefs. If macroeconomists are content conversing in mathematical fables, they should keep these conversations to themselves and refrain from forecasting or using them to inform policy. Until then, I’ll rely on macroeconomic frameworks which are less mathematically ‘sophisticated’, but which generate ex ante predictions that cover a wide range of observations, and which do not rely on the invocation of special frictions to explain persistent deviations from these predictions.


12 Comments

The Illusion of Mathematical Certainty

Nate Silver’s questionable foray into predicting World Cup results got me thinking about the limitations of maths in economics (and the social sciences in general). I generally stay out of this discussion because it’s completely overdone, but I’d like to rebut a popular defence of mathematics in economics that I don’t often see challenged. It goes something like this:

Everyone has assumptions implicit in the way they view the world. Mathematics allows economists to state our assumptions clearly and make sure our conclusions follow from our premises so we can avoid fuzzy thinking.

I do not believe this argument stands on its own terms. A fuzzy concept does not become any less fuzzy when you attach an algebraic label to it and stick it into an equation with other fuzzy concepts to which you’ve attached algebraic labels (a commenter on Noah Smith’s blog provided a great example of this by mathematising Freud’s Oedipus complex and pointing out it was still nonsense). Similarly, absurd assumptions do not become any less absurd when they are stated clearly and transparently, and especially not when any actual criticism of these assumptions is brushed off on the grounds that “all models are simplifications”.

Furthermore, I’m not convinced that using mathematics actually brings implicit assumptions out into the open. I can’t count the number of times I’ve seen people invoke demand-supply without understanding that it is built on the assumption of perfect competition (and refusing to acknowledge this point when challenged). The social world is inescapably complex, so there is an overwhelming variety of assumptions built into any type of model, theory or argument that tries to understand it. These assumptions generally remain unstated until somebody who is thinking about an issue – with or without mathematics – comes along and points out their importance.

For example, consider Michael Sandel’s point that economic theory assumes the value or characteristics of commodities are independent of their price and sale, and once you realise this is unrealistic (for example with sex), you come to different conclusions about markets. Or Robert Prasch’s point that economic theory assumes that for any pair of commodities there is some price at which one will be substituted for the other, which implies that at some price you’d substitute beer for your dying sister’s healthcare*. Or William Lazonick’s point that economic theory presumes labour productivity to be innate and transferable, whereas many organisations these days benefit from moulding their employees’ skills to be organisation-specific. I could go on, but the point is that economic theory remains full of implicit assumptions. Understanding and modifying these is a neverending battle that mathematics does not come close to solving.
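
Prasch’s point is easy to demonstrate. Under a textbook Cobb-Douglas utility function (my choice of functional form, with invented numbers), the consumer mechanically spends a fixed share of income on each good, so no price is ever high enough to make the substitution stop:

```python
# Gross substitution built into a textbook utility function.
# U = beer**0.5 * care**0.5 (Cobb-Douglas, an arbitrary choice):
# the consumer always splits income 50/50 between the two goods,
# whatever 'care' is and however dear it becomes.

income, p_beer = 100.0, 2.0

for p_care in [10, 100, 1_000, 10_000]:
    beer = 0.5 * income / p_beer    # fixed expenditure shares
    care = 0.5 * income / p_care
    print(f"p_care={p_care:>6}  beer={beer:.0f}  care={care:.4f}")
```

However dear ‘care’ becomes, the function happily trades it off against beer at the margin; nothing in the maths can express a good that is not substitutable.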

Let me stress that I am not arguing against the use of mathematics; I’m arguing against using gratuitous, bad mathematics as a substitute for interesting and relevant thinking. If we wish to use mathematics properly, it is not enough to express properties algebraically; we have to define the units in which these properties are measured. No matter how logical mathematics makes your theory appear, if the units of its key parameters are poorly defined, its equations will not balance dimensionally and the theory will be logical nonsense. Furthermore, it has to be demonstrated that the maths is being used to come to new, falsifiable conclusions, rather than to rationalise things we already know. Finally, it should never be presumed that stating a theory mathematically somehow guards that theory against fuzzy thinking, poor logic or unstated assumptions. There is no reason to believe it is a priori desirable to use mathematics to state a theory or explore an issue, as some economists seem to think.

*This has a name in economics: the axiom of gross substitution. However, it often goes unstated or at least underexplored: for example, these two popular microeconomics texts do not mention it at all.


21 Comments

The Crisis & Economics, Part 1: The Boom & The Bust

For critics of mainstream economics, the 2008 financial crisis represents the final nail in the coffin for a paradigm that should have died decades ago. Not only did economists fail to see it coming, they can’t agree on how to get past it and they have yet to produce a model that can understand it fully. On the other hand, economists tend to see things quite differently – in my experience, your average economist will concede that although the crisis is a challenge, it’s a challenge that has limited implications for the field as a whole. Some go even further and argue that it is all but irrelevant, whether due to progress being made in the field or because the crisis represents a fundamentally unforeseeable event in a complex world.

I have been compiling the most common lines used to defend economic theory after the crisis, and will consider each of them in turn in a series of 7 short posts (it was originally going to be one long post, but it got too long). I’ve started with what I consider the weakest argument, with the quality increasing as the series goes on. Hopefully this will be a useful resource to further debate and prevent heterodox and mainstream economists (and the public) talking past each other. Let me note that I do not intend these posts as simple ‘rebuttals’ of every argument (though some, especially the weaker ones, are rebutted), but as a cumulative critique. Neither am I accusing all economists of endorsing all of the arguments presented here (especially the weaker ones).

Argument #1: “We did a great job in the boom!”

I’ve seen this argument floating around, and it actually takes two forms. The first, most infamously used by Alan Greenspan – and subsequently mocked by bloggers – is a political defence of boom-bust, or even capitalism itself: the crisis, and others like it, are just noise around a general trend of progression, and we should be thankful for this progression instead of focusing on such minor hiccups. The second form is more of a defence of economic theory: since the theory does a good job of explaining/predicting the boom periods, which apply most of the time, it’s at least partially absolved of failing to ‘predict’ the behaviour of the economy. Both forms of the argument suffer from the same problems.

First, something which is expected to do a certain job – whether it’s an economic system or the economists who study it – is expected to do this job all the time. If an engineer designs a bridge, you don’t expect it to stand up most of the time. If your partner promises to be faithful, you don’t expect them to do so most of the time. If your stock broker promises to make money but loses it after an asset bubble bursts, you won’t be comforted by the fact that they were making money before the bubble burst. And if an economic system, or set of policies, promises to deliver stability, employment and growth, then the fact that it fails to do so every 7 years means that it is not achieving its stated objectives. In other words, the “invisible hand” cannot be acquitted of the charge of failing to do its job by arguing that it only fails every so often.

Second, the argument implies there was no causal link between the boom and the bust, so the stable period can be understood as separate from the unstable period. Yet if the boom and the bust are caused by the same process, then understanding one entails understanding the other. In this case, the same webs of credit which fuelled the boom created enormous problems once the bubble burst and people found their incomes scarce relative to their accumulated debts. Models which failed to spot this process in its first phase inevitably missed (and misdiagnosed) the second phase. As above, the job of macroeconomic models is to understand the economy, which entails understanding it at all times, not just when nothing is going wrong – which is when we need them least.

As a final note, I can’t help but wonder if this argument, even in its general political form, has roots in economic theory. Economic models (such as the Solow Growth Model) often treat the boom as the ‘underlying’ trend, buffeted only by exogenous shocks or slowed/stopped by frictions. A lot of the major macroeconomic frameworks (such as Infinite Horizons or Overlapping Generations models) have two main possibilities: a steady-state equilibrium path, or complete breakdown. In other words, either things are going well or they aren’t – and if they aren’t, it’s usually because of an easily identifiable mechanism, one which constitutes a “notably rare exception” to the underlying mechanics of the model. Such a mentality implies problems, including recessions, are not of major analytical interest, or are at least easily diagnosed and remedied by a well-targeted policy. Consequently, those versed in economic theory may have trouble envisaging a more complex process, whereby a seemingly tranquil period can contain the seeds of its own demise. This causes a mental separation of the boom and the bust periods, resulting in a failure to deal with either.
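
The point is easy to verify: a bare-bones Solow model (textbook equations, illustrative parameter values) glides monotonically to its steady state from any starting point, so a bust can only ever arrive from outside the model.

```python
# Textbook Solow growth model, sketched with illustrative numbers:
# k' = s*k**a + (1 - d)*k. Capital per worker converges smoothly
# to the steady state; busts can only come from 'exogenous shocks'.

s, a, d = 0.3, 0.3, 0.1        # saving rate, capital share, depreciation

k = 1.0                         # start well below the steady state
for t in range(101):
    if t % 20 == 0:
        print(f"t={t:3d}  k={k:7.3f}")
    k = s * k**a + (1 - d) * k

# Steady state: s*k**a = d*k  =>  k* = (s/d)**(1/(1-a))
print("k* =", (s / d) ** (1 / (1 - a)))
```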

The next instalment in the series will be part 2: the EMH-twist.


31 Comments

Pieria: How Not to Do Macroeconomics, Part II

I have a new post on Pieria, following up on mainstream macro and secular stagnation. The beginning is a restatement of my critique of EM/a response to Simon Wren-Lewis, but the nub of the post is (hopefully) a more constructive effort at macroeconomics, from a heterodox perspective:

There are two major heterodox theories which help to understand both the 2008 crisis and the so-called period of ‘secular stagnation’ before and after it happened: Karl Marx’s Tendency of the Rate of Profit to Fall (TRPF), and Hyman Minsky’s Financial Instability Hypothesis (FIH). I expect that neither of these would qualify as ‘precise’ or ‘rigorous’ enough for mainstream economists – and I’ve no doubt the mere mention of Marx will have some reaching for the Black Book of Communism – but the models are relatively simple, offer an understanding of key mechanisms and also make empirically testable predictions. What’s more, they do not merely isolate abstract mechanisms, but form a general explanation of the trends in the global economy over the past few decades (both individually, but even more so when combined). Marx’s declining RoP serves as a material underpinning for why secular stagnation and financialisation get started, while Minsky’s FIH offers an excellent description of how they evolve.

I have two points that I wanted to add, but thought they would clog up the main post:

First, in my previous post, I referenced Stock-Flow Consistent models as one promising future avenue for fully-fledged macroeconomic modelling, a successor to DSGE. Other candidates might include Agent-Based Modelling, models in econophysics or Steve Keen’s systems dynamics approach. However, let me say that – as far as I’m aware – none of these approaches yet reach the kind of level I’m asking of them. I endorse them on the basis that they have more realistic foundations, and have had fewer intellectual resources poured into them than macroeconomic models, so they warrant further exploration. But for now, I believe macroeconomics should walk before it can run: clearly stated, falsifiable theories, which lean on maths where needed but do not insist on using it no matter what, are better than elaborate, precisely stated theories which are so abstract it’s hard to determine how they are relevant at all, let alone falsify them.

Second, these are just two examples, coloured no doubt by my affiliation with what you might call left-heterodox schools of thought. However, I’m sure Austrian economics is quite compatible with the idea of secular stagnation, since their theory centres around how credit expansion and/or low interest rates cause a misallocation of investment, resulting in unsustainable bubbles. I leave it to those more knowledgeable about Austrian economics than me to explore this in detail.

 


4 Comments

How Not to Do Macroeconomics

A frustrating recurrence for critics of ‘mainstream’ economics is the assertion that they are criticising the economics of bygone days: that those phenomena which they assert economists do not consider are, in fact, at the forefront of economics research, and that the critics’ ignorance demonstrates that they are out of touch with modern economics – and therefore not fit to criticise it at all.

Nowhere is this more apparent than with macroeconomics. Macroeconomists are commonly accused of failing to incorporate features of the financial sector such as debt, bubbles and even banks themselves, but while this was true pre-crisis, many contemporary macroeconomic models do attempt to include such things. The renowned economist Thomas Sargent charged that such criticisms “reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished.” So what has it accomplished? One attempt to model the ongoing crisis using modern macro is this recent paper by Gauti Eggertsson & Neil Mehrotra, which tries to understand secular stagnation within a typical ‘overlapping generations’ framework. It’s quite a simple model, deliberately so, but it helps to illustrate the troubles faced by contemporary macroeconomics.

The model

The model has only 3 types of agents: young, middle-aged and old. The young borrow from the middle-aged, who receive an income, some of which they save for old age. Predictably, the model employs all the standard techniques that heterodox economists love to hate, such as utility maximisation and perfect foresight. However, the interesting mechanics here are not in these; instead, what concerns me is the way ‘secular stagnation’ itself is introduced. In the model, the limit to how much young agents are allowed to borrow is exogenously imposed, and deleveraging/a financial crisis begins when this amount falls for unspecified reasons. In other words, in order to analyse deleveraging, Eggertsson & Mehrotra simply assume that it happens, without asking why. As David Beckworth noted on Twitter, this is simply assuming what you want to prove. (They go on to show similar effects can occur due to a fall in population growth or an increase in inequality, but again, these changes are modelled as exogenous.)
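
To see the mechanics, here is a toy three-generation loan market in the spirit of the model; it is my own drastic simplification with invented numbers, not the paper’s equations. The young borrow up to an exogenous cap D, the middle-aged lend their savings, and the real rate r clears the market by bisection:

```python
# A toy loan market loosely in the spirit of Eggertsson & Mehrotra,
# NOT their exact equations. Young agents borrow up to an exogenous
# cap D; middle-aged agents (income Y, log utility, having repaid
# last period's cap D_prev) lend their savings; r clears the market.

def clearing_rate(D, D_prev, Y=100.0, beta=0.96):
    lo, hi = -0.99, 10.0
    for _ in range(100):                           # bisect on excess demand
        r = (lo + hi) / 2
        demand = D / (1 + r)                       # young borrow against the cap
        supply = beta / (1 + beta) * (Y - D_prev)  # middle-aged lending
        if demand > supply:
            lo = r                                 # excess demand: rate rises
        else:
            hi = r
    return r

print(clearing_rate(D=40, D_prev=40))   # before the 'shock': r is positive
print(clearing_rate(D=20, D_prev=40))   # cap halves, because reasons: r goes negative
```

The entire ‘crisis’ is the line where D is halved by hand, which is precisely the problem: the model assumes the deleveraging it is supposed to explain.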

It gets worse. Recall that the idea of secular stagnation is, at heart, a story about how over the last few decades we have not been able to create enough demand with ‘real’ investment, and have subsequently relied on speculative bubbles to push demand to an acceptable level. This was certainly the angle from which Larry Summers and subsequent commentators approached the issue. It’s therefore surprising – ridiculous, in fact – that this model of secular stagnation doesn’t include banks, and has only one financial instrument: a risk-less bond that agents use to transfer wealth between generations. What’s more, as the authors state, “no aggregate savings is possible (i.e. there is no capital)”. Yes, you read that right. How on earth can our model understand why there is not enough ‘traditional’ investment (i.e. capital formation), and why we need bubbles to fill that gap, if we can have neither investment nor bubbles?

Naturally, none of these shortcomings stop Eggertsson & Mehrotra from proceeding, and ending the paper in economists’ favourite way…policy prescriptions! Yes, despite the fact that this model is not only unrealistic but quite clearly unfit for purpose on its own terms, and despite the fact that it has yielded no falsifiable predictions (?), the authors go on to give policy advice about redistribution, monetary and fiscal policy. Considering this paper is incomprehensible to most of the public, one is forced to wonder to whom this policy advice is accountable. Note that I am not implying policymakers are puppets on the strings of macroeconomists, but things like this definitely contribute to debate – after all, secular stagnation was referenced by the Chancellor in UK parliament (though admittedly he did reject it). Furthermore, when you have economists with a platform like Paul Krugman endorsing the model, it’s hard to argue that it couldn’t have at least some degree of influence on policy-makers.

Now, I don’t want to make general comments solely on the basis of this paper: after all, the authors themselves admit it is only a starting point. However, some of the problems I’ve highlighted here are not uncommon in macro: a small number of agents on whom some rather arbitrary assumptions are imposed to create loosely realistic mechanics, and an unexplained ‘shock’ used to create a crisis. This is true of the earlier, similar paper by Eggertsson & Krugman, which tries to model debt-deflation using two types of agents: ‘patient’ agents, who save, and ‘impatient’ agents, who borrow. Once more, deleveraging begins when the exogenously imposed constraint on the impatient agents’ borrowing falls For Some Reason, and differences in the agents’ respective consumption levels reduce aggregate demand as the debt is paid back. Again, there are no banks, no investment and no real financial sector. Similarly, even the far more sophisticated model by Markus K. Brunnermeier & Yuliy Sannikov – which actually includes investment and a financial sector – still has only two agents, and relies on exogenous shocks to drive the economy away from its steady state.

Whither macroeconomics?

Why do so many models seem to share these characteristics? Well, perhaps thanks to the Lucas Critique, macroeconomic models must be built up from optimising agents. Since modelling human behaviour is inconceivably complex, mathematical tractability forces economists to make important parameters exogenous, and to limit the number (or number of types) of agents in the model, as well as these agents’ goals & motivations. Complicated utility functions which allow for fairly common properties like relative status effects, or different levels of risk aversion at different incomes, may be possible to explore in isolation, but they are not generalisable: include too many of them and the models become impossible to solve, or indeterminate. The result is that a model which tries to explore something like secular stagnation can end up being highly stylised, to the point of missing the most important mechanics altogether. It will also be unable to incorporate other well-known developments from elsewhere in the field.

This is why I’d prefer something like Stock-Flow Consistent models, which focus on accounting relations and flows of funds, to be the norm in macroeconomics. As economists know all too well, all models abstract from some things, and when we are talking about big, systemic problems, it’s not particularly important whether Maria’s level of consumption is satisfying a utility function. What’s important is how money and resources move around: where they come from, and how they are split – on aggregate – between investment, consumption, financial speculation and so forth. This type of methodology can help understand how the financial sector might create bubbles; or why deficits grow and shrink; or how government expenditure impacts investment. What’s more, it will help us understand all of these aspects of the economy at the same time. We will not have an overwhelming number of models, each highlighting one particular mechanic, with no ex ante way of selecting between them, but one or a small number of generalisable models which can account for a large number of important phenomena.
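
For readers unfamiliar with the approach, the flavour is easy to convey. Below is a sketch of the simplest model in Godley & Lavoie’s Monetary Economics, usually called SIM (my parameter values are illustrative): government money is the only asset, and every flow comes from somewhere and goes somewhere.

```python
# The simplest stock-flow consistent model: 'SIM' from Godley &
# Lavoie's Monetary Economics (parameter values are illustrative).
# Government money is the only asset; the books balance by design.

G, theta = 20.0, 0.2     # government spending; tax rate
a1, a2 = 0.6, 0.4        # consumption out of income and out of wealth
H = 0.0                  # household money balances (= government debt)

for t in range(1, 61):
    # Within-period solution of Y = C + G with C = a1*(1-theta)*Y + a2*H:
    Y = (G + a2 * H) / (1 - a1 * (1 - theta))
    YD = (1 - theta) * Y             # disposable income
    C = a1 * YD + a2 * H             # consumption
    H += YD - C                      # saving accumulates as money
    if t in (1, 5, 20, 60):
        print(f"t={t:2d}  Y={Y:6.2f}  H={H:6.2f}")

print("steady state Y* = G/theta =", G / theta)
```

Income converges to G/theta without any agent optimising anything; the dynamics fall out of the accounting.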

Finally, to return to the opening paragraph, this paper may help to illustrate a lesson for both economists and their critics. The problem is not that economists are not aware of or never try to model issue x, y or z. Instead, it’s that when they do consider x, y or z, they do so in an inappropriate way, shoehorning problems into a reductionist, marginalist framework, and likely making some of the most important working parts exogenous. For example, while critics might charge that economists ignore mark-up pricing, the real problem is that when economists do include mark-up pricing, the mark-up is over marginal rather than average cost, which is not what firms actually do. While critics might charge that economists pay insufficient attention to institutions, a more accurate critique is that when economists include institutions, they are generally considered as exogenous costs or constraints, without any two-way interaction between agents and institutions. While it’s unfair to say economists have not done work that relaxes rational expectations, the way they do so still leaves agents pretty damn rational by most people’s standards. And so on.
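
The mark-up point deserves a two-line illustration (all numbers invented): with fixed costs in the picture, a mark-up on marginal cost can leave the price below unit cost, which is why surveyed firms report marking up average cost instead.

```python
# Why the base of the mark-up matters (invented numbers).
c, F, q, mu = 5.0, 1000.0, 200, 0.2   # marginal cost, fixed cost, output, 20% mark-up

avg_cost = c + F / q                  # unit cost including overheads = 10.0
print("mark-up on marginal cost:", (1 + mu) * c)         # 6.0 -- below unit cost!
print("mark-up on average cost: ", (1 + mu) * avg_cost)  # 12.0 -- covers overheads
```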

However, the specific examples are not important. It seems increasingly clear that economists’ methodology, while it is at least superficially capable of including everything from behavioural economics to culture to finance, severely limits their ability to engage with certain types of questions. If you want to understand the impact of a small labour market reform, or how auctions work, or design a new market, existing economic theory (and econometrics) is the place to go. On the other hand, if you want to understand development, historical analysis has a lot more to offer than abstract theory. If you want to understand how firms work, you’re better off with survey evidence and case studies than with marginalism (in fairness, economists themselves have been moving some way in this direction with Industrial Organisation, although if you ask me oligopoly theory has many of the same problems as macro). And if you want to understand macroeconomics and finance, you have to abandon the obsession with individual agents and zoom out to look at the bigger picture. Otherwise you’ll just end up with an extremely narrow model that proves little except its own existence.

 


25 Comments

Yes, The Cambridge Capital Controversies Matter

I rarely (never) post based solely on a quick thought or quote, but this just struck me as too good not to highlight. It’s from a book called ‘Capital as Power’ by Jonathan Nitzan and Shimshon Bichler, which challenges both the neoclassical and Marxian conceptions of capital, and is freely available online. The passage in question pertains to the way neoclassical economics has dealt with the problems highlighted during the well-documented Cambridge Capital Controversies:

The first and most common solution has been to gloss the problem over – or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists – teachers and students – blissfully unaware of the whole debacle.

A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.

The second point is something I independently noticed: appealing to practicality when it suits the modeller, but insisting it doesn’t matter elsewhere. If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational, etc. etc. If we do that, pretty soon the assumptions all fall away and not much is left.
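
For what it’s worth, reswitching itself takes only a few lines to exhibit. The sketch below reproduces Samuelson’s famous 1966 example: technique A uses 7 units of labour two periods before output, technique B uses 2 units three periods before plus 6 units one period before.

```python
# Samuelson's (1966) reswitching example: cost of one unit of
# output under each technique, as a function of the interest rate r.
#   A: 7 units of labour applied two periods before output
#   B: 2 units three periods before + 6 units one period before

for r in [0.25, 0.50, 0.75, 1.00, 1.25]:
    x = 1 + r
    cost_A = 7 * x**2
    cost_B = 2 * x**3 + 6 * x
    cheaper = "A" if cost_A < cost_B else "B" if cost_B < cost_A else "tie"
    print(f"r={r:.2f}  A={cost_A:6.2f}  B={cost_B:6.2f}  cheaper: {cheaper}")
```

Technique A is cheapest at low interest rates, B takes over in the middle, and A returns at high rates, so ‘capital intensity’ is not a well-behaved function of the interest rate.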

However, it’s the authors’ third point that really hits home:

The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. The latter models try to describe – conceptually, that is – every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.

General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention. Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.

In essence, neoclassical economics dealt with its inability to model capital by…eschewing any analysis of capital. However, the theoretical importance of capital for understanding capitalism (duh) means that this has turned neoclassical ‘theory’ into a highly inadequate tool for doing what theory is supposed to do, which is to further our understanding.

Apparently, if you keep evading logical, methodological and empirical problems, it catches up with you! Who knew?


63 Comments

18 Signs Economists Haven’t the Foggiest

I’d like to thank Chris Auld for giving me a format for outlining the major ways in which economists can be completely out of touch with their public image, with how they should do “science”, and with why their discipline is so ripe for criticism (criticism most of them are unaware of). So, here are 18 common failings I encounter time and time again in my discussions with mainstream economists:

1. They defer to the idea that “all models are simplifications” as if this somehow creates a fireguard against any criticism of methodology, internal inconsistency or empirical relevance.

2. They argue that the financial crisis is irrelevant to their discipline (bonus: also that predicting such events is impossible).

3. They think that behavioural, new institutional and even ‘Keynesian’ economics show the discipline is pluralistic, not neoclassical.

4. They think that the fact most economic papers are “empirical” shows economists are engaging in the scientific method.

5. They think ‘neoclassical economics’ doesn’t exist and is just a swear word used by their opponents.

6. When pushed, they collapse their theories and assumptions into ridiculously weak, virtually unfalsifiable claims (such as revealed preference, the Efficient Markets Hypothesis, or rationality).

7. They dismiss ideas from the past or comprehensive study of previous thinkers and texts as “not science”.

8. They think positive and normative economics are 100% separable, and their discipline is “value free“.

9. They simply cannot think of any other approach to ‘economics’ than theirs.

10. They believe in an erroneous history that sits well with their pet theories, such as the myths of barter and free trade.

11. They think that microfoundations are a necessary and sufficient modelling technique for dealing with the Lucas Critique.

12. They think economics is separable from politics, and that the political role and application of economic ideas in the real world is irrelevant for academic discussion (examples: Friedman and Pinochet, central bank independence).

13. They think their discipline is going through a calm, fruitful period (based on their self-absorbed bubble).

14. They think that endorsing cap & trade or carbon taxes is “dealing with the environment”.

15. They think making an unrealistic model consistent with one or two observed phenomena makes it sound or worthwhile (DSGE and other models are characterised by this “frictions” approach).

16. They think their discipline is an adequate, even superior, method for analysing problems in other social sciences such as politics, history and sociology.

17. They think that the world behaves as if their assumptions are true (or close enough).

18. They think that their discipline’s use of mathematics shows that it is “rigorous” and scientific.

Every link above that is not written by an economist is recommended. Furthermore, here are some related recommendations: seven principles for arguing with economists; my FAQ for mainstream economists; I Could Be Arguing In My Spare Time (footnotes!); What’s Wrong With Economics? Also try both mine and Matthijs Krul’s posts on how not to criticise neoclassical economics. As I say to Auld in the comments, I actually agree with some of his points about the mistakes critics make. But I think these critics are still criticising economics for good reasons, and that economists need to improve on the above if they want anyone other than each other to continue taking them seriously.

PS If you think I haven’t backed up any of my claims about what economists say, try cross referencing, as some of the links fall into more than one trap. Also follow through to who I’m criticising in the links to my previous posts. And no, I don’t think all economists believe everything here. However, I do think many economists believe some combination of these things.

Addendum: I have received predictable complaints that my examples are straw men, or at least uncommon. Obviously I provided links for each specific claim – if you’d like to charge that said link is not relevant, please explain why, and if you want more, I’m happy to provide them. However, my general claim is simply that a given article trying to expound or defend mainstream economics will commit a handful of these errors, perhaps excluding the more specific ones such as history or carbon taxes. Here are some examples to show how pervasive this mindset is:

Auld’s original article commits 2, 3, 4, 5 & 12.

This recent, popular defense of economics as a science in the NYT commits 2, 4, 8 & 13 (NB: I forgot “makes annoying and inappropriate comparisons to other sciences”, although both sides do this).

Greg Mankiw’s response to the econ101 walkout commits 8, 9, 12 & 13.

This recent ‘critique’ of Debunking Economics commits 9, 11, 15 (though, to its credit, it avoids 2).

Stephen Williamson manages 2, 6, 7, 8, 9, 11, 12, 13, 15 & 16 in his reviews of John Quiggin’s Zombie Economics (in fact, Williamson is a fantastic source of this stuff in general).

Paul Krugman committed 1, 7, 9 & 15 in his debate with Steve Keen.

Dani Rodrik, who is probably the most reasonable mainstream economist in the world, committed 3, 4, 13 & 15 in his discussion of economics.

and so on…

(Note that, in the interest of fairness, I have left out the most ridiculous things I’ve seen since the crisis.)


152 Comments

How Economics Sees Reality

Something has been bothering me about the way evidence is (sometimes) used in economics and econometrics: theories are assumed throughout the interpretation of the data. The result is that it’s hard to end up questioning the model being used.

Let me give some examples. The delightful fellas at econjobrumours once disputed my argument that supply curves are flat or slope downward by noting that, yes, Virginia, in conditions where firms have market power (high demand, drought pricing) prices tend to go up. Apparently this “simple, empirical point” suffices to refute the idea that supply curves do anything but slope upward. But this is not true. After all, “supply curves slope downward/upward/wiggle around all over the place” is not an empirical statement. It is an interpretation of empirical evidence that also hinges on the relevance of the theoretical concept of the supply curve itself. In fact, the evidence, taken as a whole, actually suggests that the demand-supply framework is at best incomplete.

This is because we have two major pieces of evidence on this matter: higher demand/more market power increases price, and firms face constant or increasing returns to scale. These are contradictory when interpreted within the demand-supply framework, as they imply that the supply curve slopes in different directions. However, if we used a different model – say, one with a third term for ‘market power’, or a Kaleckian cost-plus model, where the mark-up is a function of the “degree of monopoly” – that would no longer be the case. The rising supply curve rests on the idea that increasing prices reflect increasing costs, and therefore cannot incorporate these possibilities.
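
A sketch of the alternative being gestured at here (the functional form and numbers are mine, not Kalecki’s): price is a mark-up over roughly constant unit costs, with the mark-up driven by the degree of monopoly, so prices rise with market power even though the cost side is flat.

```python
# Cost-plus pricing in Kalecki's spirit (illustrative functional
# form): constant unit costs, with the mark-up driven by the
# 'degree of monopoly' rather than by rising marginal cost.

def price(unit_cost, degree_of_monopoly):
    markup = 0.1 + 0.5 * degree_of_monopoly   # assumed mapping
    return (1 + markup) * unit_cost

for power in [0.1, 0.4, 0.8]:      # e.g. drought pricing = more power
    print(f"power={power:.1f}  price={price(10.0, power):.2f}")
```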

Similarly, many empirical econometric papers use the neoclassical production function (a recent one here), which states that output is derived from labour and capital, plus a few parameters attached to the variables, as a way to interpret the data. However, this again requires that we assume capital and labour, and the parameters attached to them, are meaningful, and that the data reflect their properties rather than something else. For example, the volume of labour employed moving a certain way only implies something about the ‘elasticity of substitution’ (the rate at which firms substitute between labour and capital) if you assume that there is an elasticity of substitution. However, the real-world ‘lumpiness’ of production may mean this is not the case, at least not in the smooth, differentiable way assumed by neoclassical theory.

Assuming such concepts when looking at data means that economics can become a game of ‘label the residual‘, despite the various problems associated with the variables, concepts and parameters used. Indeed, Anwar Shaikh once pointed out that the seeming consistency between the Cobb-Douglas production function and the data was essentially tautological, and so using the function to interpret any data, even the word “humbug” on a graph, would seem to confirm the propositions of the theory, simply because they follow directly from the way it is set up.
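
Shaikh’s algebra is easy to verify numerically. If measured output is just the accounting identity Y = wL + rK and factor shares happen to be constant, then a Cobb-Douglas ‘production function’ with a shift term fits any series whatsoever, even deliberately meaningless ones:

```python
# Shaikh's 'humbug' point: with constant factor shares, the income
# identity Y = wL + rK alone implies an exact Cobb-Douglas fit,
# whatever the data. The series below are random nonsense.
import numpy as np

rng = np.random.default_rng(42)
T, a = 40, 0.75                      # periods; constant labour share

Y = 100 + 50 * rng.random(T)         # arbitrary 'output'
L = 10 + 5 * rng.random(T)           # arbitrary 'labour'
K = 200 + 80 * rng.random(T)         # arbitrary 'capital'

w = a * Y / L                        # wages: labour's constant share
r = (1 - a) * Y / K                  # profits: capital's constant share

# The identity implies Y = B * A(t) * L**a * K**(1-a) exactly, where
# A(t) = w**a * r**(1-a) plays the role of 'technical progress'.
B = a**-a * (1 - a)**-(1 - a)
A = w**a * r**(1 - a)
print(np.allclose(Y, B * A * L**a * K**(1 - a)))   # True: a perfect 'fit'
```

The perfect ‘fit’ confirms nothing about technology; it merely restates the income identity.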

Joan Robinson made this basic point, albeit more strongly, concerning utility functions: we assume people are optimising utility, then fit whatever behaviour we observe into said utility function. In other words, we risk making the entire exercise “impregnably circular” (unless we extract some falsifiable propositions from it, that is). Frances Woolley’s admittedly self-indulgent playing around with utility functions and the concept of paternalism seems to demonstrate this point nicely.

Now, this problem is, to a certain extent, observed in all sciences – we must assume ‘mass’ is a meaningful concept to use Newton’s Laws, and so forth. However, in economics, properties are much harder to pin down, and so it seems to me that we must be more careful when making statements about them. Plus, in the murky world of statistics, we can lose sight of the fact that we are merely making tautological statements or running into problems of causality.

The economist might now ask how we would even begin to interpret the medley of data at our disposal without theory. Well, to make another tired science analogy, the advancement of science has often resulted not from superior ‘predictions’, but from identifying a closer representation of how the world works: the go-to example of this is Ptolemaic astronomy, which made predictions superior to its rival’s but was still wrong. My answer is therefore the same as it has always been: economists need to make better use of case studies and experiments. If we find out what’s actually going on underneath the data, we can use this to establish causal connections before interpreting it. This way, we can avoid problems of circularity, tautologies, and of trapping ourselves within a particular model.


38 Comments

On Pieria: What’s Wrong With Economics?

My latest article, trying to sum up the problems with economists’ approach – in 3 words, “it’s too narrow”:

The question of whether mainstream (neoclassical) economics as a discipline is fit for purpose is well-trodden ground…

….[I think] economic theory is flawed, not necessarily because it is simply ‘wrong’, but because it is based on quite a rigid core framework that can restrict economists and blind them to certain problems. In my opinion, neoclassical economics has useful insights and appropriate applications, but it is not the only worthwhile framework out there, and economists’ toolkit is massively incomplete as long as they shy away from alternative economic theories, as well as relevant political and moral questions.

As Yanis Varoufakis noted, it is strange how remarkably resilient the neoclassical framework is in the presence of many coherent alternatives and a large number of empirical/logical problems. However, I actually think this is quite normal in science – after all, it is done by humans, not robots. Hopefully things will change eventually and economics will become more comprehensive/pluralistic, as I call for in the article.

It’s good to sum up my overall position, but I think I’ll probably lean more (though not entirely) towards positive approaches from now on, some of which I mention in the article. Though I strongly disagree with Jonathan Catalan that heterodox economists are “more often wrong than right”, I agree with his sentiment that it’s probably better to “sell [one's] ideas” than to endlessly repeat oneself about methodology and so forth. So maybe expect a shift from general criticisms of economics to more positive and targeted approaches!

PS Having said that, my next post definitely doesn’t fit this description.


26 Comments

Milton Friedman’s Distortions, Part II

I have previously noted that Milton Friedman’s debating techniques and attitude towards facts were, erm, slippery to say the least. However, I focused primarily on his public face, where it seemed he could merely have adopted a more accessible narrative to get his point across, losing some nuance along the way. It could be argued that most public figures are guilty of this, and that it didn’t reflect on Friedman’s stature as an academic.

Sadly, this is probably not the case. Commenter Jan quotes Edward S. Herman on Friedman’s academic record, giving us reason to believe that Friedman’s approach extended through to his academic work. It appears the man was prepared to conjure ‘facts’ from nowhere, massage data and simply lie to support his theories. With thanks to Jan, I’ll channel some of what Herman says, using it to discuss Friedman’s major academic contributions in general, and how his record seems to be rife with him torturing the facts to fit his theories.

The Permanent Income Hypothesis

The Permanent Income Hypothesis (PIH) states that a consumer’s consumption is a function not only of their current income, but of their lifetime income. Since people tend to earn more as they get older, this means that younger generations will tend to borrow and older generations will tend to save. The PIH has been a key tenet of economic theory since its inception, and it was among the contributions cited when Friedman won the Nobel Memorial Prize in 1976.

When discussing Friedman’s in-depth empirical treatment of the PIH, Paul Diesing (as quoted by Herman) found it wanting. He listed six ways Friedman manipulated the data:

1. If raw or adjusted data are consistent with PI, he reports them as confirmation of PI
2. If the fit with expectations is moderate, he exaggerates the fit.
3. If particular data points or groups differ from the predicted regression, he invents ad hoc explanations for the divergence.
4. If a whole set of data disagree with predictions, adjust them until they do agree.
5. If no plausible adjustment suggests itself, reject the data as unreliable.
6. If data adjustment or rejection are not feasible, express puzzlement. ‘I have not been able to construct any plausible explanation for the discrepancy’…

It does not surprise me that Friedman had to treat the data this way to get the results he wanted. For the interesting thing about the PIH is that it displaced a model that was far more plausible and empirically relevant: the Relative Income Hypothesis (RIH). The RIH argues that individual consumption patterns are in large part determined by the consumption patterns of those around them, so people consume to “keep up with the Joneses“. It was developed by James Duesenberry in his 1949 book Income, Saving and the Theory of Consumer Behaviour.

In his discussion of this apparent scientific regression, Robert Frank lists 3 major ‘stylised facts’ any theory of consumption must be consistent with:

  • The rich save at higher rates than the poor;
  • National savings rates remain roughly constant as income grows;
  • National consumption is more stable than national income over short periods.

The PIH can easily explain the last two phenomena, as it posits that consumption (and therefore saving) depends on lifetime rather than current income. However, this same proposition required Friedman to dismiss the first phenomenon outright. He therefore suggested that the high savings rates of the rich resulted from windfall gains rather than income. A neat hypothesis, but unsubstantiated by the evidence: savings rates also rise with increases in lifetime income.

Conversely, Duesenberry’s theory is well equipped to explain all three of the listed phenomena. The RIH implies that the poor consume a higher percentage of their income to keep up with the consumption of the rich. As society as a whole becomes richer, this phenomenon will not disappear, as the poor will still be relatively poor. Thus, the apparently contradictory first two points in Frank’s list are reconciled (the sketch below makes this concrete). It is also worth noting that the third point, that consumption is less volatile than income over short periods, can be explained by the RIH because people are used to their current standard of living, so they will sustain it even through hard economic times.
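
A toy version of the reconciliation (the functional form and numbers are mine, not Duesenberry’s): let the saving rate depend on income relative to the social average rather than on its absolute level.

```python
# A toy relative-income consumption function: the saving rate
# depends on y_i / mean(y), not on the level of y_i. The form and
# numbers are invented for illustration; the relatively poor may
# dissave to keep up with the Joneses.
import numpy as np

def saving_rate(y):
    return 0.05 + 0.10 * np.log(y / y.mean())   # richer than average -> save more

y = np.array([10_000.0, 30_000.0, 100_000.0])
for scale in [1, 2, 4]:                  # the whole society gets richer
    s = saving_rate(scale * y)
    aggregate = (s * scale * y).sum() / (scale * y).sum()
    print(f"x{scale}: rates={np.round(s, 3)}  aggregate={aggregate:.3f}")
```

In the cross-section the rich save at higher rates (fact 1), yet scaling everyone’s income up leaves both the individual and aggregate saving rates untouched (fact 2).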

So why, despite fitting the facts without manipulating them, did the RIH fall out of favour? Presumably, it made economists (particularly those of Friedman’s ilk) uncomfortable because of its implications that much consumption was unnecessary and wasteful, that redistributing income might spur consumption and therefore growth, and because it did not rest on innate individual preferences but on the behavior of society as a whole. The idea of a consumer rationally making inter-temporal consumption decisions in a vacuum was just, well, it was real economics. The result is that Friedman’s poorly supported hypothesis shot to fame, while Duesenberry’s well supported hypothesis was forgotten.

The NAIRU

NAIRU stands for ‘Non-Accelerating Inflation Rate of Unemployment’, and it implies that below a certain level of unemployment, workers will be able to demand wage rises large enough to create a wage-price spiral. Hence, policy should aim for a ‘natural’ rate of unemployment, estimated empirically by economists, in order to prevent the possibility of 1970s-style stagflation.
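
Formally, the claim is usually written as an ‘accelerationist’ Phillips curve, something like pi(t) = pi(t-1) + a*(u* - u(t)). The sketch below (coefficients invented) shows the ratchet the theory predicts when unemployment is held below u*:

```python
# The accelerationist logic behind the NAIRU (illustrative numbers):
# inflation builds on last period's inflation whenever unemployment
# is held below the 'natural' rate u_star.

u_star, a = 0.06, 0.5        # assumed natural rate and slope
pi = 0.02                    # initial inflation

for year in range(1, 9):
    u = 0.04                 # policy holds unemployment below u_star
    pi = pi + a * (u_star - u)
    print(f"year {year}: inflation = {pi:.3f}")
```

Galbraith’s point, discussed below, is that inflation in the data simply does not ratchet like this at any stable value of u*.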

My first problem with the NAIRU is the way it is commonly seen as ‘overthrowing’ the naive post-war Keynesians who insisted on a simplistic trade-off between inflation and unemployment. As I have previously noted, the originator of the curve, William Phillips, did not believe this; nor did Keynes; nor were the post-war Keynesians unaware of the possibility of a wage-price spiral. Furthermore, the NAIRU idea was really just a formalisation of a long-standing conservative notion that we should keep some percentage of people unemployed for some reason. In this sense, the launch of the NAIRU was more a counter-revolution of old ideas than a novel approach.

However, the real issue is whether the NAIRU is empirically relevant, and it seems it is not. First, as Jamie Galbraith has detailed, there is little evidence that unemployment has an accelerating effect on inflation at any level. Furthermore, empirical estimates of the NAIRU seem to move around so much, depending on the current rate of unemployment, that the idea has little in the way of predictive implications. The data simply do not generate a picture consistent with a clear value of unemployment at which inflation starts to accelerate: we are far better off pursuing full employment while keeping numerous inflation-controlling mechanisms in place.

“OK” you say. “Perhaps the NAIRU does not exist. But what about this was disingenuous on Friedman’s part?” Well, the notion that the interplay between workers and employers is a key determinant of the rate of inflation flat out contradicts Friedman’s oft-repeated exclamation that “inflation is always and everywhere a monetary phenomenon”. If inflation is purely monetary, then the level of unemployment should not affect it at all! However, for whatever reason, Friedman was prepared to endorse both the NAIRU and his position on inflation simultaneously.

The Great Depression

Friedman’s Great Depression narrative was probably his biggest attempt to rehabilitate capitalism in a period where unregulated markets had fallen out of favour. He blamed the crash on the Federal Reserve for contracting the money supply in the face of a failing economy. This always struck me as strange – he was, in effect, arguing that the Great Depression was the fault of ‘the government’ because they failed to intervene sufficiently. This implies that the real source of the Great Depression came from somewhere other than the Federal Reserve, and therefore its sin was more one of omission than commission. Even if we accept the idea that the Great Depression was worsened by the action (or inaction) of central banks, Friedman is being disingenuous when he says that the Great Depression was “produced” by the government.

However, even Friedman’s own figures fail to support his hypothesis: according to Nicholas Kaldor, the figures show that the stock of high-powered (base) money increased by 10% between 1929 and 1931. Peter Temin came to a similar conclusion: using the same time period as Kaldor, real money balances increased by 1-18% depending on which metric you use, and the overall money supply increased by 5%. Though base money contracted by about 2% at the onset of the crash, a contraction this small is a relatively common occurrence and not generally associated with depressions.

There is then the issue of causality. In many ways Friedman assumed what he wanted to prove: that the money supply is controlled by the central bank. Yet there are good reasons to doubt this, and to believe that movements in income instead create a decrease in the money supply, which would absolve the central bank of responsibility. When economists such as Nicholas Kaldor pointed out this possibility, Friedman reached a new level of disingenuousness (the first paragraph is Friedman’s comment; Kaldor responds in the second):

The reader can judge the weight of the casual empirical evidence for Britain since the second world war that Professor Kaldor offers in rebuttal by asking himself how Professor Kaldor would explain the existence of essentially the same relation between money and income for the U.K. after the second world war as before the first world war, for the U.K. as for the U.S., Yugoslavia, Greece, Israel, India, Japan, Korea, Chile and Brazil?

The simple answer to this is that Friedman’s assertions lack any factual foundation whatsoever. They have no basis in fact, and he seems to me to have invented them on the spur of the moment. I had the relevant figures extracted from the IMF statistics for 1958 and for each of the years 1968 to 1979, for every country mentioned by Friedman and a few others besides… Though there are some countries (among which the US is conspicuous) where in terms of the M3 the ratio has been fairly stable over the period of observation, this was not true of the majority of others.

Bottom line? Friedman had to assume his conclusion – that the money supply was under the control of the Federal Reserve – in order to reach it. Yet, based on his own numbers, his conclusion was still false, as the money supply increased over the ‘crash’ period from 1929-1931. When Friedman was pushed on these matters, he simply made things up. However, lying hasn’t helped him escape the fact that his theory of the Great Depression is false.

Conclusion

Milton Friedman’s academic contributions do not stand up to scrutiny. Friedman seemed to be prepared to conjure up neat, ad hoc explanations for certain phenomena, simply asserting facts and leaving it for others to see if they were true or not, which they usually weren’t. He selectively interpreted his own data, exaggerating or plain misrepresenting it in order to make his point. Furthermore, his methods should be unsurprising given his incoherent methodology, which allowed him to dodge empirical evidence on the grounds of an ill-defined ‘predictive success’, something which sadly never materialised. In almost any other discipline, Friedman’s attempts at ‘science’ would have been laughed out of the room. Serious economists should distance themselves from both him and his contributions.


69 Comments
