The Crisis & Economics, Part 5: “Shhh! We’re Working On It”

This is part 5 in my series on how the financial crisis is relevant for economics (parts 1, 2, 3 & 4 are here). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post explores the possibility that macroeconomics, even if it failed before the crisis, has responded to its critics and is moving forward.

#5: “We got this one wrong, sure, but we’ve made (or are making) progress in macroeconomics, so there’s no need for a fundamental rethink.”

Many macroeconomists deserve credit for their mea culpa and subsequent refocus following the financial crisis. Nevertheless, the nature of the rethink, particularly the unwillingness to abandon certain modelling techniques and ideas, leads me to question whether progress can be made without a more fundamental upheaval. To see why, it will help to have a brief overview of how macro models work.

In macroeconomic models, the optimisation of agents means that economic outcomes such as prices, quantities, wages and rents adjust to the conditions imposed by input parameters such as preferences, technology and demographics. A consequence of this is that sustained inefficiency, unemployment and other sub-optimal outcomes usually occur only when something ‘gets in the way’ of this adjustment. Hence economists introduce ad hoc modifications such as sticky prices, shocks and transaction costs to generate sub-optimal behaviour: for example, if a firm’s cost of changing prices exceeds the benefit of doing so, prices will not be changed and the outcome will not be Pareto efficient. Since there are countless ways in which the world ‘deviates’ from the perfectly competitive baseline, it’s mathematically troublesome (or impossible) to include every possible friction. The result is that macroeconomists tend to decide which frictions are important based on real-world experience: since the crisis, the focus has been on finance. On the surface this sounds fine – who isn’t for informing our models with experience? However, it is my contention that this approach does not offer us any more understanding than experience alone would.
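The sticky-price logic above can be made concrete with a toy ‘menu cost’ calculation (all numbers and the quadratic loss approximation are my own illustration, not taken from any particular model):

```python
# Minimal 'menu cost' sketch: the firm compares the profit lost by keeping
# its current price (approximated as a quadratic penalty on the gap from
# the optimal price) against a fixed cost of re-pricing, and only adjusts
# when the loss from staying put exceeds that cost.

def adjust_price(current_price, optimal_price, menu_cost, profit_slope=1.0):
    """Return the price the firm actually sets."""
    gap = optimal_price - current_price
    loss_from_staying = profit_slope * gap ** 2
    if loss_from_staying > menu_cost:
        return optimal_price          # worth paying the menu cost
    return current_price              # price stays 'sticky'

# A small shock leaves the price unchanged; a large one triggers adjustment.
small_shock = adjust_price(10.0, 10.1, menu_cost=0.05)   # stays at 10.0
large_shock = adjust_price(10.0, 12.0, menu_cost=0.05)   # jumps to 12.0
```

The sub-optimality is visible immediately: after the small shock the posted price is not the optimal one, so the outcome is not Pareto efficient, exactly as described above.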

Perhaps an analogy will illustrate this better. I was once walking past a field of cows as it began to rain, and I noticed some of them start to sit down. It occurred to me that there was no use in them doing this after the storm had started; they are supposed to give us adequate warning by sitting down before it happens. Sitting down during a storm just tells us what we already know. Similarly, although the models used by economists and policymakers did not predict and could not account for the crisis before it happened, economists have since built models that try to do so. They generally do this by attributing the crisis to frictions that revealed themselves to be important during the crisis. Ex post, a friction can always be found to make models behave a certain way, but the models do not make identifying the source of problems before they happen any easier, and they don’t add much afterwards either – we certainly didn’t need economists to tell us finance was important after 2008. In other words, when a storm comes, macroeconomists promptly sit down and declare that they’ve solved the problem of understanding storms. It would be an exaggeration to call this approach tautological, but it’s certainly not far off.

There is also the open question of whether understanding the impact of a ‘friction’ relative to a perfectly competitive baseline entails understanding its impact in the real world. As theorists from Joe Stiglitz to Yanis Varoufakis have argued, neoclassical economics is trapped in a permanent fight against indeterminacy: the quest to understand things relative to a perfectly competitive, microfounded baseline leads to aggregation problems and intractable complexities that, if included, result in “anything goes” conclusions. To put it another way, the real world is so complex and full of frictions that whatever mechanics would be driving the perfectly competitive model are swamped. The actions of individual agents are so intertwined that their aggregate behaviour cannot be predicted from each of their ‘objective functions’. Consequently, our knowledge of the real world must be informed either by models which use different methodologies or, more crucially, by historical experience.

Finally, the ad hoc approach also contradicts another key aspect of contemporary macroeconomics: microfoundations. The typical justification for these is that, to use the words of the ECB, they impose “theoretical discipline” and are “less subject to the Lucas critique” than a simple VAR, Old Keynesian model or another more aggregative framework. Yet even if we take those propositions to be true, the modifications and frictions that are so crucial to making the models more realistic are often not microfounded, sometimes taking the form of entirely arbitrary, exogenous constraints. Even worse is when the mechanism is profoundly unrealistic, such as prices being sticky because firms are randomly unable to change them for some reason. In other words, macroeconomics starts by sacrificing realism in the name of rigour, but reality forces it in the opposite direction, and the end result is that it has neither.

Macroeconomists may well defend their approach as just a ‘storytelling’ approach, from which they can draw lessons but which isn’t meant to hold in the same manner as engineering theory. Perhaps this is defensible in itself, but (a) personally, I’d hope for better and (b) in practice, this seems to mean each economist can pick and choose whichever story they want to tell based on their prior political beliefs. If macroeconomists are content conversing in mathematical fables, they should keep these conversations to themselves and refrain from forecasting or using them to inform policy. Until then, I’ll rely on macroeconomic frameworks which are less mathematically ‘sophisticated’, but which generate ex ante predictions that cover a wide range of observations, and which do not rely on the invocation of special frictions to explain persistent deviations from these predictions.


Pieria: The Ethics of Inequality

New post on Pieria, discussing why inequality could be ethically ‘wrong’:

What is inequality?

Inequality is a situation where certain people have access to things – places, goods, services – which others do not. Historically, inequalities have often been enforced by fiat, such as aristocracies and guilds, or based on group characteristics, such as apartheid or slavery. In capitalist societies, we typically use property rights to restrict people’s access to resources. A poor man who walks into a store and tries to take something without paying will be prevented from doing so by security or the police, while a rich man who pays will not. The same applies to private schools, expensive social clubs or fine works of art. Unless you have a sufficient number of vouchers (money), you are legally and socially restricted from access to the overwhelming majority of resources in society.

Justifying inequality therefore entails arguing why some deserve more of these vouchers, and hence greater access to places, to goods and services, to social opportunities, than others. Defenders of inequality typically rely on one of 3 ethical arguments: just deserts, voluntarism, and grow the pie. I will consider each of these arguments in turn.

As I said on Twitter, the article was definitely influenced by Matt Bruenig, but for balance here’s me saying similar things quite a while ago. The point is that contemporary debate often has it backwards: we are asked why exactly we should reduce inequality, as if inequality were some sort of natural baseline. But if you accept that people are born equal (which most do, even if they don’t like to say it out loud), then the question is why some are more restricted from pieces of the world than others. Defenders of inequality sometimes proceed as if the three ethical arguments above override any other concerns.


The Illusion of Mathematical Certainty

Nate Silver’s questionable foray into predicting World Cup results got me thinking about the limitations of maths in economics (and the social sciences in general). I generally stay out of this discussion because it’s completely overdone, but I’d like to rebut a popular defence of mathematics in economics that I don’t often see challenged. It goes something like this:

Everyone has assumptions implicit in the way they view the world. Mathematics allows economists to state our assumptions clearly and make sure our conclusions follow from our premises so we can avoid fuzzy thinking.

I do not believe this argument stands on its own terms. A fuzzy concept does not become any less fuzzy when you attach an algebraic label to it and stick it into an equation with other fuzzy concepts to which you’ve attached algebraic labels (a commenter on Noah Smith’s blog provided a great example of this by mathematising Freud’s Oedipus complex and pointing out that it was still nonsense). Similarly, absurd assumptions do not become any less absurd when they are stated clearly and transparently, and especially not when any actual criticism of these assumptions is brushed off on the grounds that “all models are simplifications”.

Furthermore, I’m not convinced that using mathematics actually brings implicit assumptions out into the open. I can’t count the number of times I’ve seen people invoke demand-supply without understanding that it is built on the assumption of perfect competition (and refusing to acknowledge this point when challenged). The social world is inescapably complex, so there is an overwhelming variety of assumptions built into any type of model, theory or argument that tries to understand it. These assumptions generally remain unstated until somebody who is thinking about an issue – with or without mathematics – comes along and points out their importance.

For example, consider Michael Sandel’s point that economic theory assumes the value or characteristics of commodities are independent of their price and sale, and once you realise this is unrealistic (for example with sex), you come to different conclusions about markets. Or Robert Prasch’s point that economic theory assumes there is a price at which all commodities will be preferred to one another, which implies that at some price you’d substitute beer for your dying sister’s healthcare*. Or William Lazonick’s point that economic theory presumes labour productivity to be innate and transferable, whereas many organisations these days benefit from moulding their employees’ skills to be organisation specific. I could go on, but the point is that economic theory remains full of implicit assumptions. Understanding and modifying these is a neverending battle that mathematics does not come close to solving.

Let me stress that I am not arguing against the use of mathematics; I’m arguing against using gratuitous, bad mathematics as a substitute for interesting and relevant thinking. If we wish to use mathematics properly, it is not enough to express properties algebraically; we have to define the units in which these properties are measured. No matter how logical mathematics makes your theory appear, if the units of its key parameters are poorly defined, its equations will not balance dimensionally and the theory will be logical nonsense. Furthermore, it has to be demonstrated that the maths is used to come to new, falsifiable conclusions, rather than to rationalise things we already know. Finally, it should never be presumed that stating a theory mathematically somehow guards that theory against fuzzy thinking, poor logic or unstated assumptions. There is no reason to believe it is a priori desirable to use mathematics to state a theory or explore an issue, as some economists seem to think.
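The point about units can be made concrete with a toy dimensional check, in the spirit of dimensional analysis in physics (a hand-rolled sketch for illustration, not a real units library):

```python
# Toy dimensional-analysis sketch: a quantity carries its units, and
# addition is only defined when units match. An equation that fails this
# check is ill-formed no matter how 'mathematical' it looks on the page.

class Quantity:
    def __init__(self, value, units):
        self.value = value
        self.units = units                  # e.g. {'dollar': 1, 'year': -1}

    def __add__(self, other):
        if self.units != other.units:
            raise TypeError(f"cannot add {self.units} to {other.units}")
        return Quantity(self.value + other.value, self.units)

income = Quantity(50_000, {'dollar': 1, 'year': -1})   # a flow: $/year
wealth = Quantity(200_000, {'dollar': 1})              # a stock: $

total_income = income + Quantity(5_000, {'dollar': 1, 'year': -1})  # fine
# income + wealth raises TypeError: a stock-flow confusion that prose
# (and carelessly-labelled algebra) can easily hide.
```

The classic economic example is exactly this stock-flow confusion: adding a flow like income to a stock like wealth produces a number, but not a meaningful one, and only explicit units catch the error.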

*This has a name in economics: the axiom of gross substitution. However, it often goes unstated or at least underexplored: for example, these two popular microeconomics texts do not mention it at all.


The Crisis & Economics, Part 4: Masters of the Universe?

This is part 4 in my series on economics and the crisis, which asks whether economics is really responsible for policy, and if so, how these policies may have contributed to the financial crisis. Here are parts 1, 2 & 3.

#4: “Mainstream economics cannot be blamed for politicians inflating housing bubbles/pursuing austerity/deregulating the financial sector; our models generally go against this. Clearly, we do not have that much influence over policy.”

This defence really raises two questions. The first is whether or not economic theory has had a major influence on policy. The second is whether or not this influence, if it exists, is culpable in creating the financial crisis.

The first is, in my opinion, easily answered in the affirmative. While it’s entirely understandable that the majority of academic economists would scoff at the idea that they affect policy, this doesn’t have to be the case for economic theory itself to hold sway among governments. After all, economics graduates are highly sought after and employed in policymaking positions. Famous economists lunch with the president; textbooks and macroeconomic papers are full of policy discussions; prize-winning economists such as Bob Shiller acknowledge that a “problem with economics is that it is necessarily focused on policy, rather than discovery of fundamentals.” It’s hard to imagine powerful institutions such as central banks, the World Bank or the IMF functioning with advice from anyone but economists, and government organisations are even set up based on new ideas coming out of economics. Economics is the language in which the media discuss policy: demand, stimulus, markets, etcetera. I could go on.

However, as economists like to remind us, there’s no reason to believe that advice based on mainstream economic theory should have led to the types of ‘free market’ policies typically implicated in the financial crisis and its aftermath. Even a basic economics education will leave you with an awareness of things like information asymmetry, moral hazard and externalities, and few economists support wanton deregulation of the financial sector. Modern macroeconomics is loosely pro-stimulus, not pro-austerity. So what’s going on?

First, it should be noted that not only ‘free market’ thinking was implicated in the crisis. Central banks around the world used inflation targeting, based on the New Keynesian idea that this would be sufficient to achieve macroeconomic stability, which blinded them to problems brewing in the financial sector. What’s more, the approach to regulation favoured by economics was, not atypically, quite narrow and didn’t encourage systemic thinking. For example, I have previously written about Value at Risk (VaR) regulation, which forces firms to sell off assets when markets become volatile in order to stay within their risk limits. While this looks prudent from the perspective of an individual firm, it worsens systemic risk because the simultaneous sell-offs themselves increase volatility. Overall, the reductionist nature of economic theory tended to blind policymakers to systemic problems and made them focus on the wrong variables – things they might not have done had they been familiar with more holistic viewpoints.
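The VaR feedback loop described above can be sketched numerically (this is an illustrative toy, not a calibrated model; the parameter values and the linear price-impact assumption are mine):

```python
# Sketch of the VaR spiral: a trader must keep Value-at-Risk below a fixed
# limit. VaR scales with volatility, so a volatility spike forces asset
# sales; here the sales feed back into volatility, so what is prudent for
# one firm amplifies risk for the system.

def var_spiral(position=100.0, vol=0.02, var_limit=3.0,
               z=2.33, impact=0.005, steps=10):
    """Each step: if position * z * vol exceeds the VaR limit, sell down
    to the largest compliant position; the fire sale raises volatility."""
    history = []
    for _ in range(steps):
        var = position * z * vol
        if var > var_limit:
            target = var_limit / (z * vol)   # largest compliant position
            sold = position - target
            position = target
            vol += impact * sold             # assumed linear price impact
        history.append((round(position, 2), round(vol, 4)))
    return history

path = var_spiral()   # positions shrink while volatility climbs, step by step
```

Running this, each round of forced selling raises volatility, which tightens the constraint further in the next round: the individually rational rule produces a collective downward spiral.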

Having said this, it’s clear that at the heart of the financial crisis were lax regulatory policies, justified by a belief in the self-stabilising power of financial markets. And while a majority of individual economists may not endorse such a view, theoretical frameworks or ‘ways of thinking‘ came out of economics which were used to justify this deregulation. Whether or not efficient markets, perfect competition, rational expectations and other theories which imply financial markets will run smoothly are endorsed by most economists, the fact that they are common knowledge in economics (and usually the benchmark for more complex analysis) is significant. As I’ve argued before, familiarity with economic theory lends itself to a pro-market view, even if a lot of modern work pushes the core framework away from this, and the nuances of such work are often lost in popular translation, as the elegance of the most Panglossian theories proves too tempting when economists speak to the public. Alternative theories which use different starting points for analysis, such as input-output matrices, sectoral balances or class struggle, would help to combat the deeply ingrained nature of the neoclassical theories.

This issue does not necessarily fit into a narrow ‘government versus market’ policy perspective. Instead, the point is that acknowledging different approaches in economic theory can give us a different way of thinking about policy, illuminating rather than obfuscating debates. A key complaint about economics graduates is that they have overly narrow, abstract tools, so the enemy is not so much any particular approach as it is one-sided thinking. Providing both economics students and professional economists with an awareness of different theories, as well as making economics more politically, historically and ethically engaged, would hopefully at least temper the zeal and enthusiasm with which pet policies are recommended, and partially dislodge whatever pedestal economics currently sits on as a rationale for policy.


Pieria: Perverting Piketty

I have a new post on Piketty on Pieria, pointing out potential problems interpreting his premises and propositions (sorry, the alliteration started organically):

I recently wrote about the numerous misconceptions over Thomas Piketty’s use and definition of capital in his book Capital in the 21st Century. Sadly, it seems there are a number of other common, equally important mischaracterisations of Piketty’s model floating around. Here I will consider 5 of the most widespread and show, using direct quotes from Piketty himself, why they are off the mark. The first 3 are simple errors of interpretation with regards to Piketty’s theoretical framework, while the latter 2 are problems with how people have responded to Piketty in general. Although the latter 2 are inevitably more subjective, they are still important for trying to understand and reframe the debate between Piketty and his critics.

Each point gives the common misinterpretation of Piketty’s work, and counters it. For example, one of the most important points (IMO) is this one:

2. ‘Fundamental laws’ of capitalism?

The claim: Piketty’s ‘fundamental laws of capitalism’ are not fundamental at all.

The reality: Although calling them ‘laws’ is misleading, at no point does Piketty claim that his laws are inviolable. They are instead tendencies (with the exception of the first law, which is just an accounting identity) which push capital’s share of income in a certain direction over time, but can be counteracted by any number of things, and only take hold over a long timespan.
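For reference, the two ‘laws’ can be sketched numerically. The first is the accounting identity α = r × β (capital’s share of income equals the rate of return times the capital/income ratio); the second says β tends towards s/g over the long run, given savings rate s and growth rate g. The figures below are round illustrative numbers, not Piketty’s data:

```python
# Minimal numerical sketch of Piketty's two 'laws', using the book's
# standard notation (r, beta, alpha, s, g); parameter values illustrative.

def first_law(r, beta):
    """First law (an accounting identity): alpha = r * beta."""
    return r * beta

def second_law_path(s=0.12, g=0.02, beta0=3.0, years=300):
    """Second law (a long-run tendency, not an identity): with capital
    accumulating at rate s and income growing at rate g, beta follows
    beta' = (beta + s) / (1 + g) and converges towards s/g."""
    beta = beta0
    for _ in range(years):
        beta = (beta + s) / (1 + g)
    return beta

alpha = first_law(r=0.05, beta=6.0)    # capital share of 30%
beta_long_run = second_law_path()      # slowly approaches s/g = 6.0
```

Note how the convergence takes centuries at realistic parameter values – which is precisely the point made above about the laws only taking hold over a long timespan, and being easily counteracted in the meantime.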

Hopefully this will be a useful resource for when people who haven’t read the book (or worse, have read it but clearly either rushed or lack reading comprehension) repeat silly canards about it.


The Crisis & Economics, Part 3: Econoracles

This is part 3 in my series on why and how the 2008 financial crisis is relevant to economics. The first instalment discussed why the good times during the boom are no excuse for the bad times during the bust. The second instalment discussed use of the Efficient Markets Hypothesis (EMH) to defend economists’ inability to forecast the movements of financial markets. This instalment discusses the more general proposition that crises are events whose prediction is outside the grasp of anyone, including economists.

#3: “Economists aren’t oracles. Just as seismologists don’t predict earthquakes and meteorologists don’t predict the weather, we can’t be expected to predict recessions.”

This argument initially sounds quite persuasive: the economy is complex, and the future inherently unknowable, so we shouldn’t expect economists to predict the future any better than we’d expect from other analysts of complex systems. However, the argument is actually a straw man of what critics mean when they say economists didn’t foresee the recent crisis. It confuses conditional predictions of the form “if you don’t do something about x, y might happen” with oracle-esque predictions of the form “y is going to happen in December 2003”. Nobody should have expected the details of the crisis – many of which were hidden – to be foreseen, much less a prediction of exactly which banks would fail and when. Instead, what is expected is for economists to have the key indicators right and know how to deal with them, to be alert to the possibility of crisis at all times – even in seemingly tranquil periods – and to have measures in place to cushion the blow should a crisis occur.

In fact, those who study earthquakes or hurricanes do ‘predict’ them in the above sense: they understand where they are most likely to occur (for example, near fault lines), and with roughly what frequency and magnitude. They also have an idea of how best to combat them: areas prone to earthquakes and hurricanes – funding permitting – have dwellings built in such a way that they can withstand such occurrences. They understand why disasters happen, and their models tell us why they cannot be predicted precisely. For example, it is common knowledge that weather forecasts get less accurate the further ahead they look, due to the sensitivity of the models to initial conditions – a point based on complex mathematics but communicated well by meteorologists (not to mention that weather forecasts are improving all the time).
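The sensitivity-to-initial-conditions point can be demonstrated with the logistic map, a textbook toy example of a chaotic system (it illustrates sensitive dependence in general; it is not a weather model):

```python
# Two trajectories of the chaotic logistic map x -> r*x*(1-x) with r = 4,
# started one part in a billion apart. They track each other closely for a
# while, then diverge completely: short-range forecasts are possible,
# long-range ones are not, even though the system is fully deterministic.

def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.4)
b = logistic_trajectory(0.4 + 1e-9)   # tiny measurement 'error' in x0
early_gap = abs(a[5] - b[5])          # still tiny after a few steps
late_gap = abs(a[50] - b[50])         # typically order-one after 50 steps
```

This is the precise sense in which meteorologists can explain why their forecasts degrade: the model itself tells you the horizon beyond which any initial measurement error swamps the prediction.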

While there’s been a lot of kerfuffle over exactly who ‘predicted’ the crisis and what that means, the most important point is that those who did warn of a crisis like the one we’re going through identified key mechanisms (debt build-up, asset price bubbles, global imbalances) and argued that, unless these processes were combated, we’d be in danger. I appreciate that the ‘stopped clock’ problem really is a problem: there are so many people predicting crises that eventually, one of them will seem to be right. However, this is easily countered by using the same framework to make predictions outside the crisis (predictions in the general sense of the word, not just about the future). For example, Peter Schiff predicted a financial crisis quite a lot like the one we’ve been through, but he also predicted hyperinflation, suggesting that his model is wrong in some way. Conversely, endogenous money models are consistent with both the financial crisis and the subsequent weak effects of monetary stimulus: since money is created as debt, private debt can have major effects on the economy, and since banks do not lend based on reserves, there’s no reason for an increased monetary base to produce inflation.

Finally, while natural disasters are almost entirely exogenous phenomena, the economy is a social system, so we have a degree of control over it, both individually and collectively. It’s perhaps a testament to how the neoclassical approach naturalises the economic system that some economists feel recessions can be compared to natural disasters (not that this would absolve them of responsibility for alleviating their effects). Since economic models are frequently used to inform government policy, it’s quite clear that economists appreciate this point; however, since they often admit they don’t really understand what causes recessions, they are doing the equivalent of sending us up in toy planes. It’s one thing to say you don’t fully understand the economy; it’s quite another to say this and then recommend ways to manage it. But the relationship between economists and policy is a matter for the next part of the series.

The next instalment will be part 4: masters of the universe?


Pieria: Capital in Piketty’s ‘Capital’

I have a new post on Pieria, where I finally get round to commenting on Thomas Piketty’s Capital in the 21st Century. My focus is on capital itself, how Piketty defines this and whether or not critics such as Jamie Galbraith are right to attack him for his choice of definition:

An important but perhaps under-discussed aspect of Thomas Piketty’s Capital in the 21st Century is Piketty’s definition of capital itself, and the implications this has for his thesis and its critics. Capital is a notoriously tricky concept to define, and many have taken issue with Piketty’s definition and the framework he builds around it. Typically, the implication is that a more Correct understanding of capital leads to vastly different conclusions to Piketty’s, especially with regards to his conclusions on inequality.

The verdict is that Piketty’s definition of capital is a lot more nuanced than critics make out, and typically (though not always) their critique just reflects a pet peeve of theirs, whether this is human capital, the CCCs or what have you. It’s not that Piketty’s definition is ‘correct’, or that it chimes well with other historical usages of the term (such as Marx’s); it’s merely that Piketty’s own definition is sufficient for showing what he wants to show: the dynamics of inequality under capitalism.

I’m also not really sure about Paul Krugman’s contention that Piketty “relies mainly on conventional, mainstream economics” – sure, he uses some mainstream concepts, but begrudgingly, and only as one angle of support for his broader historical, political and statistical analysis. This analysis stands or falls apart from frameworks like the production function, marginal productivity theory or the Solow Growth Model, even if some economists are eager to interpret it entirely within such frameworks. The fact is that while Piketty’s work cannot be construed as purely ‘heterodox’ or ‘mainstream’, it’s definitely far closer to how economics should look in the future: holistic, empirical, and using mathematics only when needed. Hopefully economists of all stripes can recognise this instead of focusing too much on unimportant details.


The Crisis & Economics, Part 2: The EMH-Twist

This is part 2 in my series on why and how the 2008 financial crisis is relevant to economics. The first instalment discussed why the good times during the boom are no excuse for the bad times during the bust. This instalment discusses the use of the Efficient Markets Hypothesis (EMH) to defend economists’ inability to forecast the movements of financial markets, hereafter referred to as the ‘EMH-twist’.

Argument #2: “The EMH claims that crises are unpredictable, so the fact that economists didn’t predict the crisis is not a problem for economics at all.”

As far as I’m aware, this argument was first used by John Cochrane, and it has reappeared multiple times since then: for example, it was more recently referenced by Andrew Lilco, who was sadly echoed by the generally infallible Chris Dillow. The idea is that financial markets process new information faster than any one individual, government or institution could, and so to most people they may seem to behave unpredictably. However, economists cannot be expected to understand these sudden movements better than anyone else, so expecting them to foresee market crashes is absurd. As Cochrane puts it, “it makes no sense whatsoever to try to discredit efficient market theory in finance because its followers didn’t see the crash coming”.

However, this logic is completely circular. The mere fact that a theory exists which claims crises are unpredictable does not mean that, if a crisis is not predicted – particularly by the proponents of said theory – this shows the theory is correct. If the EMH had, to the best of our knowledge, been shown to be correct, then the EMH-twist might hold some water, but we must establish this truth separately from the fact that its proponents didn’t predict the crisis (David Glasner recently made a similar point about the ubiquitous use of rational expectations in macroeconomics). While Cochrane does claim that the central tenet of the EMH “is probably the best-tested proposition in all the social sciences”, he fails to reference supporting evidence, and in fact goes on to add substantial qualifications to the empirical record of the EMH, admitting that market volatility might happen “because people are prey to bursts of irrational optimism and pessimism”.

It is not necessarily my aim to establish the truth or falsity of the EMH here: it has been discussed extensively elsewhere. However, there are a couple of key tests for whether or not it applies to 2008. The first is whether or not anybody – adherents or detractors of the theory – foresaw the crisis. While the EMH claims nobody could, this is clearly wrong: some people in finance made a lot of money; some economists not only called it but had frameworks that explained it well once it happened; quite a few people (even mainstream economists) at least noted the existence of a housing bubble. The EMH can attribute these predictions to simple luck, but now we’re back to circularity: assume the EMH is true, then appeal to it to rationalise any possible market movement. The second test of the EMH, since it depends on new information to trigger volatility, is to ask exactly what new information became available just before the crash. However, the financial instruments key to 2008 had been used by investment banks for a good few years prior to the crash, so it’s quite difficult to claim that new information about these suddenly became available in 2007-8. Instead, what happened was a collective realisation that everyone knew very little about the products they’d been trading, resulting in a classic panic.

In fairness, there is an element of truth to the EMH-twist. Financial markets are incredibly difficult to understand, and the argument that economists don’t yet understand them, along with a mea culpa, might be acceptable – there are many things natural scientists still don’t understand, such as dark matter, or what happened ‘before’ the big bang. However, the EMH-twist as used by Cochrane et al. is phrased more strongly: it is the assertion that economists can’t and shouldn’t understand the movements of financial markets, simply because the EMH allows them to wash their hands of the task. We wouldn’t accept this kind of attitude from any other field, so I can’t help but feel Cochrane’s claim that “the economist’s job is not to ‘explain’ market fluctuations after the fact” can only be met with: “then what is the economist’s job, exactly?”

The next instalment in the series will be part 3: econoracles.


The Crisis & Economics, Part 1: The Boom & The Bust

For critics of mainstream economics, the 2008 financial crisis represents the final nail in the coffin for a paradigm that should have died decades ago. Not only did economists fail to see it coming, they can’t agree on how to get past it and they have yet to produce a model that can understand it fully. On the other hand, economists tend to see things quite differently – in my experience, your average economist will concede that although the crisis is a challenge, it’s a challenge that has limited implications for the field as a whole. Some go even further and argue that it is all but irrelevant, whether due to progress being made in the field or because the crisis represents a fundamentally unforeseeable event in a complex world.

I have been compiling the most common lines used to defend economic theory after the crisis, and will consider each of them in turn in a series of 7 short posts (it was originally going to be one long post, but it got too long). I’ve started with what I consider the weakest argument, with the quality increasing as the series goes on. Hopefully this will be a useful resource to further debate and prevent heterodox and mainstream economists (and the public) talking past each other. Let me note that I do not intend this series as a simple ‘rebuttal’ of every argument (though some, especially the weaker ones, are rebutted directly), but as a cumulative critique. Neither am I accusing all economists of endorsing all of the arguments presented here (especially the weaker ones).

Argument #1: “We did a great job in the boom!”

I’ve seen this argument floating around, and it actually takes two forms. The first, most infamously used by Alan Greenspan – and subsequently mocked by bloggers – is a political defence of boom-bust, or even of capitalism itself: the crisis, and others like it, are just noise around a general trend of progress, and we should be thankful for that progress instead of focusing on such minor hiccups. The second form is more of a defence of economic theory: since the theory does a good job of explaining/predicting the boom periods, which apply most of the time, it’s at least partially absolved of failing to ‘predict’ the behaviour of the economy. Both forms of the argument suffer from the same problems.

First, something which is expected to do a certain job – whether it’s an economic system or the economists who study it – is expected to do that job all the time. If an engineer designs a bridge, you don’t expect it to stand up only most of the time. If your partner promises to be faithful, you don’t expect them to be faithful only most of the time. If your stockbroker promises to make money but loses it after an asset bubble bursts, you won’t be comforted by the fact that they were making money before the bubble burst. And if an economic system, or set of policies, promises to deliver stability, employment and growth, then the fact that it fails to do so every 7 years means it is not achieving its stated objectives. In other words, the “invisible hand” cannot be acquitted of the charge of failing to do its job by arguing that it only fails every so often.

Second, the argument implies there was no causal link between the boom and the bust, so the stable period can be understood as separate from the unstable period. Yet if the boom and the bust are caused by the same process, then understanding one entails understanding the other. In this case, the same webs of credit which fuelled the boom created enormous problems once the bubble burst and people found their incomes scarce relative to their accumulated debts. Models which failed to spot this process in its first phase inevitably missed (and misdiagnosed) the second phase. As above, the job of macroeconomic models is to understand the economy, which entails understanding it at all times, not just when nothing is going wrong – which is when we need them least.

As a final note, I can’t help but wonder if this argument, even in its general political form, has roots in economic theory. Economic models (such as the Solow Growth Model) often treat the boom as the ‘underlying’ trend, buffeted only by exogenous shocks or slowed/stopped by frictions. A lot of the major macroeconomic frameworks (such as Infinite Horizons or Overlapping Generations models) have two main possibilities: a steady-state equilibrium path, or complete breakdown. In other words, either things are going well or they aren’t – and if they aren’t, it’s usually because of an easily identifiable mechanism, one which constitutes a “notably rare exception” to the underlying mechanics of the model. Such a mentality implies problems, including recessions, are not of major analytical interest, or are at least easily diagnosed and remedied by a well-targeted policy. Consequently, those versed in economic theory may have trouble envisaging a more complex process, whereby a seemingly tranquil period can contain the seeds of its own demise. This causes a mental separation of the boom and the bust periods, resulting in a failure to deal with either.

The next instalment in the series will be part 2: the EMH-twist


Should Libertarians Embrace ‘Left’-Heterodox Economics?

Recent posts by Noah Smith, David Henderson and Daniel Kuehn on the relationship between economics, ‘free markets’ and policy in general got me thinking about libertarians and how accepting they should be of marginalist economics, as well as how open they should be to non-marginalist alternatives. It seems to me there is an unspoken bond between marginalist economics and libertarianism (even Austrianism shares some major features with neoclassical economics), and so there may be a tendency for libertarians to have strong priors against post-Keynesian, Sraffian, Marxist, Behavioural, Ecological and other types of economics that dispute this general framework.

Let me note that I’m not accusing libertarians of being generally hostile to heterodox economics – I’m sure there are some who are and some who aren’t. Instead, I’m just warning against such a possibility, and offering some heterodox ideas to which libertarians might be receptive.

Behavioural/post-Keynesian consumer theory: behavioural economics sometimes elicits rebukes from libertarians, as it seems to imply that ordinary people are not able to make decisions rationally, and therefore that policy makers should help them along their way. Naturally, libertarians object to this idea, questioning the experimental methods of behavioural economics, pointing out that policy makers are themselves imperfect, and so forth. I’m not going to comment on the efficacy of these arguments here – sometimes they are fair, sometimes less so. Instead, what I want to point out is that while some behavioural economics implies a role for activist policy, a model of consumers which differs from the optimising agent does not necessarily render them irrational and therefore ripe for intervention.

One such example is a version of the mental accounting model, used in post-Keynesian consumer theory, in which consumers organise their budget into categories before making spending decisions. Consumers will not spend money in one category until they have had their needs in a more ‘basic’ or ‘fundamental’ category satisfied, which creates a Maslow-esque hierarchy of spending – starting with necessities such as food & shelter and culminating in yachts & modern art. This means relative price changes do not have as much of an impact on the type of goods bought as implied by the utility maximising model; instead, the amount spent on different types of goods is primarily determined by the consumer’s level of income.
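As a rough sketch of how such a hierarchy behaves (the category names and amounts below are entirely invented for illustration), the consumer fills each budget category in order of priority, and only whatever income remains ever reaches the discretionary end:

```python
# Toy lexicographic / mental-accounting consumer: needs are filled
# strictly in order of priority, and luxuries get only the residual.
NEEDS = [("food", 300), ("shelter", 700), ("transport", 150), ("leisure", 250)]

def allocate(income):
    """Allocate income across the hierarchy, most basic needs first."""
    spending = {}
    remaining = income
    for category, need in NEEDS:
        spent = min(need, remaining)  # fill this category as far as income allows
        spending[category] = spent
        remaining -= spent
    spending["luxuries"] = remaining  # residual after all needs are met
    return spending

# A poorer and a richer consumer differ mainly in *which* categories
# their income reaches, not in marginal trade-offs between goods.
print(allocate(900))   # shelter only partly covered; leisure and luxuries get nothing
print(allocate(2000))  # all needs met; the residual flows to luxuries
```

The model’s point falls out immediately: within this sketch, relative prices matter much less than which rungs of the hierarchy a given income can reach.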

On first inspection, this might seem to imply a tirade against the efficacy of the price system for coordinating preferences and scarcity, as well as a comment on the ‘wastefulness’ of inequality (and perhaps it could be interpreted as such). However, this doesn’t necessarily make the theory generally ‘anti-libertarian’. In fact, one major implication is that placing high taxes on something low in someone’s hierarchy will not have much impact on their spending, and hence ‘sin taxes’ – which are a major expense for the poor – will not substantially reduce their consumption of alcohol and tobacco; instead, these things will simply take up more and more of their income (which is pretty consistent with the available evidence). This implies that paternalistic tax policies aimed at the poor will generally fail to achieve their aims.

The Market for Lemons (TML): George Akerlof’s famous paper explored information asymmetry, using used car markets as its primary example. Akerlof was trying to understand what buyers do when they face a product of unknown quality, and argued that since they are unsure, they will only be willing to bid the average expected value of a car in the market. However, if the seller is selling a ‘good’ car, its value will be above this average, so the seller will not sell it at the price the buyer offers. The result is that the best sellers drop out of the market, creating a cumulative process which results in the market unravelling completely.
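The unravelling process can be sketched in a toy simulation (a crude caricature of Akerlof’s model, not his original formulation, and all numbers are invented): buyers repeatedly bid the average quality of the cars still for sale, and any seller whose car is worth more than the bid withdraws.

```python
import random

random.seed(1)
# 10,000 cars whose quality (value to the seller) is uniform on [0, 1000]
market = [random.uniform(0, 1000) for _ in range(10_000)]

for round_no in range(20):
    bid = sum(market) / len(market)           # buyers offer the average quality
    market = [q for q in market if q <= bid]  # owners of better-than-average cars withdraw
    if len(market) == 1:
        break  # only the very worst lemon is left

# The market has effectively unravelled: only the worst lemons remain.
print(len(market), max(market))
```

The simulation unravels exactly as the theory predicts, of course – the interesting question is why real used car markets don’t.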

Though theoretically neat and compelling, this ‘seminal’ example of market failure has always struck me as incredibly weak, for the simple reason that used car markets do not actually fall apart. Why? Maybe people aren’t rational maximisers (for example, in another nod to behavioural economics, it may be that buyers’ irrational overconfidence leads them to go ahead with a purchase, even if it’s statistically likely they’ll get a ‘lemon’). Ultimately, though, I’d argue the answer is that capitalism – or if you prefer, ‘the market’ – is a network of historically contingent institutions and social interactions, rather than abstract individuals trading in a vacuum where outcomes are mathematically knowable. The reason used car markets work ‘despite’ information asymmetry is the hard-won trust and norms between buyers and sellers, and the intermediaries such as Auto Trader who spring up to help both sides avoid being ripped off. I’ve not seen anyone provide an example of the process TML outlines actually occurring, so I don’t see what it adds to our understanding of markets.

To be fair to Austrians, they have been talking about ‘the market as a social process’ for a long time, and in places have disputed the Lemons Model on similar grounds to the above. Hence they have something in common with old institutionalists, Marxists (to a degree) and perhaps even hard-to-place heterodox economists like Tony Lawson, who argues economics should primarily be a historical, rather than mathematical, subject. To put it another way, while heterodox economists typically advocate a move away from marginalist economics to understand why capitalism doesn’t work, such a move may also be necessary to understand why it does.

Mark-up Pricing: Post-Keynesian, Sraffian and Institutionalist economics typically subscribe to the cost-plus theory of prices, which states that businesses set prices at their average cost per unit, plus a mark-up. Furthermore, they avoid price changes where possible, preferring to keep their prices stable for long periods to yield a target rate of profit, varying quantity rather than price, and keeping spare capacity and stocks so that they can do so. The problem libertarians might have with this is that it implies prices are somewhat arbitrary, do not usually ‘clear’ markets, and do not adjust especially smoothly to the preferences of consumers. However, while these things may be true, they do not mean mark-up pricing comes with no benefits.
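A minimal sketch of the pricing rule (all figures, including the 25% mark-up, are invented): the firm prices off its average unit cost at normal output, then holds that price fixed while demand swings are absorbed by varying production and stocks rather than by repricing.

```python
def markup_price(total_cost, normal_output, markup=0.25):
    """Cost-plus price: average cost per unit at normal output, plus a mark-up."""
    unit_cost = total_cost / normal_output
    return unit_cost * (1 + markup)

# A firm with 80,000 of costs at a normal output of 10,000 units
price = markup_price(total_cost=80_000, normal_output=10_000)
print(price)  # 10.0

# Demand fluctuates; the posted price does not. The firm varies
# production and runs stocks up or down instead.
for demand in (9_000, 10_500, 12_000):
    print(f"demand {demand}: price stays at {price}")
```

Note that nothing in the rule refers to current demand at all – which is precisely the feature that makes the resulting prices stable, and precisely what marginalist ‘market-clearing’ stories leave out.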

In my opinion, one such benefit is stability: I’m glad I can rely on prices only changing every so often, and that if the hairdresser is busy they don’t raise their price to ‘clear’ the market. Furthermore, the fact that firms keep buffer stocks and can adjust quantity instead of price allows them to deal with uncertainty and unexpected demand more easily, making them more adaptable to real world conditions than if they always squeezed every drop out of their existing capacity. I’m not going to pretend post-Keynesian pricing theory doesn’t imply some anti-libertarian policies (particularly with regard to price regulation), but it’s certainly not a one-sided idea, and its policy implications are open to further interpretation.

I generally prefer to refrain from immediately linking everything to policy as I have done above, because, well, there are enough people doing that. However, the examples I’ve given actually help to demonstrate a point about the relationship between economic analysis and policy: a theory whose premises seem to imply a certain policy may not do so once you’ve followed it through to its conclusions. What’s more, the same analysis can seem to imply different policies from different perspectives (at its most extreme, Austrian Business Cycle Theory seems to imply that even a teensy regulation will send capitalism off the rails, which could be read as a damning criticism if you were a leftist). This means calls for pluralism in economics should be embraced by all, even if on the surface some ‘alternatives’ to mainstream economics seem to conflict with one’s world view.

