Archive for category Economics
This is the final part in my series on how the financial crisis is relevant for economics (here are parts 1, 2, 3, 4, 5 & 6). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline, with the quality of the arguments increasing as the series goes on. This post discusses probably the strongest claim that economists can make about the crisis: they do understand it, and any previous failures were simply due to inattention or misapplication, rather than fundamental problems with the theory itself.
Argument #7: “Economists had the tools in place, but we overspecialised and systemic problems caught us off guard.”
Raghuram Rajan was probably the first to take this sort of line, arguing that overspecialisation prevented economists from using the tools they had to foresee and deal with the crisis. But while Rajan’s piece also made a number of other criticisms of economics, over time the discipline seems to have reasserted this argument more strongly: not too long ago, Paul Krugman argued that although “few economists saw the crisis coming…basic textbook macroeconomics has performed very well”. Similarly, Tim Harford claimed at an INET conference last year that the tools necessary to understand the crisis already existed in mainstream economics, and the problem was simply one of knowing when and how to use them. He compared financial crises to engineering disasters, which were understandable using current knowledge but happened nonetheless, due to negligence or oversight on the part of the engineers.
So how true is this claim? Certainly, a number of economic models exist for understanding things like panics, liquidity problems and moral hazard. The best known of these are the Diamond-Dybvig (DD) model of bank runs – which shows what happens when banks have liquid liabilities (such as demand deposits) which must be available at any time, but have illiquid assets (such as loans) which are not fully convertible to cash on demand – and the Akerlof-Romer (AR) model of financial ‘looting’, which shows that deposit guarantees may create moral hazard as investors gamble with other people’s money. If you combine tools like these, which help us understand the financial sector, with tools like IS/LM, which tell us how to escape a downturn once it happens, in theory you have a pretty solid set of tools for dealing with the recent crisis.
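The liquidity mismatch at the heart of the DD story can be sketched in a few lines of code. This is a toy illustration, not the actual model: all balance-sheet figures and the fire-sale discount are invented for the example.

```python
# Toy illustration of the liquidity mismatch behind Diamond-Dybvig bank runs.
# A sketch, not the actual model; all balance-sheet figures are invented.

def bank_survives(withdraw_frac, deposits=100.0, cash=20.0,
                  illiquid=80.0, fire_sale_discount=0.5):
    """Can the bank meet withdrawals of `withdraw_frac` of deposits?

    Liquid cash is paid out first; beyond that, illiquid loans must be
    sold at a fire-sale discount, so a large enough run becomes
    self-fulfilling even though assets nominally cover deposits.
    """
    demanded = withdraw_frac * deposits
    if demanded <= cash:
        return True
    shortfall = demanded - cash
    return shortfall <= illiquid * fire_sale_discount

# Two equilibria: if few withdraw, the bank is fine and nobody need run;
# if most withdraw, the bank fails, vindicating the panic.
assert bank_survives(0.15)
assert not bank_survives(0.75)
```

The two assertions capture the model's multiple equilibria: whether running on the bank is rational depends entirely on what everyone else is expected to do.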
The first objection I have to these models is that many of their insights could be considered trivial, or at least common sense. The DD model came to the conclusion that deposit insurance might be a helpful way to prevent bank runs, which is hardly a revelation considering it came 50 years after FDR and the general public figured out the same thing. The AR model came to the conclusion that deposit insurance and limited liability might create perverse incentives as banks gamble ‘other people’s money’, which again must have been obvious to the policymakers who put Glass-Steagall and other financial regulations in place. Perhaps this point is a little harsh, and I don’t want to overstate it: on the whole, these papers are asking important questions, and in the case of AR they answer them well. Nevertheless, there’s no point in economic theory if it can’t tell us things we didn’t already know. Even the idea that central banks should provide emergency liquidity to banks in trouble is quite obvious, and it predates modern economic theory by a good while.
However, this is not the most important point. The issue I have with these models is that in many of them everything interesting happens outside the model. In Krugman’s favoured IS/LM, a ‘crisis’ is represented by a simple shift in the IS curve, which in English means that a decline in production is caused by…a decline in production. Where this decline came from is presumably a matter for outside the model. Even the most sophisticated macroeconomic models often take a similar tack, merely describing what happens when the economy suffers from a shock, without exploring possible causes for the shock. Likewise, the DD model suggests bank runs happen because everyone panics, but what causes these panics is not explored: it is assumed depositors’ expectations are exogenous, whether fixed or following a stochastic (random) pattern. Yet studies such as Mishkin (1991) find that bank runs generally follow periods of stress elsewhere in the economy, a fact which DD simply cannot capture.
Economic models are narrowly focused like this because they are generally designed to answer straightforward questions about causality: does the minimum wage cause unemployment; does expansionary fiscal policy cause growth; does a mismatch between illiquid assets and liquid liabilities cause bank runs. But the crisis was an endogenously generated process in which different aspects of the economy – the housing market, the financial sector, government policy – combined to create something bigger than the sum of its parts, and in which it is not possible to isolate a single cause. Consider: the collapse of Lehman Brothers may have triggered the worst of the crisis, but was it really to blame? The economy was already in a fragile place due to systemic trends that can’t necessarily be traced to a single law, institution or actor. Just as with the assassination of Franz Ferdinand and the outbreak of World War 1, we have to look beyond the immediate trigger and focus on the general conditions if we truly want to understand what happened.
To sum up, the economists above want to argue that they are only culpable insofar as they overspecialised and failed to focus on the right areas in this particular instance. However, the reason for this was not just personal myopia; it’s because their chosen methodology means they lack the tools to do so. A model of one aspect of the economy which takes the effect of other areas as exogenous will fail to detect potential positive feedback loops and emergent properties. A model which takes the crisis itself as an exogenous ‘shock’ is even worse, and in many ways is hardly a model of the crisis at all, since it offers no understanding of why crises might happen in the first place. Are there alternatives? I have previously written about how post-Keynesian and Marxist models offer more comprehensive understandings of the financial crisis and antecedent decades; I shan’t repeat myself here. Other promising areas include network theory, evolutionary economics and Agent-Based Modelling. All of these share an emphasis on the system as a whole instead of focusing on isolated mechanics.
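To give a flavour of what the network-theoretic alternative looks like, here is a minimal interbank contagion sketch. The exposure network, capital buffers and failure rule are all invented for illustration; the point is only that failures propagate through links between agents rather than arriving as an exogenous shock.

```python
# A toy interbank contagion cascade, in the spirit of the network-theory
# approaches mentioned above. The exposure network, capital buffers and
# failure rule are invented for illustration.

def cascade(exposures, capital, initially_failed):
    """Propagate failures: a bank fails once its losses on loans to
    already-failed banks exceed its capital buffer."""
    failed = set(initially_failed)
    changed = True
    while changed:
        changed = False
        for bank, buffer in capital.items():
            if bank in failed:
                continue
            loss = sum(amount for (lender, borrower), amount in exposures.items()
                       if lender == bank and borrower in failed)
            if loss > buffer:
                failed.add(bank)
                changed = True
    return failed

# B's failure wipes out A (which lent B more than its buffer can absorb),
# but the cascade stops before reaching C.
exposures = {("A", "B"): 6.0, ("B", "C"): 4.0, ("C", "A"): 1.0}
capital = {"A": 5.0, "B": 5.0, "C": 5.0}
assert cascade(exposures, capital, {"B"}) == {"A", "B"}
```

Even this crude version shows why the system-level view matters: whether one failure stays contained or cascades depends on the structure of the whole network, not on any single institution's balance sheet.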
I see the crisis in economics as a shock (!!) which hits macroeconomics hard and reverberates throughout the discipline. Regardless of the pleas of some, such events can be seen coming, and they cannot be handwaved away as part of an overall upward trend. And even if individual economists are not in control of policy, key economists have substantial influence, not to mention the theories and ideas in economics as a whole. Recent developments in macroeconomics still leave a lot to be desired, while previously existing tools suffer from similar problems: a lack of holism; a wooden insistence on microfoundations; and an attempt to understand everything in terms of simplistic causal links, often relative to a frictionless baseline. Finally, although many areas of economics are not directly indicted by the crisis, many of them share key problems with macroeconomics, and as such the crisis should prompt at least a degree of introspection throughout the discipline.
John Quiggin recently posted on the “Broken Window Fallacy” (BWF), a parable beloved by libertarians, originating from Frederic Bastiat but finding its most modern exposition in Henry Hazlitt. The basic idea is that while breaking a window will seem to stimulate spending by providing work for a glazier, the money used to employ him could have been spent elsewhere, say by employing a tailor to make a new suit. Therefore, as a result of the broken window the community has only a window (what they started with), rather than both the original window and a new suit. We must look at the “unseen” in order to understand the true economic effects of smashing the window.
Quiggin tries to refute the fallacy thusly:
Implicit in the crowd’s reaction is the assumption that glaziers are short of work. If (as sometimes happens) glaziers have more jobs than they can handle, then there is no extra window – at best, the shopkeeper’s order simply displaces some other, less urgent, repair. Similarly, for Hazlitt’s riposte about the tailor to work, there must exist unemployed resources in the tailoring industry, so that the shopkeeper’s suit represents an addition to output. If not, the additional demand from the shopkeeper will raise the price of suits marginally, just enough to lead some other customer to buy one less suit. So, the story seems to imply that the economy is in recession, with unemployment across a wide range of industries.
Yet “rais[ing] the price of suits marginally” – such that the person most willing to pay receives the suit – is precisely what libertarians have in mind when they envision a market economy functioning nicely. Under the assumption of full employment and no broken window, the shopkeeper purchases a suit while the glazier is put to work elsewhere creating a new window. Under the assumption of full employment and a broken window, the shopkeeper employs the glazier, meaning that somebody who previously would have employed the glazier goes without, while the tailor is put to work for somebody who likes the suit slightly less than the shopkeeper. Aggregate welfare and wealth are decreased, even if the flow of production is the same.
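The full-employment accounting in this paragraph can be written out as a trivial calculation. This is a sketch: it counts only the $50 of destroyed window and ignores the (smaller) welfare loss from the suit going to the second-keenest buyer, and the initial endowment figure is invented.

```python
# Trivial bookkeeping for the full-employment version of the parable.
# A sketch: counts only the destroyed window; the $100 endowment is invented.

def community_wealth(window_broken, initial_wealth=100.0,
                     window_value=50.0, suit_value=50.0):
    """Dollar value of goods at the end of the story.

    Under full employment a suit gets made either way; breaking the
    window just destroys $50 of wealth that production must replace.
    """
    wealth = initial_wealth + suit_value
    if window_broken:
        wealth -= window_value
    return wealth

# The flow of production is identical, but the community is $50 poorer.
assert community_wealth(False) - community_wealth(True) == 50.0
```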
Quiggin attempts to introduce the assumption of unemployment to counter the standard story:
With these facts in mind, we can tell a different story. Suppose that the glazier, having been out of work for some time, has worn out his clothes. Having fixed the window and been paid, he may take his $50 and buy a new suit. To make the story stop here, we’ll suppose that the tailor is a miser (a vice traditionally associated with the clothing industry, as with Silas Marner), and puts the money under his mattress. So, in this version of the story, the glazier and the tailor are both paid, and the social product is increased by a new window and a new suit.
But the social product is not increased. If the window were not broken, we’d have a window and a new suit. When the window is broken, we have a window and a new suit. The allocation of the suit has changed, but not the total product. Quiggin will never refute the BWF like this, on its own terms, because if you start with the premise that a window gets broken, you will inevitably end up at the conclusion that the world is worse off than before. Once the window has been broken, we have lost $50 worth of window and will have to replace it. Depending on your ethical presuppositions, you might view the redistribution as desirable, and the employment of the glazier as an end in itself, but this is another debate.
And this is the real problem with the BWF: it’s a complete straw man. No one, anywhere, ever, has claimed that ‘breaking windows’ is a desirable economic strategy, or that it will somehow add to wealth or welfare. True, you can pick and choose your own auxiliary assumptions to add a ‘silver lining’ to the story. Given that the window is broken, the fact that the glazier then wants to buy a new suit is better than if he just hoarded the money. Perhaps the new window is slightly nicer than the old one. Perhaps, as a commenter suggested, the shopkeeper has an emergency fund which is “psychologically separate” from his other money, so he still buys both the window and the suit. Or maybe the glazier has an apprentice who benefits from the training when he otherwise wouldn’t, while the tailor does not. We can do this all day but ultimately, the broken window diverts resources – even if they were previously idle – away from increasing wealth and welfare, and towards mere replacement.
Should we utilise idle resources? The answer to this question needn’t have anything to do with breaking windows. Quiggin, like most critics of the BWF, implicitly recognises this, which is why he stresses that none of what he says “means that it’s a good idea to go around smashing windows during recessions.” So why start with the assumption of a broken window? We could instead tell an alternative story where there is no broken window. The tailor is unemployed, and there is a kid who wants a shirt but cannot afford it. The government prints $50 and gives it to the kid, who buys a shirt from the tailor, increasing the social product without any broken windows. This story is similarly arbitrary, demonstrating the ease with which we can formulate a parable to come to the conclusion we like. But the question of which story’s assumptions (in particular unemployment) are true or not is an empirical matter.
Quiggin, in trying to refute an abstract parable built on arbitrary assumptions by introducing his own, slightly different arbitrary assumptions, is fighting a losing battle. The BWF may or may not be useful for demonstrating a certain point, but it is not a model of the economy and it is not always and everywhere applicable to economic problems. If you are arguing with somebody who thinks repeating ‘Broken Window Fallacy’ at you will settle the debate, you aren’t going to convince them by telling them the ‘Broken Window Fallacy, version 2’. You simply need to stop having the debate in terms of Broken Windows, and start having it in terms of what is actually going on in the economy. Otherwise you’ll be forever trapped at a useless level of abstraction.
This is part 6 in my series on how the financial crisis is relevant for economics (here are parts 1, 2, 3, 4 & 5). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post discusses the argument that the ‘crisis in economics’ is confined only to macroeconomics, which is actually a minority pursuit, so attacking all of economics is wrongheaded.
Argument #6: “Sure, modern macroeconomics is pretty weak. But most economists don’t even work on macro, so they are unaffected.”
Quite a lot of economists consider the debate about the financial crisis irrelevant to what they do. After all, why should a crisis at the macro level invalidate econometrics, game theory or auction theory? Attacking these fields and others for the recession is like blaming mechanical engineers for a bridge collapse. In fact, many economists hold macro in the same (low) esteem as the public: Daniel Hamermesh goes so far as to claim that “most of what the macro guys do in academia is just worthless rubbish”, but adds that the kind of field he works in “has contributed tremendously and continues to contribute”. Even the discipline’s most vehement defenders are willing to concede macroeconomics is bunk.
There is a considerable amount of truth to this view. While there may be critiques of all areas in economics, the claim that the financial crisis is what’s thrown them into disrepute is a non sequitur. Critics should therefore be careful to distinguish macroeconomists from their colleagues when (rightly) dismissing the former’s failure to deal with the crisis. Nevertheless, there are two major ways in which the failings of macroeconomics are symptomatic of more general problems with economic theory, so the discipline as a whole cannot be let off the hook.
The first is a lack of holism. Many economic theories are built in an abstract theoretical vacuum, with little reference to what is happening around the individual agent. But the importance of the macroeconomy for behaviour in specific sectors or by specific actors cannot be ignored. For example, if you drop the macroeconomic assumption of full employment, this affects theories in areas from public goods provision to labour markets to Walrasian equilibrium. Consumers’ and firms’ expectations are strongly informed by the macroeconomic and political environment around them. Considering the effects of political institutions such as unions on the labour market, but ignoring their broader political role, can create narrow and misguided conclusions about their efficacy. New Institutional economics often takes ‘institutions’ as exogenous, failing to consider the two-way interaction between institutions and agents. The in-vogue ‘Randomised Control Trial’ restricts the economic environment to such a degree that it’s questionable whether one can generalise the results at all. And so forth.
Don’t get me wrong: there is an obvious case for different areas of economics being separate from one another: taking certain parameters as exogenous to look at a certain area, and using different tools for different areas. But even the most specialised fields should never forget the broader scope and context of their ideas, and this should be reflected in the theoretical approach. Thomas Piketty’s Capital is a shining example of how to intertwine theory, history, statistics and politics to build a better understanding of capitalism. Another is the attempt by ecological economists to place the economy in its environmental context, rather than simply taking resource endowments as a given and assuming pollution just sort of…disappears, save for its monetary cost. Minsky’s Financial Instability Hypothesis shows one way to make an effective link between the behaviour of investors and broader economic performance, integrating finance and macroeconomics. Overspecialisation may cause economists to miss these key insights.
The second issue is that many of the problems with macroeconomics can be applied to, or are relevant for, other areas of the discipline. One of the key complaints about macroeconomics – that it relies on microfoundations – is a problem precisely because it imports unrealistic assumptions about economic behaviour from microeconomics. The problem of having an abundance of abstract models, each seeking to explain one or two ‘things’, but with no real way to tell which model is applicable and when, applies not just to macroeconomics but also to behavioural economics, microeconomics and oligopoly theory. Endogenous money, which is central to macroeconomists’ lack of understanding of the crisis, also has major implications for finance. To reuse my above analogy, you might well be concerned about mechanical engineers after a bridge collapse if they largely relied on the same methods used by the civil engineers.
Your average economist is probably right to point out that the public’s ire should be focused not on them, but on macroeconomics. However, this doesn’t mean that they are immune from the serious questions the crisis raised about the methodology, assumptions and ethics of the field. It’s a case-by-case matter which areas are impacted and by how much, but any attempt to box off macroeconomic theory entirely should be resisted. There’s plenty of room for fruitful debate about all areas of economic theory, much of which will benefit from being informed by the shortcomings of economic theory as exposed by the financial crisis.
This is part 5 in my series on how the financial crisis is relevant for economics (parts 1, 2, 3 & 4 are here). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post explores the possibility that macroeconomics, even if it failed before the crisis, has responded to its critics and is moving forward.
Argument #5: “We got this one wrong, sure, but we’ve made (or are making) progress in macroeconomics, so there’s no need for a fundamental rethink.”
Many macroeconomists deserve credit for their mea culpa and subsequent refocus following the financial crisis. Nevertheless, the nature of the rethink, particularly the unwillingness to abandon certain modelling techniques and ideas, leads me to question whether progress can be made without a more fundamental upheaval. To see why, it will help to have a brief overview of how macro models work.
In macroeconomic models, the optimisation of agents means that economic outcomes such as prices, quantities, wages and rents adjust to the conditions imposed by input parameters such as preferences, technology and demographics. A consequence of this is that sustained inefficiency, unemployment and other chaotic behaviour usually occur when something ‘gets in the way’ of this adjustment. Hence economists introduce ad hoc modifications such as sticky prices, shocks and transaction costs to generate sub-optimal behaviour: for example, if a firm’s cost of changing prices exceeds the benefit, prices will not be changed and the outcome will not be Pareto efficient. Since there are countless ways in which the world ‘deviates’ from the perfectly competitive baseline, it’s mathematically troublesome (or impossible) to include every possible friction. The result is that macroeconomists tend to decide which frictions are important based on real world experience: since the crisis, the focus has been on finance. On the surface this sounds fine – who isn’t for informing our models with experience? However, it is my contention that this approach does not offer us any more understanding than would experience alone.
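The sticky-price example in the paragraph above can be sketched as follows. The quadratic profit-gain rule and all numbers are illustrative stand-ins, not taken from any particular model.

```python
# Sketch of the menu-cost friction described above: a firm leaves its
# price unchanged whenever the gain from adjusting falls short of the
# cost of changing it. The quadratic gain rule and all numbers are
# illustrative stand-ins.

def reset_price(current, optimal, menu_cost):
    """Reprice only if the profit gain exceeds the menu cost.

    As a crude stand-in, the gain is proportional to the squared gap
    between the current and optimal price.
    """
    gain = (optimal - current) ** 2
    return optimal if gain > menu_cost else current

# Small shocks leave the price stuck (nominal rigidity)...
assert reset_price(10.0, 10.5, menu_cost=1.0) == 10.0
# ...while large shocks trigger adjustment.
assert reset_price(10.0, 12.0, menu_cost=1.0) == 12.0
```

Note how the friction does all the work: remove the menu cost and the price snaps straight to its optimum, restoring the Pareto-efficient baseline.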
Perhaps an analogy will illustrate this better. I was once walking past a field of cows as it began to rain, and I noticed some of them start to sit down. It occurred to me that there was no use in them doing this after the storm started; they are supposed to give us adequate warning by sitting down before it happens. Sitting down during a storm is just telling us what we already know. Similarly, although the models used by economists and policy makers did not predict and could not account for the crisis before it happened, they have since built models that try to do so. They generally do this by attributing the crisis to frictions that revealed themselves to be important during the crisis. Ex post, a friction can always be found to make models behave a certain way, but the models do not make identifying the source of problems before they happen any easier, and they don’t add much afterwards, either – we certainly didn’t need economists to tell us finance was important following 2008. In other words, when a storm comes, macroeconomists promptly sit down and declare that they’ve solved the problem of understanding storms. It would be an exaggeration to call this approach tautological, but it’s certainly not far off.
There is also the open question of whether understanding the impact of a ‘friction’ relative to a perfectly competitive baseline entails understanding its impact in the real world. As theorists from Joe Stiglitz to Yanis Varoufakis have argued, neoclassical economics is trapped in a permanent fight against indeterminacy: the quest to understand things relative to a perfectly competitive, microfounded baseline leads to aggregation problems and intractable complexities that, if included, result in “anything goes” conclusions. To put it another way, the real world is so complex and full of frictions that whichever mechanics would be driving the perfectly competitive model are swamped. The actions of individual agents are so intertwined that their aggregate behaviour cannot be predicted from each of their ‘objective functions’. Subsequently, our knowledge of the real world must be informed by either models which use different methodologies or, more crucially, by historical experience.
Finally, the ad hoc approach also contradicts another key aspect of contemporary macroeconomics: microfoundations. The typical justification for these is that, to use the words of the ECB, they impose “theoretical discipline” and are “less subject to the Lucas critique” than a simple VAR, Old Keynesian model or another more aggregative framework. Yet even if we take those propositions to be true, the modifications and frictions that are so crucial to making the models more realistic are often not microfounded, sometimes taking the form of entirely arbitrary, exogenous constraints. Even worse is when the mechanism is profoundly unrealistic, such as prices being sticky because firms are randomly unable to change them for some reason. In other words, macroeconomics starts by sacrificing realism in the name of rigour, but reality forces it in the opposite direction, and the end result is that it has neither.
Macroeconomists may well defend their approach as just a ‘story telling’ approach, from which they can draw lessons but which isn’t meant to hold in the same manner as engineering theory. Perhaps this is defensible in itself, but (a) personally, I’d hope for better and (b) in practice, this seems to mean each economist can pick and choose whichever story they want to tell based on their prior political beliefs. If macroeconomists are content conversing in mathematical fables, they should keep these conversations to themselves and refrain from forecasting or using them to inform policy. Until then, I’ll rely on macroeconomic frameworks which are less mathematically ‘sophisticated’, but which generate ex ante predictions that cover a wide range of observations, and which do not rely on the invocation of special frictions to explain persistent deviations from these predictions.
New post on Pieria, discussing why inequality could be ethically ‘wrong’:
What is inequality?
Inequality is a situation where certain people have access to things – places, goods, services – which others do not. Historically, inequalities have often been enforced by fiat, such as aristocracies and guilds, or perhaps based on group characteristics, such as apartheid or slavery. In capitalist societies, we typically use property rights to restrict peoples’ access to resources. A poor man who walks into a store and tries to take something without paying will be prevented from doing so by security or the police, while a rich man who pays will not. The same applies to private schools, expensive social clubs or fine works of art. Unless you have a sufficient number of vouchers (money), you are legally and socially restricted from access to the overwhelming majority of resources in society.
Justifying inequality therefore entails arguing why some deserve more of these vouchers, and hence greater access to places, to goods and services, to social opportunities, than others. Defenders of inequality typically rely on one of three ethical arguments: just deserts, voluntarism, and grow the pie. I will consider each of these arguments in turn.
As I said on twitter, the article was definitely influenced by Matt Bruenig, but for balance here’s me saying similar things quite a while ago. The point is that contemporary debate often has it backwards: it is asked why exactly we should reduce inequality, as if that is some sort of natural baseline. But if you accept that people are born equal (which most do, even if they don’t like to say it out loud), then the question is why some are more restricted from pieces of the world than others. Defenders of inequality sometimes proceed as if the three ethical arguments above override any other concerns.
Nate Silver’s questionable foray into predicting World Cup results got me thinking about the limitations of maths in economics (and the social sciences in general). I generally stay out of this discussion because it’s completely overdone, but I’d like to rebut a popular defence of mathematics in economics that I don’t often see challenged. It goes something like this:
Everyone has assumptions implicit in the way they view the world. Mathematics allows economists to state our assumptions clearly and make sure our conclusions follow from our premises so we can avoid fuzzy thinking.
I do not believe this argument stands on its own terms. A fuzzy concept does not become any less fuzzy when you attach an algebraic label to it and stick it into an equation with other fuzzy concepts to which you’ve attached algebraic labels (a commenter on Noah Smith’s blog provided a great example of this by mathematising Freud’s Oedipus complex and pointing out it was still nonsense). Similarly, absurd assumptions do not become any less absurd when they are stated clearly and transparently, and especially not when any actual criticism of these assumptions is brushed off on the grounds that “all models are simplifications“.
Furthermore, I’m not convinced that using mathematics actually brings implicit assumptions out into the open. I can’t count the number of times that I’ve seen people invoke demand-supply without understanding that it is built on the assumption of perfect competition (and refusing to acknowledge this point when challenged). The social world is inescapably complex, so there are an overwhelming variety of assumptions built into any type of model, theory or argument that tries to understand it. These assumptions generally remain unstated until somebody who is thinking about an issue – with or without mathematics – comes along and points out their importance.
For example, consider Michael Sandel’s point that economic theory assumes the value or characteristics of commodities are independent of their price and sale, and once you realise this is unrealistic (for example with sex), you come to different conclusions about markets. Or Robert Prasch’s point that economic theory assumes there is a price at which all commodities will be preferred to one another, which implies that at some price you’d substitute beer for your dying sister’s healthcare*. Or William Lazonick’s point that economic theory presumes labour productivity to be innate and transferable, whereas many organisations these days benefit from moulding their employees’ skills to be organisation specific. I could go on, but the point is that economic theory remains full of implicit assumptions. Understanding and modifying these is a neverending battle that mathematics does not come close to solving.
Let me stress that I am not arguing against the use of mathematics; I’m arguing against using gratuitous, bad mathematics as a substitute for interesting and relevant thinking. If we wish to use mathematics properly, it is not enough to express properties algebraically; we have to define the units in which these properties are measured. No matter how logical mathematics makes your theory appear, if the units of key parameters are poorly defined, the equations will not balance dimensionally and the theory will be logical nonsense. Furthermore, it has to be demonstrated that the maths is used to come to new, falsifiable conclusions, rather than rationalising things we already know. Finally, it should never be presumed that stating a theory mathematically somehow guards that theory against fuzzy thinking, poor logic or unstated assumptions. There is no reason to believe it is a priori desirable to use mathematics to state a theory or explore an issue, as some economists seem to think.
This is part 4 in my series on economics and the crisis, which asks whether economics is really responsible for policy, and if so, how these policies may have contributed to the financial crisis. Here are parts 1, 2 & 3.
Argument #4: “Mainstream economics cannot be blamed for politicians inflating housing bubbles/pursuing austerity/deregulating the financial sector; our models generally go against this. Clearly, we do not have that much influence over policy.”
This defence really raises two questions. The first is whether or not economic theory has had a major influence on policy. The second is whether or not this influence, if it exists, is culpable in creating the financial crisis.
The first is, in my opinion, easily answered in the affirmative. While it’s entirely understandable that the majority of academic economists would scoff at the idea that they affect policy, this doesn’t have to be the case for economic theory itself to hold sway among governments. After all, economics graduates are highly sought after and employed in policymaking positions. Famous economists lunch with the president; textbooks and macroeconomic papers are full of policy discussions; prize-winning economists such as Bob Shiller acknowledge that a “problem with economics is that it is necessarily focused on policy, rather than discovery of fundamentals.” It’s hard to imagine powerful institutions such as Central Banks, the World Bank or the IMF functioning with advice from anyone but economists, and government organisations are even set up based on new ideas coming out of economics. Economics is the language in which the media discuss policy: demand, stimulus, markets, etcetera. I could go on.
However, as economists like to remind us, there’s no reason to believe that advice based on mainstream economic theory should have led to the types of ‘free market’ policies typically implicated in the financial crisis and its aftermath. Even a basic economics education will leave you with an awareness of things like information asymmetry, moral hazard and externalities, and few economists support wanton deregulation of the financial sector. Modern macroeconomics is loosely pro-stimulus, not pro-austerity. So what’s going on?
First, it should be noted that not only ‘free market’ thinking was implicated in the crisis. Central Banks around the world used inflation targeting, based on the New Keynesian idea that this would be sufficient to achieve macroeconomic stability, which blinded them to problems brewing in the financial sector. What’s more, the approach to regulation favoured by economics was, not atypically, quite narrow and didn’t favour systemic thinking. For example, I have previously spoken about Value at Risk (VaR) regulation, which forces firms to sell off assets when markets are volatile in order to reduce their measured exposure to risk. However, while this looks prudent from the perspective of any individual firm, it worsens systemic risk, because the asset sell-offs themselves drive up volatility and so trigger further forced sales. Overall, the reductionist nature of economic theory tended to blind policymakers to systemic problems and made them focus on the wrong variables, things they might not have done if they’d been familiar with more holistic viewpoints.
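To make that feedback loop concrete, here is a deliberately stylised Python sketch. Every number and update rule in it is an illustrative assumption of mine, not any actual regulatory model: a firm must keep its position’s VaR-style risk measure under a limit, a breach forces a sale, and the sale itself depresses the price and raises measured volatility, tightening the constraint further.

```python
def simulate_var_feedback(shocks, var_limit=10.0):
    """Stylised VaR feedback loop (illustrative parameters only).

    A firm must keep position * price * volatility under var_limit;
    a breach forces a sale, which depresses the price and raises
    measured volatility, potentially forcing further sales."""
    price, position, volatility = 100.0, 1.0, 0.05
    forced_sales = 0
    for shock in shocks:
        price *= (1 + shock)
        # crude exponentially weighted volatility estimate (made up for the sketch)
        volatility = 0.9 * volatility + 2 * abs(shock)
        if position * price * volatility > var_limit:  # VaR-style breach
            position *= 0.9    # forced sale of 10% of the position
            price *= 0.98      # the sale itself moves the price down
            volatility *= 1.1  # ...which raises measured volatility
            forced_sales += 1
    return forced_sales

# In calm markets the limit never binds; a single bad day (-5%) sets off
# forced selling that the calm sequence alone would never produce.
calm = [0.001] * 10
crash_then_calm = [0.001] * 5 + [-0.05] + [0.001] * 10
```

The point of the toy model is only that the constraint is procyclical: the very act of complying with it in a downturn worsens the conditions that triggered it.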
Having said this, it’s clear that at the heart of the financial crisis were lax regulatory policies, justified by a belief in the self-stabilising power of financial markets. And while a majority of individual economists may not endorse such a view, theoretical frameworks or ‘ways of thinking‘ came out of economics which were used to justify this deregulation. Whether or not efficient markets, perfect competition, rational expectations and other theories which imply financial markets will run smoothly are endorsed by most economists, the fact that they are common knowledge in economics (and usually the benchmark for more complex analysis) is significant. As I’ve argued before, familiarity with economic theory lends itself to a pro-market view, even if a lot of modern work is done pushing the core framework away from this. What’s more, the nuances of this work are often lost in popular translation, as the elegance of the most Panglossian theories proves too tempting when economists speak to the public. Alternative theories which use different starting points for analysis, such as input-output matrices, sectoral balances, or class struggle, would help to combat the deeply ingrained nature of the neoclassical theories.
This issue does not necessarily fit into a narrow ‘government versus market’ policy perspective. Instead, the point is that acknowledging different approaches in economic theory can give us a different way of thinking about policy, illuminating rather than obfuscating debates. A key complaint about economics graduates is that they have overly narrow, abstract tools, so the enemy is not so much any particular approach as it is one-sided thinking. Providing both economics students and professional economists with an awareness of different theories, as well as making economics more politically, historically and ethically engaged, would hopefully at least temper the zeal with which pet policies are recommended, and partially dislodge whatever pedestal economics currently sits on as a rationale for policy.
I have a new post on Piketty on Pieria, pointing out potential problems interpreting his premises and propositions (sorry, it started organically):
I recently wrote about the numerous misconceptions over Thomas Piketty’s use and definition of capital in his book Capital in the 21st Century. Sadly, it seems there are a number of other common, equally important mischaracterisations of Piketty’s model floating around. Here I will consider 5 of the most widespread and show, using direct quotes from Piketty himself, why they are off the mark. The first 3 are simple errors of interpretation with regards to Piketty’s theoretical framework, while the latter 2 are problems with how people have responded to Piketty in general. Although the latter 2 are inevitably more subjective, they are still important for trying to understand and reframe the debate between Piketty and his critics.
Each point gives the common misinterpretation of Piketty’s work, and counters it. For example, one of the most important points (IMO) is this one:
2. ‘Fundamental laws’ of capitalism?
The claim: Piketty’s ‘fundamental laws of capitalism’ are not fundamental at all.
The reality: Although calling them ‘laws’ is misleading, at no point does Piketty claim that his laws are inviolable. They are instead tendencies (with the exception of the first law, which is just an accounting identity) which push capital’s share of income in a certain direction over time, but can be counteracted by any number of things, and only take hold over a long timespan.
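For reference, the two ‘laws’ are simple enough to state in a few lines of code. This is just a restatement of the book’s algebra; the example numbers in the comments are illustrative round figures of mine, not Piketty’s data:

```python
def first_law(r, beta):
    """Piketty's first 'law': capital's share of national income,
    alpha = r * beta, where r is the average rate of return on capital
    and beta is the capital/income ratio. This one is an accounting
    identity, true by definition."""
    return r * beta

def second_law_limit(s, g):
    """Piketty's second 'law': over the long run, beta *tends towards*
    s / g (the savings rate over the growth rate). A tendency, not an
    identity: it only takes hold asymptotically and can be counteracted
    by shocks, policy, wars, and so on."""
    return s / g

# Illustrative numbers: with r = 5% and beta = 6, capital takes 30% of
# income; with s = 12% and g = 2%, beta drifts towards 6 in the long run.
```

Seeing the two side by side makes the asymmetry plain: the first holds at every instant by construction, while the second is a claim about where a slow dynamic process settles.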
Hopefully this will be a useful resource for when people who haven’t read the book (or worse, have read it but either rushed it or misread it) repeat silly canards about it.
This is part 3 in my series on why and how the 2008 financial crisis is relevant to economics. The first instalment discussed why the good times during the boom are no excuse for the bad times during the bust. The second instalment discussed use of the Efficient Markets Hypothesis (EMH) to defend economists’ inability to forecast the movements of financial markets. This instalment discusses the more general proposition that crises are events whose prediction is outside the grasp of anyone, including economists.
Argument #3: “Economists aren’t oracles. Just as seismologists don’t predict earthquakes and meteorologists don’t predict the weather, we can’t be expected to predict recessions.”
This argument initially sounds quite persuasive: the economy is complex, and the future inherently unknowable, so we shouldn’t expect economists to predict the future any better than we’d expect from other analysts of complex systems. However, the argument is actually a straw man of what critics mean when they say economists didn’t foresee the recent crisis. It confuses conditional predictions of the form “if you don’t do something about x, y might happen” with oracle-esque predictions of the form “y is going to happen in December 2003”. Nobody should have expected the details of the crisis – many of which were hidden – to be foreseen, much less exactly which banks would fail and when. Instead, what is expected is for economists to have the key indicators right and know how to deal with them, to be alert to the possibility of crisis at all times – even in seemingly tranquil periods – and to have measures in place to cushion the blow should a crisis occur.
In fact, those who study earthquakes or hurricanes do ‘predict’ them in the above sense: they understand where they’re most likely to occur (for example near fault lines), and with roughly what frequency and magnitude. They also have an idea of how best to combat them: areas which are prone to earthquakes and hurricanes – funding permitting – have dwellings built in such a way that they can withstand such occurrences. They understand why disasters happen, and their models tell us why they cannot be predicted precisely. For example, it is common knowledge that weather forecasts get less accurate the further ahead they look, due to the model’s sensitivity to initial conditions – a point based on complex mathematics but communicated well by meteorologists (not to mention that weather forecasts are improving all the time).
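That sensitivity to initial conditions is easy to demonstrate with a toy model. The logistic map below is the standard textbook example of deterministic chaos – it is not a weather model, and the starting values are arbitrary choices of mine:

```python
def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map x -> r * x * (1 - x), a classic
    one-line example of deterministic chaos (for r = 4)."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two trajectories whose starting points differ by one part in a million:
a = logistic_trajectory(0.400000)
b = logistic_trajectory(0.400001)
gap = [abs(x - y) for x, y in zip(a, b)]
# The gap starts microscopic but roughly doubles each step, until the two
# 'forecasts' are unrelated. A perfect model with imperfect measurement of
# today's state still loses all predictive power beyond some horizon.
```

This is precisely why a meteorologist can be excellent at explaining the weather while being unable to forecast it a month out – and why ‘couldn’t predict the date’ is a poor defence against ‘didn’t understand the mechanism’.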
While there’s been a lot of kerfuffle over exactly who ‘predicted’ the crisis and what that means, the most important point is that those who did warn of a crisis like the one we’re going through identified key mechanisms (debt build up, asset price bubbles, global imbalances) and argued that, unless these processes were combated, we’d be in danger. I appreciate that the ‘stopped clock’ problem really is a problem: there are so many people predicting crises that eventually, one of them will seem to be right. However, this is easily countered by using the same framework to make predictions outside the crisis (predictions in the general sense of the word, not just about the future). For example, Peter Schiff predicted a financial crisis quite a lot like the one we’ve been through, but he also predicted hyperinflation, suggesting that his model is wrong in some way. Conversely, endogenous money models are consistent with both the financial crisis and the subsequent weak effects of monetary stimulus: since money is created as debt, private debt can have major effects on the economy, and since banks do not lend based on reserves, there’s no reason for an increased monetary base to produce inflation.
Finally, while natural disasters are almost entirely exogenous phenomena, the economy is a social system, so we have a degree of control over it, both individually and collectively. It’s perhaps a testament to how the neoclassical approach naturalises the economic system that some economists feel recessions can be compared to natural disasters (not that this would mean they had no responsibility for alleviating their effects). Since economic models are frequently used to inform government policy, it’s quite clear that economists appreciate this point; however, since they often admit they don’t really understand what causes recessions, they are doing the equivalent of sending us up in toy planes. It’s one thing to admit you don’t fully understand the economy; it’s quite another to say this and then recommend ways to manage it. But the relationship between economists and policy is a matter for the next part of the series.
The next instalment will be part 4: masters of the universe?
I have a new post on Pieria, where I finally get round to commenting on Thomas Piketty’s Capital in the 21st Century. My focus is on capital itself, how Piketty defines this and whether or not critics such as Jamie Galbraith are right to attack him for his choice of definition:
An important but perhaps under-discussed aspect of Thomas Piketty’s Capital in the 21st Century is Piketty’s definition of capital itself, and the implications this has for his thesis and its critics. Capital is a notoriously tricky concept to define, and many have taken issue with Piketty’s definition and the framework he builds around it. Typically, the implication is that a more Correct understanding of capital leads to vastly different conclusions to Piketty’s, especially with regards to his conclusions on inequality.
The verdict is that Piketty’s definition of capital is a lot more nuanced than critics make out, and typically (though not always) their critique just reflects a pet peeve of theirs, whether this is human capital, the CCCs or what have you. It’s not that Piketty’s definition is ‘correct’, or that it chimes well with other historical usages of the term (such as Marx’s); it’s merely that Piketty’s own definition is sufficient for showing what he wants to show: the dynamics of inequality under capitalism.
I’m also not really sure about Paul Krugman’s contention that Piketty “relies mainly on conventional, mainstream economics” – sure, he uses some mainstream concepts, but begrudgingly, and only as one angle of support for his broader historical, political and statistical analysis. This analysis stands or falls independently of frameworks like the production function, marginal productivity theory or the Solow Growth Model, even if some economists are eager to interpret it entirely within such frameworks. The fact is that while Piketty’s work cannot be construed as purely ‘heterodox’ or ‘mainstream’, it’s definitely far closer to how economics should look in the future: holistic, empirical, and using mathematics only when needed. Hopefully economists of all stripes can recognise this instead of focusing too much on unimportant details.