I have a new post on Piketty on Pieria, pointing out potential problems interpreting his premises and propositions (sorry, the alliteration came about organically):
I recently wrote about the numerous misconceptions over Thomas Piketty’s use and definition of capital in his book Capital in the 21st Century. Sadly, it seems there are a number of other common, equally important mischaracterisations of Piketty’s model floating around. Here I will consider 5 of the most widespread and show, using direct quotes from Piketty himself, why they are off the mark. The first 3 are simple errors of interpretation with regards to Piketty’s theoretical framework, while the latter 2 are problems with how people have responded to Piketty in general. Although the latter 2 are inevitably more subjective, they are still important for trying to understand and reframe the debate between Piketty and his critics.
Each point gives the common misinterpretation of Piketty’s work, and counters it. For example, one of the most important points (IMO) is this one:
2. ‘Fundamental laws’ of capitalism?
The claim: Piketty’s ‘fundamental laws of capitalism’ are not fundamental at all.
The reality: Although calling them ‘laws’ is misleading, at no point does Piketty claim that his laws are inviolable. They are instead tendencies (with the exception of the first law, which is just an accounting identity) which push capital’s share of income in a certain direction over time, but can be counteracted by any number of things, and only take hold over a long timespan.
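For reference, the two laws can be written compactly in the book's own notation, where α is capital's share of income, r the rate of return on capital, β the capital/income ratio, s the saving rate and g the growth rate:

```latex
% First law: an accounting identity, true by definition
\alpha = r \times \beta

% Second law: a long-run tendency, not an identity
\beta \longrightarrow \frac{s}{g}
```

The second law only holds asymptotically, as particular values of s and g persist over decades, which is precisely why it is a tendency rather than an inviolable rule.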
Hopefully this will be a useful resource for when people who haven’t read the book (or worse, have read it but clearly either rushed or lack reading comprehension) repeat silly canards about it.
This is part 3 in my series on why and how the 2008 financial crisis is relevant to economics. The first instalment discussed why the good times during the boom are no excuse for the bad times during the bust. The second instalment discussed use of the Efficient Markets Hypothesis (EMH) to defend economists’ inability to forecast the movements of financial markets. This instalment discusses the more general proposition that crises are events whose prediction is outside the grasp of anyone, including economists.
Argument #3: “Economists aren’t oracles. Just as seismologists don’t predict earthquakes and meteorologists don’t predict the weather, we can’t be expected to predict recessions.”
This argument initially sounds quite persuasive: the economy is complex, and the future inherently unknowable, so we shouldn’t expect economists to predict the future any better than we’d expect from other analysts of complex systems. However, the argument is actually a straw man of what critics mean when they say economists didn’t foresee the recent crisis. It confuses conditional predictions of the form “if you don’t do something about x, y might happen” with oracle-esque predictions of the form “y is going to happen in December 2003”. Nobody should have expected the details of the crisis – many of which were hidden – to be foreseen, much less a prediction about exactly which banks would fail and when. Instead, what is expected is for economists to have the key indicators right and know how to deal with them, to be alert to the possibility of crisis at all times – even in seemingly tranquil periods – and to have measures in place to cushion the blow should a crisis occur.
In fact, those who study earthquakes or hurricanes do ‘predict’ them in the above sense: they understand where they’re most likely to occur (for example near fault lines), and at roughly what frequency and magnitude. They also have an idea of how best to combat them: areas which are prone to earthquakes and hurricanes – funding permitting – have dwellings built in such a way that they can withstand such occurrences. They understand why disasters happen, and their models tell us why precise prediction is impossible. For example, it is common knowledge that weather forecasts get less accurate the further ahead they look, due to the sensitivity of the model to initial conditions – a point based on complex mathematics but communicated well by meteorologists (not to mention that weather forecasts are improving all the time).
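The point about sensitivity to initial conditions can be illustrated with a toy example. The logistic map below is a standard textbook chaotic system, not a weather model, and the parameter values are purely illustrative:

```python
def logistic(x, r=4.0):
    """One step of the logistic map, a textbook chaotic system."""
    return r * x * (1 - x)

def forecast_gap(x0, eps=1e-10, steps=40):
    """Track how far apart two 'forecasts' drift when their starting
    points differ only by a tiny measurement error eps."""
    a, b = x0, x0 + eps
    gaps = []
    for _ in range(steps):
        a, b = logistic(a), logistic(b)
        gaps.append(abs(a - b))
    return gaps

gaps = forecast_gap(0.3)
```

After a handful of steps the two trajectories are still indistinguishable; by step 40 they bear no relation to one another, even though the underlying model is fully deterministic. This is why a forecaster can understand a system well and still be unable to predict it far ahead.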
While there’s been a lot of kerfuffle over exactly who ‘predicted’ the crisis and what that means, the most important point is that those who did warn of a crisis like the one we’re going through identified key mechanisms (debt build-up, asset price bubbles, global imbalances) and argued that, unless these processes were combated, we’d be in danger. I appreciate that the ‘stopped clock’ problem really is a problem: there are so many people predicting crises that eventually, one of them will seem to be right. However, this is easily countered by using the same framework to make predictions outside the crisis (predictions in the general sense of the word, not just about the future). For example, Peter Schiff predicted a financial crisis quite a lot like the one we’ve been through, but he also predicted hyperinflation, suggesting that his model is wrong in some way. Conversely, endogenous money models are consistent with both the financial crisis and the subsequent weak effects of monetary stimulus: since money is created as debt, private debt can have major effects on the economy, and since banks do not lend based on reserves, there’s no reason for an increased monetary base to produce inflation.
Finally, while natural disasters are almost entirely exogenous phenomena, the economy is a social system, so we have a degree of control over it, both individually and collectively. It’s perhaps a testament to how the neoclassical approach naturalises the economic system that some economists feel recessions can be compared to natural disasters (not that this would mean they had no responsibility for alleviating their effects). Since economic models are frequently used to inform government policy, it’s quite clear that economists appreciate this point; however, since they often admit they don’t really understand what causes recessions, they are doing the equivalent of sending us up in toy planes. It’s one thing to admit you don’t fully understand the economy; it’s quite another to say this, then recommend ways to manage it. But the relationship between economists and policy is a matter for the next part of the series.
The next instalment will be part 4: masters of the universe?
I have a new post on Pieria, where I finally get round to commenting on Thomas Piketty’s Capital in the 21st Century. My focus is on capital itself, how Piketty defines this and whether or not critics such as Jamie Galbraith are right to attack him for his choice of definition:
An important but perhaps under-discussed aspect of Thomas Piketty’s Capital in the 21st Century is Piketty’s definition of capital itself, and the implications this has for his thesis and its critics. Capital is a notoriously tricky concept to define, and many have taken issue with Piketty’s definition and the framework he builds around it. Typically, the implication is that a more Correct understanding of capital leads to vastly different conclusions to Piketty’s, especially with regards to his conclusions on inequality.
The verdict is that Piketty’s definition of capital is a lot more nuanced than critics make out, and typically (though not always) their critique just reflects a pet peeve of theirs, whether this is human capital, the CCCs or what have you. It’s not that Piketty’s definition is ‘correct’, or that it chimes well with other historical usages of the term (such as Marx’s); it’s merely that Piketty’s own definition is sufficient for showing what he wants to show: the dynamics of inequality under capitalism.
I’m also not really sure about Paul Krugman’s contention that Piketty “relies mainly on conventional, mainstream economics” – sure, he uses some mainstream concepts, but begrudgingly, and only as one angle of support for his broader historical, political and statistical analysis. This analysis stands or falls apart from frameworks like the production function, marginal productivity theory or the Solow Growth Model, even if some economists are eager to interpret it entirely within such frameworks. The fact is that while Piketty’s work cannot be construed as purely ‘heterodox’ or ‘mainstream’, it’s definitely far closer to how economics should look in the future: holistic, empirical, and using mathematics only when needed. Hopefully economists of all stripes can recognise this instead of focusing too much on unimportant details.
This is part 2 in my series on why and how the 2008 financial crisis is relevant to economics. The first instalment discussed why the good times during the boom are no excuse for the bad times during the bust. This instalment discusses the use of the Efficient Markets Hypothesis (EMH) to defend economists’ inability to forecast the movements of financial markets, hereafter referred to as the ‘EMH-twist’.
Argument #2: “The EMH claims that crises are unpredictable, so the fact that economists didn’t predict the crisis is not a problem for economics at all.”
As far as I’m aware, this argument was first used by John Cochrane, and it has reappeared multiple times since then: for example, it was more recently referenced by Andrew Lilico, who was sadly echoed by the generally infallible Chris Dillow. The idea is that financial markets process new information faster than any one individual, government or institution could, and so to most people they may seem to behave unpredictably. It follows that economists cannot be expected to understand these sudden movements better than anyone else, so expecting them to foresee market crashes is absurd. As Cochrane puts it, “it makes no sense whatsoever to try to discredit efficient market theory in finance because its followers didn’t see the crash coming”.
However, this logic is completely circular. The mere fact that a theory exists which claims crises are unpredictable does not mean that, if a crisis is not predicted – particularly by the proponents of said theory – this shows the theory is correct. If the EMH had, to the best of our knowledge, been shown to be correct, then the EMH-twist might hold some water, but we must establish this truth separately from the fact its proponents didn’t predict the crisis (David Glasner recently made a similar point about the ubiquitous use of rational expectations in macroeconomics). While Cochrane does claim that the central tenet of the EMH “is probably the best-tested proposition in all the social sciences”, he fails to reference supporting evidence, and in fact goes on to add substantial qualifications to the empirical record of the EMH, admitting that market volatility might happen “because people are prey to bursts of irrational optimism and pessimism”.
It is not necessarily my aim to establish the truth or falsity of the EMH here: it has been discussed extensively elsewhere. However, there are a couple of key tests for whether or not it applies to 2008. The first is whether or not anybody – adherents and detractors of the theory alike – foresaw the crisis. While the EMH claims nobody could, this is clearly wrong: some people in finance made a lot of money; some economists not only called it but had frameworks that explained it well once it happened; quite a few people (even mainstream economists) at least noted the existence of a housing bubble. The EMH can attribute these predictions to simple luck, but now we’re back to circularity: assume the EMH is true, then appeal to it to rationalise any possible market movement. The second test of the EMH, since it depends on new information to trigger volatility, is to ask exactly what new information became available just before the crash. However, the financial instruments key to 2008 were used by investment banks for a good few years prior to the crash, so it’s quite difficult to claim that new information about these suddenly became available in 2007-8. Instead, what happened was a collective realisation that everyone knew very little about the products they’d been trading, resulting in a classic panic.
In fairness, there is an element of truth to the EMH-twist. Financial markets are incredibly difficult to understand, and the argument that economists don’t yet understand them, along with a mea culpa, might be acceptable – there are many things natural scientists still don’t understand, such as dark matter, or what happened ‘before’ the big bang. However, the EMH-twist as used by Cochrane et al is phrased more strongly: it is the assertion that economists can’t and shouldn’t understand the movements of financial markets, simply because the EMH allows them to wash their hands of the task. We wouldn’t accept this kind of attitude from any other field, so I can’t help but feel Cochrane’s claim that “the economist’s job is not to ‘explain’ market fluctuations after the fact” can only be met with: “then what is the economist’s job, exactly?”
The next instalment in the series will be part 3: econoracles.
For critics of mainstream economics, the 2008 financial crisis represents the final nail in the coffin for a paradigm that should have died decades ago. Not only did economists fail to see it coming, they can’t agree on how to get past it and they have yet to produce a model that can understand it fully. On the other hand, economists tend to see things quite differently – in my experience, your average economist will concede that although the crisis is a challenge, it’s a challenge that has limited implications for the field as a whole. Some go even further and argue that it is all but irrelevant, whether due to progress being made in the field or because the crisis represents a fundamentally unforeseeable event in a complex world.
I have been compiling the most common lines used to defend economic theory after the crisis, and will consider each of them in turn in a series of 7 short posts (it was originally going to be one long post, but it got too long). I’ve started with what I consider the weakest argument, with the quality increasing as the series goes on. Hopefully this will be a useful resource to further debate and prevent heterodox and mainstream economists (and the public) talking past each other. Let me note that I do not intend this series as a simple ‘rebuttal’ of every argument (though it does rebut some, especially the weaker ones), but as a cumulative critique. Neither am I accusing all economists of endorsing all of the arguments presented here (especially the weaker ones).
Argument #1: “We did a great job in the boom!”
I’ve seen this argument floating around, and it actually takes two forms. The first, most infamously used by Alan Greenspan – and subsequently mocked by bloggers – is a political defence of boom-bust, or even capitalism itself: the crisis, and others like it, are just noise around a general trend of progression, and we should be thankful for this progression instead of focusing on such minor hiccups. The second form is more of a defence of economic theory: since the theory does a good job of explaining/predicting the boom periods, which apply most of the time, it’s at least partially absolved of failing to ‘predict’ the behaviour of the economy. Both forms of the argument suffer from the same problems.
First, something which is expected to do a certain job – whether it’s an economic system or the economists who study it – is expected to do this job all the time. If an engineer designs a bridge, you don’t expect it to stand up most of the time. If your partner promises to be faithful, you don’t expect them to do so most of the time. If your stock broker promises to make money but loses it after an asset bubble bursts, you won’t be comforted by the fact that they were making money before the bubble burst. And if an economic system, or set of policies, promises to deliver stability, employment and growth, then the fact that it fails to do so every 7 years means that it is not achieving its stated objectives. In other words, the “invisible hand” cannot be acquitted of the charge of failing to do its job by arguing it only fails to do its job every so often.
Second, the argument implies there was no causal link between the boom and the bust, so the stable period can be understood as separate from the unstable period. Yet if the boom and the bust are caused by the same process, then understanding one entails understanding the other. In this case, the same webs of credit which fuelled the boom created enormous problems once the bubble burst and people found their incomes scarce relative to their accumulated debts. Models which failed to spot this process in its first phase inevitably missed (and misdiagnosed) the second phase. As above, the job of macroeconomic models is to understand the economy, which entails understanding it at all times, not just when nothing is going wrong – which is when we need them least.
As a final note, I can’t help but wonder if this argument, even in its general political form, has roots in economic theory. Economic models (such as the Solow Growth Model) often treat the boom as the ‘underlying’ trend, buffeted only by exogenous shocks or slowed/stopped by frictions. A lot of the major macroeconomic frameworks (such as Infinite Horizons or Overlapping Generations models) have two main possibilities: a steady-state equilibrium path, or complete breakdown. In other words, either things are going well or they aren’t – and if they aren’t, it’s usually because of an easily identifiable mechanism, one which constitutes a “notably rare exception” to the underlying mechanics of the model. Such a mentality implies problems, including recessions, are not of major analytical interest, or are at least easily diagnosed and remedied by a well-targeted policy. Subsequently, those versed in economic theory may have trouble envisaging a more complex process, whereby a seemingly tranquil period can contain the seeds of its own demise. This causes a mental separation of the boom and the bust periods, resulting in a failure to deal with either.
The next instalment in the series will be part 2: the EMH-twist
Recent posts by Noah Smith, David Henderson and Daniel Kuehn on the relationship between economics, ‘free markets’ and policy in general got me thinking about libertarians and how accepting they should be of marginalist economics, as well as how open they should be to non-marginalist alternatives. It seems to me there is an unspoken bond between marginalist economics and libertarianism (even Austrianism shares some major features with neoclassical economics), and so there may be a tendency for libertarians to have strong priors against post-Keynesian, Sraffian, Marxist, Behavioural, Ecological and other types of economics that dispute this general framework.
Let me note that I’m not accusing libertarians of being generally hostile to heterodox economics – I’m sure there are some who are and some who aren’t. Instead, I’m just warning against such a possibility, and offering some heterodox ideas to which libertarians might be receptive.
Behavioural/post-Keynesian consumer theory: behavioural economics sometimes elicits rebukes from libertarians, as it seems to imply that ordinary people are not able to make decisions rationally, and therefore that policy makers should help them along their way. Naturally, libertarians object to this idea, questioning the experimental methods of behavioural economics, pointing out that policy makers are themselves imperfect, and so forth. I’m not going to comment on the efficacy of these arguments here – sometimes they are fair, sometimes less so. Instead, what I want to point out is that while some behavioural economics implies a role for activist policy, a model of consumers that differs from the optimising agent does not necessarily render those consumers irrational, and therefore ripe for intervention.
One such example is a version of the mental accounting model, used in post-Keynesian consumer theory, in which consumers organise their budget into categories before making spending decisions. Consumers will not spend money in one category until they have had their needs in a more ‘basic’ or ‘fundamental’ category satisfied, which creates a Maslow-esque hierarchy of spending – starting with necessities such as food & shelter and culminating in yachts & modern art. This means relative price changes do not have as much of an impact on the type of goods bought as implied by the utility maximising model; instead, the amount spent on different types of goods is primarily determined by the consumer’s level of income.
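A minimal sketch of this kind of lexicographic budgeting. The category names and amounts are invented for illustration, not taken from any particular post-Keynesian model:

```python
def allocate(income, categories):
    """Fill each (name, requirement) category in priority order;
    whatever is left over flows to a discretionary 'luxuries'
    residual. Under this rule, relative prices within an
    already-satisfied category don't shift spending between
    categories - the level of income does."""
    spending = {}
    remaining = income
    for name, need in categories:
        spent = min(need, remaining)
        spending[name] = spent
        remaining -= spent
    spending["luxuries"] = remaining
    return spending

hierarchy = [("food", 300), ("housing", 700), ("transport", 150)]
poor = allocate(1000, hierarchy)  # budget exhausted by necessities
rich = allocate(2000, hierarchy)  # same necessities, large residual
```

Doubling income leaves spending on the basic categories unchanged and sends the entire increase to the top of the hierarchy, which is the claim above: the composition of spending is driven primarily by income rather than by relative prices.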
On first inspection, this might seem to imply a tirade against the efficacy of the price system for coordinating preferences and scarcity, as well as a comment on the ‘wastefulness’ of inequality (and perhaps it could be interpreted as such). However, this doesn’t necessarily make the theory generally ‘anti-libertarian’. In fact, one major implication is that placing high taxes on something low in someone’s hierarchy will not have much impact on their spending, and hence ‘sin taxes’ – which are a major expense for the poor – will not reduce their consumption of alcohol and tobacco substantially; instead, these things will simply take up more and more of their income (which is pretty consistent with available evidence). This implies that paternalistic tax policies aimed at the poor will generally fail to achieve their aims.
The Market for Lemons (TML): George Akerlof’s famous paper explored information asymmetry, using used car markets as its primary example. Akerlof was trying to understand what buyers do when they face a product of unknown quality, and argued that since they are unsure, they will only be willing to bid the average expected value of a car in the market. However, if the seller is selling a ‘good’ car, its value will be above this average, so the seller will not sell it at the price the buyer offers. The result is that the best sellers drop out of the market, creating a cumulative process which results in the market unravelling completely.
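Akerlof's unravelling logic can be simulated directly. The numbers here are arbitrary: buyers are assumed to value a car at 1.5× its quality, and sellers at exactly its quality:

```python
import statistics

def lemons_market(qualities, premium=1.5, max_rounds=100):
    """Buyers can't observe quality, so they offer the expected value
    (to them) of the cars still for sale; any seller whose car is
    worth more than the offer withdraws, dragging the average - and
    hence the next offer - down."""
    market = sorted(qualities)
    offer = 0.0
    for _ in range(max_rounds):
        offer = premium * statistics.mean(market)
        remaining = [q for q in market if q <= offer]
        if remaining == market:  # no seller wants to leave: stable
            break
        market = remaining
    return market, offer

market, offer = lemons_market(range(100, 2001, 100))
```

Starting from twenty cars of quality 100–2000, all but the three worst drop out: trade survives only at the lemon end of the market. The empirical puzzle, as argued below, is why real used car markets plainly do not behave like this.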
Though theoretically neat and compelling, this ‘seminal’ example of market failure has always struck me as incredibly weak, for the simple reason that used car markets do not actually fall apart. Why? Maybe people aren’t rational maximisers etc etc (for example, in another nod to behavioural economics, it may be that buyers’ irrational overconfidence leads them to go ahead with a purchase, even if it’s statistically likely they’ll get a ‘lemon’). Ultimately, though, I’d argue the answer is that capitalism – or if you prefer, ‘the market’ – is a network of historically contingent institutions and social interactions, rather than abstract individuals trading in a vacuum where outcomes are mathematically knowable. The reason used car markets work ‘despite’ information asymmetry is due to hard-to-establish trust and norms between buyers and sellers, and due to intermediaries such as Auto Trader, which spring up to help both sides avoid being ripped off. I’ve not seen anyone provide an example of the process TML outlines actually occurring, so I don’t see why it adds to our understanding of markets.
To be fair to Austrians, they have been talking about ‘the market as a social process’ for a long time, and in places have disputed the Lemons Model on similar grounds to the above. Hence they have something in common with old institutionalists, Marxists (to a degree) and perhaps even hard-to-place heterodox economists like Tony Lawson, who argues economics should primarily be a historical, rather than mathematical, subject. To put it another way, while heterodox economists typically advocate a move away from marginalist economics to understand why capitalism doesn’t work, such a move may also be necessary to understand why it does.
Mark up Pricing: Post-Keynesian, Sraffian and Institutionalist economics typically subscribe to the cost-plus theory of prices, which states that businesses set prices at their average cost per unit, plus a mark up. Furthermore, they avoid price changes where possible, preferring to keep their prices stable for long periods of time to yield a target rate of profit, varying quantity rather than price, and keeping spare capacity and stocks so that they can do so. The problem libertarians might have with this is that it implies prices are somewhat arbitrary, do not usually ‘clear’ markets, and do not adjust to the preferences of consumers especially smoothly. However, while these things may be true, they do not mean mark up pricing comes with no benefits.
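A stylised version of target-rate-of-return pricing, one common formalisation of the cost-plus rule described above. All of the numbers are invented for illustration:

```python
def target_return_price(unit_cost, capital, target_rate, normal_output):
    """Set the price once, to recover standard costs plus the target
    rate of profit on capital at *normal* capacity utilisation.
    Demand fluctuations are then met from stocks and spare capacity
    rather than by repricing."""
    total_cost = unit_cost * normal_output
    return (total_cost + target_rate * capital) / normal_output

# e.g. a £10 unit cost, £50,000 of capital, a 10% target return
# and normal output of 1,000 units:
price = target_return_price(10, 50_000, 0.10, 1_000)
```

The resulting price stays put whether this week's demand is 900 units or 1,100; only a lasting change in costs or in the target rate itself triggers a revision, which is the price stickiness the theory predicts.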
In my opinion, one such benefit is stability: I’m glad I can rely on prices only changing every so often, and that if there are a lot of people at the hairdresser’s, the owner doesn’t raise the price to ‘clear’ the market. Furthermore, the fact that firms keep buffer stocks and can adjust quantity instead of price allows them to deal with uncertainty and unexpected demand more easily, making them more adaptable to real world conditions than if they always squeezed every drop out of their existing capacity. I’m not going to pretend post-Keynesian pricing theory doesn’t imply some anti-libertarian policies (particularly with regards to price regulation), but it’s certainly not a one-sided idea, and its policy implications are open to further interpretation.
I generally prefer to refrain from immediately linking everything to policy as I have done above, because, well, there are enough people doing that. However, the examples I’ve given actually help to demonstrate a point about the relationship between economic analysis and policy: theories with premises that seem to imply a certain policy may not once you’ve followed them through to their conclusions. What’s more, the same analysis can seem to imply different policies from different perspectives (at its most extreme, Austrian Business Cycle Theory seems to imply that even a teensy regulation will send capitalism off the rails, which could be interpreted as a damning criticism if you were a leftist). This means calls for pluralism in economics should be embraced by all, even if on the surface some ‘alternatives’ to mainstream economics seem to conflict with one’s world view.
I have a new post on Pieria, following up on mainstream macro and secular stagnation. The beginning is a restatement of my critique of EM/a response to Simon Wren-Lewis, but the main nub of the post is (hopefully) a more constructive effort at macroeconomics, from a heterodox perspective:
There are two major heterodox theories which help to understand both the 2008 crisis and the so-called period of ‘secular stagnation’ before and after it happened: Karl Marx’s Tendency of the Rate of Profit to Fall (TRPF), and Hyman Minsky’s Financial Instability Hypothesis (FIH). I expect that neither of these would qualify as ‘precise’ or ‘rigorous’ enough for mainstream economists – and I’ve no doubt the mere mention of Marx will have some reaching for the Black Book of Communism – but the models are relatively simple, offer an understanding of key mechanisms and also make empirically testable predictions. What’s more, they do not merely isolate abstract mechanisms, but form a general explanation of the trends in the global economy over the past few decades (both individually, but even more so when combined). Marx’s declining RoP serves as a material underpinning for why secular stagnation and financialisation get started, while Minsky’s FIH offers an excellent description of how they evolve.
I have two points that I wanted to add, but thought they would clog up the main post:
First, in my previous post, I referenced Stock-Flow Consistent models as one promising future avenue for fully-fledged macroeconomic modelling, a successor to DSGE. Other candidates might include Agent-Based Modelling, models in econophysics or Steve Keen’s systems dynamics approach. However, let me say that – as far as I’m aware – none of these approaches yet reach the kind of level I’m asking of them. I endorse them on the basis that they have more realistic foundations, and have had fewer intellectual resources poured into them than macroeconomic models, so they warrant further exploration. But for now, I believe macroeconomics should walk before it can run: clearly stated, falsifiable theories, which lean on maths where needed but do not insist on using it no matter what, are better than elaborate, precisely stated theories which are so abstract it’s hard to determine how they are relevant at all, let alone falsify them.
Second, these are just two examples, coloured no doubt by my affiliation with what you might call left-heterodox schools of thought. However, I’m sure Austrian economics is quite compatible with the idea of secular stagnation, since their theory centres around how credit expansion and/or low interest rates cause a misallocation of investment, resulting in unsustainable bubbles. I leave it to those more knowledgeable about Austrian economics than me to explore this in detail.
A frustrating recurrence for critics of ‘mainstream’ economics is the assertion that they are criticising the economics of bygone days: that those phenomena which they assert economists do not consider are, in fact, at the forefront of economics research, and that the critics’ ignorance demonstrates that they are out of touch with modern economics – and therefore not fit to criticise it at all.
Nowhere is this more apparent than with macroeconomics. Macroeconomists are commonly accused of failing to incorporate dynamics in the financial sector such as debt, bubbles and even banks themselves, but while this was true pre-crisis, many contemporary macroeconomic models do attempt to include such things. Renowned economist Thomas Sargent charged that such criticisms “reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished.” So what has it accomplished? One attempt to model the ongoing crisis using modern macro is this recent paper by Gauti Eggertsson & Neil Mehrotra, which tries to understand secular stagnation within a typical ‘overlapping generations’ framework. It’s quite a simple model, deliberately so, but it helps to illustrate the troubles faced by contemporary macroeconomics.
The model has only 3 types of agents: young, middle-aged and old. The young borrow from the middle, who receive an income, some of which they save for old age. Predictably, the model employs all the standard techniques that heterodox economists love to hate, such as utility maximisation and perfect foresight. However, the interesting mechanics here are not in these; instead, what concerns me is the way ‘secular stagnation’ itself is introduced. In the model, the limit to how much young agents are allowed to borrow is exogenously imposed, and deleveraging/a financial crisis begins when this amount falls for unspecified reasons. In other words, in order to analyse deleveraging, Eggertsson & Mehrotra simply assume that it happens, without asking why. As David Beckworth noted on Twitter, this is simply assuming what you want to prove. (They go on to show similar effects can occur due to a fall in population growth or an increase in inequality, but again, these changes are modelled as exogenous).
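For readers who want the mechanics: as I read the paper, the young face a collateral constraint of roughly the following form (this is my paraphrase, not the authors' exact notation):

```latex
(1 + r_t)\, B^{y}_{t} \;\le\; D_t
```

Here B^y_t is the young generation's borrowing, r_t the real interest rate, and D_t the exogenous debt limit. 'Deleveraging' then amounts to a one-off, unexplained fall in D_t.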
It gets worse. Recall that the idea of secular stagnation is, at heart, a story about how over the last few decades we have not been able to create enough demand with ‘real’ investment, and have subsequently relied on speculative bubbles to push demand to an acceptable level. This was certainly the angle from which Larry Summers and subsequent commentators approached the issue. It’s therefore surprising – ridiculous, in fact – that this model of secular stagnation doesn’t include banks, and has only one financial instrument: a risk-less bond that agents use to transfer wealth between generations. What’s more, as the authors state, “no aggregate savings is possible (i.e. there is no capital)”. Yes, you read that right. How on earth can our model understand why there is not enough ‘traditional’ investment (i.e. capital formation), and why we need bubbles to fill that gap, if we can have neither investment nor bubbles?
Naturally, none of these shortcomings stop Eggertsson & Mehrotra from proceeding, and ending the paper in economists’ favourite way…policy prescriptions! Yes, despite the fact that this model is not only unrealistic but quite clearly unfit for purpose on its own terms, and despite the fact that it has yielded no falsifiable predictions, the authors go on to give policy advice about redistribution, monetary and fiscal policy. Considering this paper is incomprehensible to most of the public, one is forced to wonder to whom this policy advice is accountable. Note that I am not implying policymakers are puppets on the strings of macroeconomists, but things like this definitely contribute to debate – after all, secular stagnation was referenced by the Chancellor in the UK Parliament (though admittedly he did reject it). Furthermore, when you have economists with a platform like Paul Krugman endorsing the model, it’s hard to argue that it couldn’t have at least some degree of influence on policymakers.
Now, I don’t want to make general comments solely on the basis of this paper: after all, the authors themselves admit it is only a starting point. However, some of the problems I’ve highlighted here are not uncommon in macro: a small number of agents on whom some rather arbitrary assumptions are imposed to create loosely realistic mechanics, and an unexplained ‘shock’ used to create a crisis. This is true of the earlier, similar paper by Eggertsson & Krugman, which tries to model debt-deflation using two types of agents: ‘patient’ agents, who save, and ‘impatient’ agents, who borrow. Once more, deleveraging begins when the exogenously imposed constraint on the impatient agent’s borrowing falls For Some Reason, and differences in the agents’ respective consumption levels reduce aggregate demand as the debt is paid back. Again, there are no banks, no investment and no real financial sector. Similarly, even the far more sophisticated paper by Markus K. Brunnermeier & Yuliy Sannikov – which actually includes investment and a financial sector – still only has two types of agents, and relies on exogenous shocks to drive the economy away from its steady-state.
Why do so many models seem to share these characteristics? Well, perhaps thanks to the Lucas Critique, macroeconomic models must be built up from optimising agents. Since modelling human behaviour in full is inconceivably complex, mathematical tractability forces economists to make important parameters exogenous, and to limit the number (or number of types) of agents in the model, as well as these agents’ goals and motivations. Complicated utility functions which allow for fairly common properties, like relative status effects or different levels of risk aversion at different incomes, may be possible to explore in isolation, but they are not generalisable to every case, or else the models become impossible to solve, or indeterminate. The result is that a model which tries to explore something like secular stagnation can end up being highly stylised, to the point of missing the most important mechanics altogether. It will also be unable to incorporate other well-known developments from elsewhere in the field.
This is why I’d prefer something like Stock-Flow Consistent models, which focus on accounting relations and flows of funds, to be the norm in macroeconomics. As economists know all too well, all models abstract from some things, and when we are talking about big, systemic problems, it’s not particularly important whether Maria’s level of consumption is satisfying a utility function. What’s important is how money and resources move around: where they come from, and how they are split – on aggregate – between investment, consumption, financial speculation and so forth. This type of methodology can help understand how the financial sector might create bubbles; or why deficits grow and shrink; or how government expenditure impacts investment. What’s more, it will help us understand all of these aspects of the economy at the same time. We will not have an overwhelming number of models, each highlighting one particular mechanic, with no ex ante way of selecting between them, but one or a small number of generalisable models which can account for a large number of important phenomena.
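To give a flavour of what I mean, here is a minimal sketch in the spirit of the simplest stock-flow consistent model in Godley & Lavoie’s textbook (often called ‘model SIM’). The parameter values are my own illustrative choices: the government spends G, taxes income at rate theta, and households consume out of disposable income and accumulated money balances. Every flow comes from somewhere and goes somewhere, so the single stock (money) evolves consistently with the flows.

```python
# Minimal stock-flow consistent sketch ('model SIM' style; numbers illustrative).
# National income: Y = C + G
# Disposable income: YD = (1 - theta) * Y
# Consumption: C = a1 * YD + a2 * H  (out of income and out of wealth)
# Money stock: H accumulates household saving, which mirrors the govt deficit.

def simulate(G=20.0, theta=0.2, a1=0.6, a2=0.4, periods=100):
    H = 0.0  # household money holdings, the only stock in the model
    for _ in range(periods):
        # Solve the within-period system for Y given last period's H:
        Y = (G + a2 * H) / (1 - a1 * (1 - theta))
        YD = (1 - theta) * Y
        C = a1 * YD + a2 * H
        H += YD - C  # saving flows one-for-one into the money stock
    return Y, H

Y, H = simulate()
print(Y, H)  # income converges towards the steady state Y* = G/theta
```

Even this toy version displays the virtue I’m pointing at: the accounting forces you to say where every pound comes from and where it goes, so deficits, saving and money holdings cannot drift apart silently, and the model converges to a determinate steady state without assuming anything about individual optimisation.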
Finally, to return to the opening paragraph, this paper may help to illustrate a lesson for both economists and their critics. The problem is not that economists are not aware of, or never try to model, issue x, y or z. Instead, it’s that when they do consider x, y or z, they do so in an inappropriate way, shoehorning problems into a reductionist, marginalist framework, and likely making some of the most important working parts exogenous. For example, while critics might charge that economists ignore mark-up pricing, the real problem is that when economists do include mark-up pricing, the mark-up is over marginal rather than average cost, which is not what firms actually do. While critics might charge that economists pay insufficient attention to institutions, a more accurate critique is that when economists include institutions, they are generally considered as exogenous costs or constraints, without any two-way interaction between agents and institutions. While it’s unfair to say economists have not done work that relaxes rational expectations, the way they do so still leaves agents pretty damn rational by most people’s standards. And so on.
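The mark-up pricing point is easy to see with a toy calculation (all numbers invented). With fixed costs in the picture, a mark-up over marginal cost and a mark-up over average cost give quite different prices, and only the latter resembles the ‘full-cost’ pricing firms report using:

```python
# Toy mark-up pricing comparison (illustrative numbers only).
fixed_cost = 1000.0
marginal_cost = 5.0    # constant unit variable cost
normal_output = 500.0  # output level used to spread fixed costs
markup = 0.2           # 20% mark-up in both cases

average_cost = marginal_cost + fixed_cost / normal_output  # 5 + 2 = 7.0
price_over_mc = (1 + markup) * marginal_cost               # mark-up on MC
price_over_ac = (1 + markup) * average_cost                # mark-up on AC

print(price_over_mc, price_over_ac)
```

Note that in this example the mark-up over marginal cost (6.0) does not even cover average cost (7.0), so a firm pricing that way would make a loss at its normal output level, which is one intuitive reason actual firms mark up over average cost instead.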
However, the specific examples are not important. It seems increasingly clear that economists’ methodology, while it is at least superficially capable of including everything from behavioural economics to culture to finance, severely limits their ability to engage with certain types of questions. If you want to understand the impact of a small labour market reform, or how auctions work, or design a new market, existing economic theory (and econometrics) is the place to go. On the other hand, if you want to understand development, historical analysis has a lot more to offer than abstract theory. If you want to understand how firms work, you’re better off with survey evidence and case studies (in fairness, economists themselves have been moving some way in this direction with Industrial Organisation, although if you ask me oligopoly theory has many of the same problems as macro) than marginalism. And if you want to understand macroeconomics and finance, you have to abandon the obsession with individual agents and zoom out to look at the bigger picture. Otherwise you’ll just end up with an extremely narrow model that proves little except its own existence.
I’ve recently been re-reading John Maynard Keynes’ The General Theory (TGT), along with some other tweeps, and thought I’d collect up quotes which struck me as particularly insightful. Obviously, there are many such quotes in TGT, some of them quite well-known, so I’ve opted for ones you don’t see reproduced as much, and which those who have not fully read TGT may not have seen before.
As an aside: I don’t know why TGT has such a reputation for being difficult to read. There are surely some difficult sections: chapter 6, the list of points on Say’s Law, the fact that Keynes insists on describing diagrams instead of just bloody drawing them. But the rest is merely a mixture of: well-known economic theories, expressed verbally; passages of (wonderful) intuitive observational prose that even someone with no economics training could understand; and basic concepts and ideas which Keynes introduces (like liquidity preference), some of which may require mulling over but none of which are particularly taxing. My hunch is that those who complain that they can’t understand it simply set out not to understand it in the first place, and are all the poorer for it.
Anyway, onto the quotes. After inquiring on Twitter, I’ve decided to retain the length of the quotes, but I’ve bolded what I see as the absolutely crucial parts.
1. In Chapter 4, in a passage about how to measure depreciation, Keynes speaks about the aggregation of capital and seems to touch on some of the points raised much later on in the Cambridge Capital Controversies:
The difficulty is even greater when, in order to calculate net output, we try to measure the net addition to capital equipment; for we have to find some basis for a quantitative comparison between the new items of equipment produced during the period and the old items which have perished by wastage. In order to arrive at the net National Dividend, Professor Pigou deducts such obsolescence, etc., “as may fairly be called ‘normal’; and the practical test of normality is that the depletion is sufficiently regular to be foreseen, if not in detail, at least in the large.” But, since this deduction is not a deduction in terms of money, he is involved in assuming that there can be a change in physical quantity, although there has been no physical change; i.e. he is covertly introducing changes in value. Moreover, he is unable to devise any satisfactory formula to evaluate new equipment against old when, owing to changes in technique, the two are not identical. I believe that the concept at which Professor Pigou is aiming is the right and appropriate concept for economic analysis. But, until a satisfactory system of units has been adopted, its precise definition is an impossible task. The problem of comparing one real output with another and of then calculating net output by setting off new items of equipment against the wastage of old items presents conundrums which permit, one can confidently say, of no solution.
Clearly, these arguments about capital had been floating around for some time before they came to a head in the 1950s/60s – in Chapter 11, Keynes notes that even Alfred Marshall was aware of them. Then, in Chapter 14, Keynes explicitly states the point that you cannot measure the ‘productivity’ of capital independent of its price:
Nor are those theories more successful which attempt to make the rate of interest depend on “the marginal efficiency of capital”. It is true that in equilibrium the rate of interest will be equal to the marginal efficiency of capital, since it will be profitable to increase (or decrease) the current scale of investment until the point of equality has been reached. But to make this into a theory of the rate of interest or to derive the rate of interest from it involves a circular argument, as Marshall discovered after he had got half-way into giving an account of the rate of interest along these lines. For the “marginal efficiency of capital” partly depends on the scale of current investment, and we must already know the rate of interest before we can calculate what this scale will be. The significant conclusion is that the output of new investment will be pushed to the point at which the marginal efficiency of capital becomes equal to the rate of interest; and what the schedule of the marginal efficiency of capital tells us, is, not what the rate of interest is, but the point to which the output of new investment will be pushed, given the rate of interest.
Clearly, this was part of Keynes’ reason for formulating a theory of the rate of interest independent of considerations about productivity, time-preference and so forth.
2. In Chapter 6, Keynes argues that saving and investment are necessarily equal as a matter of accounting, rather than being brought into line by some market mechanism:
The equivalence between the quantity of saving and the quantity of investment emerges from the bilateral character of the transactions between the producer on the one hand and, on the other hand, the consumer or the purchaser of capital equipment. Income is created by the value in excess of user cost which the producer obtains for the output he has sold; but the whole of this output must obviously have been sold either to a consumer or to another entrepreneur; and each entrepreneur’s current investment is equal to the excess of the equipment which he has purchased from other entrepreneurs over his own user cost. Hence, in the aggregate the excess of income over consumption, which we call saving, cannot differ from the addition to capital equipment which we call investment. And similarly with net saving and net investment. Saving, in fact, is a mere residual. The decisions to consume and the decisions to invest between them determine incomes. Assuming that the decisions to invest become effective, they must in doing so either curtail consumption or expand income. Thus the act of investment in itself cannot help causing the residual or margin, which we call saving, to increase by a corresponding amount.
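The bookkeeping in this passage compresses into the familiar closed-economy accounting identities (my simplification, ignoring user cost and government):

```latex
\begin{aligned}
Y &\equiv C + I && \text{(all output is sold for consumption or investment)} \\
S &\equiv Y - C && \text{(saving is income not consumed)} \\
\therefore\; S &\equiv I && \text{(saving is a residual, equal to investment by definition)}
\end{aligned}
```

Nothing here requires an interest rate to equilibrate saving and investment; the equality holds identically, which is exactly why Keynes calls saving “a mere residual”.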
3. In Chapter 7, Keynes offers an argument against the Hayekian Natural Rate of Interest. This is not a comprehensive critique, but it sums up my thoughts on ABCT quite adequately: the naturalistic fallacy, along with implicit appeals to neoclassical equilibrium concepts, lurk in the background and leave some crucial points vague or undefined:
Thus “forced saving” has no meaning until we have specified some standard rate of saving. If we select (as might be reasonable) the rate of saving which corresponds to an established state of full employment, the above definition would become: “Forced saving is the excess of actual saving over what would be saved if there were full employment in a position of long-period equilibrium”. This definition would make good sense, but a sense in which a forced excess of saving would be a very rare and a very unstable phenomenon, and a forced deficiency of saving the usual state of affairs. Professor Hayek’s interesting “Note on the Development of the Doctrine of Forced Saving” shows that this was in fact the original meaning of the term. “Forced saving” or “forced frugality” was, in the first instance, a conception of Bentham’s; and Bentham expressly stated that he had in mind the consequences of an increase in the quantity of money (relatively to the quantity of things vendible for money) in circumstances of “all hands being employed and employed in the most advantageous manner”. In such circumstances, Bentham points out, real income cannot be increased, and, consequently, additional investment, taking place as a result of the transition, involves forced frugality “at the expense of national comfort and national justice”. All the nineteenth-century writers who dealt with this matter had virtually the same idea in mind. But an attempt to extend this perfectly clear notion to conditions of less than full employment involves difficulties.
4. In the excellent Chapter 19, in which Keynes refutes the idea that sticky wages are responsible for recessions, he concludes a section by sarcastically noting that if sticky wages were the cause of recessions, we should want “monetary management by the trade unions”:
If, indeed, labour were always in a position to take action (and were to do so), whenever there was less than full employment, to reduce its money demands by concerted action to whatever point was required to make money so abundant relatively to the wage-unit that the rate of interest would fall to a level compatible with full employment, we should, in effect, have monetary management by the Trade Unions, aimed at full employment, instead of by the banking system.
What say you, libertarians?
5. At the very beginning of Chapter 21, Keynes notes the tension between monetarist reasoning based on the Quantity Theory of Money and conventional microeconomic theory. The former assumes a smooth, mechanistic relationship between the stock of money and the price level, but the latter teaches us that prices depend on microeconomic ‘fundamentals’ such as preferences and technology:
So long as economists are concerned with what is called the Theory of Value, they have been accustomed to teach that prices are governed by the conditions of supply and demand; and, in particular, changes in marginal cost and the elasticity of short-period supply have played a prominent part. But when they pass in volume II, or more often in a separate treatise, to the Theory of Money and Prices, we hear no more of these homely but intelligible concepts and move into a world where prices are governed by the quantity of money, by its income-velocity, by the velocity of circulation relatively to the volume of transactions, by hoarding, by forced saving, by inflation and deflation et hoc genus omne; and little or no attempt is made to relate these vaguer phrases to our former notions of the elasticities of supply and demand.
Keynes then goes on to anticipate Joan Robinson’s simple but (IMO) rather damning critique of the QToM and the velocity of money as a concept:
But the “income-velocity of money” is, in itself, merely a name which explains nothing. There is no reason to expect that it will be constant. For it depends, as the foregoing discussion has shown, on many complex and variable factors. The use of this term obscures, I think, the real character of the causation, and has led to nothing but confusion.
So, there we have it: in a relatively small set of quotes, Keynes has forcefully critiqued neoclassical theories of capital and the rate of interest, the Quantity Theory of Money, the Natural Rate of Interest, the idea that sticky wages are responsible for recessions, and the idea that savings create investment. Then there’s the rest of the book, where he sort of invents macroeconomics (I know, I know – but he does bring it together far more effectively than anyone else before, and adds a lot along the way). There’s a reason books like this catch on.