Archive for April, 2012
I’ve touched briefly before on how behavioural economics makes the central libertarian mantra of being ‘free to choose’ completely incoherent. Libertarians tend to have a difficult time grasping this, responding with things like ‘so people aren’t rational; they’re still the best judges of their own decisions.’ My point here is not necessarily that people are not the best judges of their own decisions, but that the idea of freedom of choice – as interpreted by libertarians – is nonsensical once you start from a behavioural standpoint.
The problem is that neoclassical economics, by modelling people as rational utility maximisers, lends itself to a certain way of thinking about government intervention. For if you propose intervention on the grounds that people are not rational utility maximisers, you are told that you are treating people as if they are stupid. Of course, this isn’t the case – designing policy as if people are rational utility maximisers is no different ethically to designing it as if they rely on various heuristics and suffer cognitive biases.
This ‘treating people as if they are stupid’ mentality highlights a problem with neoclassical choice modelling: behaviour is generally considered either ‘rational’ or ‘irrational’. But this isn’t a particularly helpful way to think about human action – as Daniel Kuehn says, heuristics are not really ‘irrational’; they simply save time, and as this video emphasises, they often produce better results than homo economicus-esque calculation. So the line between rationality and irrationality becomes blurred.
For an example of how this flawed thinking pervades libertarian arguments, consider the case of excessive choice. It is well documented that people can be overwhelmed by too much choice, and will choose to put off the decision or just abandon trying altogether. So is somebody who is so inundated with choice that they don’t know what to do ‘free to choose’? Well, not really – their liberty to make their own decisions is hamstrung.
Another example is the case of Nudge. The central point of this book is that people’s decisions are always pushed in a certain direction – by advertising and packaging, by what the easiest or default choice is, by the way the choice is framed, or by any number of other things. This completely undermines the idea of being ‘free to choose’: if people’s choices are rarely or never made neutrally, then deliberately shaping a choice is no more ‘deciding for them’ than the choice environment had already ‘decided’ for them. The best conclusion is to push their choices in a ‘good’ direction (e.g. towards healthy food rather than junk). Whether to nudge people is not really a decision – they are almost always nudged anyway. The real question is which direction they are nudged in.
It must also be emphasised that choices do not come out of nowhere – they are generally presented with a flurry of bright colours and offers from profit-seeking companies. These things do influence us, as much as we hate to admit it, so to work from the premise that the state is the only one that can exercise power and influence in this area is to miss the point.
The fact is that the way both neoclassical economists and libertarians think about choice is fundamentally flawed – in the case of neoclassicism, it cannot be remedied with ‘utility maximisation plus a couple of constraints’; in the case of libertarianism it cannot be remedied by saying ‘so what if people are irrational? They should be allowed to be irrational.’ Both are superficial remedies for a fundamentally flawed epistemological starting point for human action.
The doublethink with which mainstream economists are able to ‘embrace’ new economic thinking and simultaneously shout down any attempt at, well, new economic thinking, is quite incredible. For example, see the infamous Krugman/Keen debate, where Krugman behaved startlingly similarly to his usual opponents on the right, despite his own essay readily acknowledging the failure of economics in the crisis. Mark Thoma also had a similar reaction to Keen, but he then went on to write an essay praising new economic thinking.
In fairness to Thoma, his essay appears to acknowledge that economists like him are set in their ways and cannot embrace change on the scale required, but of course this in itself isn’t encouraging. Why can’t they change their ways? Why carry on if you strongly suspect your paradigm is flawed? I am reminded of a Richard Dawkins documentary, where he spoke of a scientist who had been working on a theory for the best part of his career. A new scientist arrived at his place of work and falsified the theory – the older scientist, however, thanked him. This type of attitude would be helpful in economics.
Watching economists react to Keen’s work, and the work of others, I feel there are a few major barriers to economists accepting new economic thinking; once these are addressed, they will hopefully find it easier.
Identifying Neoclassical Economics
Neoclassicism is seen by some economists as a non-existent school of thought, as a swear word used by their opponents, or as a long-outdated paradigm which has been abandoned in favour of sticky wage/price models and other developments in the DSGE framework. So the first step towards engaging sympathetic neoclassicals is convincing them that their school exists.
I have commented on this briefly before, but I think that Christian Arnsperger & Yanis Varoufakis’ essay on this subject is excellent. It identifies neoclassicism as a methodology, rather than via the ‘rational, self-maximising, perfectly informed’ agent criticism that economists are so able to brush off with appeals to higher-level work. That neoclassical economics uses a methodological core of individualism, instrumentalism (preference satisfaction) and equilibration is hard to dispute, and so this is an important starting point for different schools to engage each other in both directions.
The Lucas Critique
There are several problems with the Lucas Critique and the way it has been applied:

(1) It has given grounds for economists to revert to their old mantra of ‘that’s OK in practice, but does it work in theory?‘ Krugman’s first post on Keen mentions that there is ‘a lot of implicit theorising’ in his paper – in other words, there are no microfoundations. Krugman uses this as grounds to dismiss the overwhelming empirical relationship between private debt and other economic variables.
(2) In practice, application of the Lucas Critique has basically amounted to the use of rational expectations and representative agents, rather than any deep change in economic modelling. There is a great discussion of the flaws in these approaches here, but that’s not necessarily relevant – what matters is that the LC is only applied sparingly.
(3) There is no empirical evidence to support its application – that is, it doesn’t appear to be useful when developing new theories or policies.
(4) The most important criticism of Lucas’ paper is that he suggests we model based on the ‘deep parameters’ of human behaviour. As anyone with even a passing familiarity with anthropology and history should know, these parameters simply don’t exist. You can find people throughout history behaving in any number of ways, both as societies and as individuals – even the most basic instincts, such as the need for sustenance and reproduction, have been overcome by environment (abstinent monks, Lent, self-sacrifice). The fact is that, for economists, ‘deep parameters of human behaviour’ seems to mean nothing more than the individualist, instrumentalist core outlined by Arnsperger & Varoufakis. This is as vulnerable to the Lucas Critique as any other theory or methodology.
So what are we left with? In essence, a suggestion that using a model for policy might have unintended consequences. This is true, and unfortunate, but it’s the reality of the social sciences, and has been known for a long time.
The ‘It Doesn’t Matter’ Mentality
Economists sometimes acknowledge that a model is flawed, but assert that the real world still behaves as if the model were true. This mentality can be found in one of my textbooks:
…[the student] rightly assumes that few firms can have any detailed knowledge of marginal revenue or marginal cost. However, it should be remembered that marginal analysis does not pretend to describe how firms maximise profits or revenue. It simply tells us what the output and price must be if they do succeed in maximising these items, whether by luck or judgement.
And also in Nick Rowe, defending exogenous money on the grounds that:
So the central bank must stop them creating loans and deposits out of thin air. The central bank will raise its rate of interest by whatever it takes to stop banks creating loans and deposits out of thin air. It is exactly as if the banks were reserve-constrained and couldn’t create money out of thin air.
This is one of those positions that I find it hard to articulate a response to. Of course it matters that we get the mechanics of a system right, otherwise we simply don’t have a model of the system – we’ve got something else! This is what I’ve been trying to get at with useful assumptions – useful ones simply eliminate a complication, whereas ones that are essentially hypotheses about how agents behave can be falsified in their own right. Economists seem to enjoy clinging to the ‘hypothesis’ variety of assumption, and this needs to stop.
There are some other important traps economists fall into – three of which I mentioned in my post on how to unlearn economics. From the perspective of accepting new economic theories, however, these three (and maybe the third one in the aforementioned post) are the most important – if they are not addressed, heterodox and mainstream economists will continue to talk past each other.
If I have any MMT readers, I’d appreciate it if they could answer this brief query:
MMT suggests that governments are not revenue constrained; they are only constrained by inflation, and until that becomes a problem they can print however much money they want to fund expenditures. Once inflation does become a problem, they can tax away the excess income. This is correct, no?
So here is my problem: if governments are inflation constrained, and they reduce inflation via taxation, isn’t this ultimately the same thing as if they were revenue constrained? Why not eliminate the middle man and simply raise taxation to fund expenditures?
I’ve developed an aversion to the use of the word ‘fallacy’ in economics, as it seems to be little more than a tool people beat others over the head with when they don’t want to engage in critical thinking. Often, the so called ‘fallacies’ that are trotted out in economics are used inappropriately and the stories told to justify them require exploration.
Here I present the three worst offenders – note that I don’t disagree with all of them entirely, but just wish to highlight that the story is often not as simple as it seems and cannot be captured by simply shouting down your opponent with the word ‘fallacy’.
The Broken Window
In the essay ‘What Is Seen And What Is Not Seen‘, a 19th Century economist named Frederic Bastiat wrote a story about a boy who breaks a shopkeeper’s window. In replacing it, the shopkeeper gives money to the glassman, and the town observes that the broken window provided a boost to the local economy. However, Bastiat emphasises that this fallacy ignores the unseen fact that, had the window not been broken, the shopkeeper would have bought a new pair of shoes. Hence, there is no net gain for the economy.
Don’t get me wrong, it’s an important essay with an important point: if you look at only the benefits of government programs, you miss the hidden costs – where the tax money would otherwise have been spent, money unspent due to tariffs, and so forth.
However, Bastiat makes two hidden assumptions:
(1) All money that is spent would have been spent elsewhere – call this Say’s Law.
(2) That the replacement for the proverbial broken window is not better in any way.
The problem with (1) can be demonstrated by supposing that the shopkeeper was, in fact, not going to spend his money at all – in that case there would have been a boost to the economy. Not the best way to boost income, perhaps, but an income boost nonetheless. If the economy is not at full employment then spending more money does not require that you displace existing spending – to argue the opposite is to argue that private sector spending cannot increase employment either.
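The counting argument behind (1) can be made concrete with a toy calculation (a deliberately simplified sketch; all figures are hypothetical):

```python
# Toy illustration of Bastiat's hidden assumption (1).
# All figures are hypothetical.

WINDOW_COST = 100  # what the shopkeeper pays the glazier


def total_spending(window_broken: bool, would_have_spent: bool) -> int:
    """Extra spending injected into the economy in each scenario."""
    if window_broken:
        return WINDOW_COST  # the glazier gets paid
    # Window intact: the money goes on shoes, or is hoarded.
    return WINDOW_COST if would_have_spent else 0


# Bastiat's case: the money would have been spent on shoes anyway,
# so breaking the window yields no net gain in spending.
assert total_spending(True, True) == total_spending(False, True)

# But if the shopkeeper would have hoarded the money, breaking the
# window does raise spending (still a poor way to achieve this, as
# it destroys a perfectly good window).
assert total_spending(True, False) > total_spending(False, False)
```

The point of the sketch is only that Bastiat’s ‘no net gain’ conclusion follows from the assumption that the money would have been spent regardless, not from the broken window itself.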
The problem with (2) can be illustrated by supposing that the shopkeeper’s window had been in a poor state to start with. In that case, replacing the window would have had a degree of benefit to the shopkeeper greater than in the simpler version of the story. After all, proponents of the broken window fallacy often speak approvingly of creative destruction – replacing old capital and ideas with new, better capital and ideas. A similar logic applies – again, going around breaking things that seem worn out isn’t a suggested strategy for development, but it’s not as clear cut as it first seems. (‘Broken windows’ can also become a rationalisation for renovation – ‘it’s about time we redid the shop front anyway’, etc.)
Lump of Labour
This ‘fallacy’, as with the rest of labour economics, was born as a reaction to working class political movements in the late 19th century. It is basically an argument against shorter working hours – the claim is that you can’t simply ‘split up’ existing working hours, as the productivity of one sector of the economy has an impact elsewhere. Therefore, working people don’t understand econ101, etc.
The vacuousness of the fallacy can be seen in defences of it, which seem to insist that higher productivity implies more income and more work. Take, for example, Paul Krugman’s hot dog story:
But wait–what entitles me to assume that consumer demand will rise enough to absorb all the additional production? One good answer is: Why not?
Great. I’ve got a better answer: Why? Why not reduce working hours? What about limited natural resources? What if people don’t want twice as many hot dogs?
The blogger named ‘Sandwichman‘ appears to have made it his task to demolish this supposed ‘fallacy’, and has a website devoted solely to this cause. Sandwichman’s main point is that Lump of Labour proponents often mischaracterise their opponents as assuming there is a ‘fixed’ amount of work to be done in an economy, when of course they do no such thing. Discussion over the amount of work to be done is irrelevant to discussion of how that work should be distributed.
Correlation-Causation/Post Hoc Ergo Propter Hoc
Confusing correlation for causation, of course, is a fallacy, but this does not justify the mirror-image delusion that correlation is meaningless. Often an observed correlation in the data has an implicit, intuitive causal link, such as alcoholism and recessions, corporate savings and unemployment, or, in the case of austerity, spending cuts/tax increases and a stagnating economy. (I have seen some on the right dismiss criticisms of austerity producing low growth as post hoc ergo propter hoc; of course this is ridiculous – we know exactly why austerity produces low growth.)
If, as in the case of Steve Keen’s work on private debt, you have a clear theoretical link between two things, strong correlation, and the numbers changing in the right order (a decline in private debt acceleration portends lower growth), these things cannot be dismissed on grounds of correlation–causation confusion or post hoc ergo propter hoc.
Of course, there are many more – even the well established logical fallacies are prone to misuse and misconception (if you think about it, appeal to authority is historically quite a large component of scientific progress). Specifically, economic fallacies are generally an attempt to look for easy answers in a complex field, when the real story is often far more nuanced. It is important not to fall into this trap of lazy argumentation that often pervades the internet.
Steve Randy Waldman has a good post on Interfluidity in which he attempts to form a synthesis between New Keynesians (NKs), post-Keynesians (PKs) and Market Monetarists (MMs).
Waldman actually exposes a bit of a fault with post-Keynesianism: what exactly are the policy prescriptions? Or, more specifically: how should monetary policy be conducted? PKs generally want to channel banks’ lending to the right people; we’re generally in favour of fiscal stimulus during downturns; Steve Keen has a policy prescription for redefining shares whose implications I’m not entirely sure I understand. But it would be hard to point to a shared stance on monetary policy.
Waldman fills this gap by assuming that PKs don’t have a problem with NGDP targeting in principle, though they may doubt its practicality at the zero bound. However, I have spoken before about how NGDP targeting ignores the role of interest rates in determining not only the level but also the type of investment that takes place – instead assuming that macroeconomic policy can only reliably influence nominal variables. Scott Sumner, in fact, appears to believe that under NGDP targeting, the interest rate would be irrelevant.
Sumner is actually incredibly vague about why NGDP is the correct indicator for monetary policy: he has previously refused to discuss transmission mechanisms, and appears to think that the general public understands what the monetary base is, a position that goes hand in hand with his emphasis on expectations. In fact, I’d go so far as to say the entire thing is becoming circular: the CB controls NGDP, so it should target NGDP, and we will judge this by the level of NGDP.
PKs & MMTers, contrary to MMs & NKs, view interest rate policy as exogenous, and the interest rate as the only monetary variable that the CB can reliably control. In fact, as Edward Harrison notes, this is probably the major difference between exogenous and endogenous money.
The endogenous view lends itself to the views of Keynes himself, who saw low rates as the appropriate monetary stance. In this view, interest rates are a cost of investment and so if they increase it will have two effects:
(1) Net investment will decrease;
(2) Businesses that do invest will be forced to seek higher returns and therefore take more risk. This can lead to speculative bubbles.
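The two effects above can be sketched with a toy model (entirely hypothetical projects and numbers): firms undertake only those projects whose expected return beats the cost of borrowing, so a higher rate both cuts the number of projects funded and raises the average riskiness of the ones that survive.

```python
# Toy sketch of the two effects of a higher interest rate on
# investment. Projects and numbers are entirely hypothetical:
# each project has an expected return and a risk score.

projects = [
    {"name": "A", "expected_return": 0.03, "risk": 1},
    {"name": "B", "expected_return": 0.05, "risk": 2},
    {"name": "C", "expected_return": 0.08, "risk": 4},
    {"name": "D", "expected_return": 0.12, "risk": 8},
]


def funded(rate):
    """Projects undertaken: those whose return beats the cost of borrowing."""
    return [p for p in projects if p["expected_return"] > rate]


def average_risk(ps):
    return sum(p["risk"] for p in ps) / len(ps)


low, high = funded(0.02), funded(0.06)

# (1) Net investment falls when the rate rises...
assert len(high) < len(low)
# (2) ...and the projects that still go ahead are riskier on average.
assert average_risk(high) > average_risk(low)
```

This is only an illustration of the logic of the argument, not a claim about how actual investment decisions are made – the next paragraph notes that real businesses look at long-term rather than short-term rates.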
However, evidence suggests that businesses making investment decisions do not look at short term interest rates – both because they are prone to changes, and because they are too, well, short term. The Radcliffe Report, for example, emphasises that business decisions are far more heavily influenced by long term rates of interest, and also by expectations over the future path of the long term rate of interest. Thus, successful monetary policy lies in a credible commitment to, and execution of, permanently low long term rates. This also entails that monetary authorities have discretion over their jurisdiction, so capital controls would be a requirement.
As PKs & MMTers generally reject the IS/LM approach to the interest rate, generally sympathise with the views of Keynes himself and generally disregard ‘libertarian’ considerations when discussing international stabilisation, I do not see much of a reason that they should object to such a policy prescription.
As it happens, an essay by Christian Arnsperger & Yanis Varoufakis may provide us with the answer. In this essay, Arnsperger and Varoufakis attempt to define neoclassical methodology, hoping to nullify its lizard-like ability to dispose of certain parts in order to evade criticism. Personally, I think they hit the nail on the head.
They provide three axioms which define neoclassical methodology:
(1) Methodological individualism – the economy is modeled on the basis of the behaviour of individual agents.
(2) Methodological instrumentalism – individuals act in accordance with certain preference rankings, to attain some end goal that they deem desirable.
(3) Methodological equilibration – given the above two, macroeconomics asks what will happen if we assume equilibrium. Note that this doesn’t necessarily posit that the system will end up in equilibrium (although that is often the case), but rather seeks to find out what will happen if we use equilibrium as an epistemological starting point.
I will not criticise the axioms here, but suffice to say that this gets to the crux of what the arguments have been about. This methodological core underlies everything from demand-supply to game theory to DSGE.
Much like the assumption of circular orbits in pre-Keplerian astronomy, the methodological core of neoclassicism is at all times protected as the paradigm develops. Most neoclassical economists don’t think twice about the axioms, and this helps them deny that they are, in fact, ‘neoclassical’, seeing it only as a buzzword used by their enemies.
In fact, neoclassical economics has a habit of preserving not only these three axioms, but also many other assumptions it introduces. For example, take the case of Krugman and Eggertsson versus Keen. Keen models the banks as explicit agents and creators of purchasing power, whilst Krugman and Eggertsson preserve the ‘banks as intermediaries between savers and borrowers’ line, abstracting them out of the economy and bolting on an ad hoc role for private debt.
You can also see these axioms in criticisms of Keen’s models. Krugman says that there is ‘a lot of implicit theorising’ going on in Keen’s paper. Perhaps this is true and maybe Keen needs to clarify his epistemology, but what Krugman really means – unknowingly, perhaps – is that Keen doesn’t start from the three axioms: he isn’t looking at individual behaviour but rather at the flow of money between agents; nobody is acting in accordance with attaining certain preferences; equilibrium is not used as a starting point. From my experience, I strongly suspect that most mainstream economists feel a similar scepticism when reading Keen’s paper.
I believe that in order for the debate to move forward, these three axioms – and others that are protected by the ad hoc style of DSGE – must be focused on and criticised. Otherwise critics will never land a convincing blow, and will be forever accused of straw manning.
* As a note, Austrians, this is why I link you with neoclassicism. The first two certainly define all of Austrian economics, and, at least in the case of Hayek, you also use equilibrium as an epistemological starting point.
Continuing the series on just how poor debate in the political arena has become, it’s time to look at the discussion of marginal tax rates, and by extension, the Laffer Curve and ‘trickle down‘ economics. These two represent the main (technocratic) arguments against higher marginal tax rates, so I will deal with them in turn.
The Laffer Curve
So goes the argument: don’t raise taxes on the rich, they will work fewer hours, move abroad, create less wealth, and so forth. It’s usually accompanied by logical ‘gotchas’ like this:
It’s intuitively appealing to people like us of course and as with the Laffer Curve at extremes it’s clearly and obviously true. Zero tax means zero government and without at the very least some form of defense, police and a criminal justice system there’s not going to be much economic growth. When the government takes and spends all of the economy there’s not likely to be much either.
Of course, even this isn’t true. The USSR effectively had 100% tax rates, give or take a few incentive schemes, and managed to achieve positive growth rates throughout its existence. At the other end, whilst I’m not much of an anarchist, no government doesn’t necessarily lead to no growth whatsoever, as demonstrated by various stateless and taxless communities across the ages.
Why? I can think of a few reasons:
– Economic rents are pervasive – as Schumpeter said, they are the ultimate aim of any capitalist firm. Taxing this unproductive activity discourages it and therefore encourages productive activity, meaning higher taxes can increase income. Michael Hudson’s video on this is recommended.
– As Kimel notes, if you reinvest your income it is deducted from your taxable income, which means that higher marginal rates will encourage reinvestment.
– If a rich person can avoid/evade tax, they will – doesn’t matter how high the rate is.
– Corporations and rich people aren’t as mobile as they’d have us believe, as there are legal and personal barriers to simply upping and moving abroad. As Ha-Joon Chang says, ‘Capital has a nationality’.
Whatever the reasons, the evidence is fairly conclusive: the Laffer Curve is a useless idea; there are so many conflicting factors that it probably resembles a rollercoaster at any one time, and something like a chaos pendulum over time.
Trickle Down Economics

‘Rich people create jobs’ is a mantra that is often repeated – sometimes, for obvious reasons, by rich people themselves, but even more sadly by useful idiots. Perhaps they revere the rich; possibly they expect to join them one day (of course, probability suggests that they won’t).
The basic fact is that jobs are created by demand. Ask any employer – a firm hires people if there is enough demand for its products to warrant said hiring. To the extent that taxes and regulations have a negative effect, it tends to show up as higher prices. What’s more, this obviously occurs more in firms that can afford to increase prices; that is, those with market power – the richest ones. In other words, higher taxes have the least impact on the hiring practices of the rich.
What’s more, when there is demand, it is small businesses that create the majority of jobs – a study by the University of Nottingham estimates their share at 65%. So the argument holds no water by any metric.
In summary, the Laffer Curve simply doesn’t exist in its napkin form, and trickle down economics is completely incoherent and at odds with all the evidence. No wonder they’re all that Republicans can talk about.
Paul Krugman and Steve Keen have been debating endogenous versus exogenous money – as well as some other issues – for the past few days. The debate appears to have drawn to a close, so here I offer a summary for those who can’t see the wood for the trees.
1. Krugman posts a critique of Keen’s paper, modelling banks as mere intermediaries between savers and borrowers.

2. Keen responds, noting that banks do not require savings before they make a loan, as they can create loans and deposits simultaneously through double-entry bookkeeping. The CB has to provide the reserves required for whatever loans banks do make in the short term, or else the economy will grind to a halt.
3. Nick Rowe weighs in, with a comment thread well worth reading. He sides with Krugman overall but appears to agree with at least some of what endogenous money proponents are claiming, including the double-entry accounting view of money creation.
4. Krugman, however, continues to deny this, claiming that CBs have monetary control, and citing a paper by James Tobin to support his point of view. He fails to note that, not only did nobody ever assert that the CB has no control whatsoever over monetary activity, but Tobin also wrote a paper called ‘Commercial Banks as Creators of ‘Money’‘, in which he agrees with the view that Krugman opposes.
5. Scott Fullwiler schools Krugman on how banking actually works in the real world.
6. Krugman makes a post where, through a sleight of hand, he seems to acknowledge that banks can create money, but goes on to straw man endogenous money proponents by saying that they claim there is no limit to this process. Of course, that’s not true – the only claim is that reserves are not the limit, the actual limitations being capital, risk and interest rates.
7. Krugman, unfortunately, goes on to make another post, one in which he effectively asserts that the Central Bank has complete control of the money supply, something completely contradictory to what he said before and blatantly falsified by the failure of monetarism in the 80s.
8. Krugman and Rowe both parade their ignorance by making it clear they have not read Keen’s latest post properly, and fall straight into his characterisation of DSGE. Keen responds. Krugman says the debate is over.
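The double-entry point in step 2 can be sketched as a minimal balance-sheet operation (a toy illustration with hypothetical numbers, not a model of any actual banking system):

```python
# Minimal double-entry sketch of a bank creating a loan and a
# deposit simultaneously, without needing prior savings.
# Purely illustrative numbers.

bank = {
    "assets": {"reserves": 0, "loans": 0},
    "liabilities": {"deposits": 0},
}


def make_loan(bank, amount):
    """The loan (an asset) and the deposit (a liability) are
    created together, so the balance sheet stays balanced."""
    bank["assets"]["loans"] += amount
    bank["liabilities"]["deposits"] += amount


make_loan(bank, 1000)

assert bank["assets"]["loans"] == 1000
assert bank["liabilities"]["deposits"] == 1000
# Assets equal liabilities; no pre-existing reserves were required,
# though reserves may be sought afterwards to settle payments or
# meet requirements.
assert sum(bank["assets"].values()) == sum(bank["liabilities"].values())
```

This is the accounting sense in which the loan creates the deposit; the endogenous money claim is not that this process is unlimited, only that reserves are not the binding constraint (see step 6).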
Looking over the debate, I’d score it to Keen – you might expect that, but I genuinely went through periods where I thought he might be wrong. Sadly, Krugman quite clearly moved the goalposts a couple of times, and Rowe didn’t make it exactly clear where he stands, even after I asked him. Neither of them engaged properly with Keen’s or anyone else’s arguments.
I can’t help but feel that the orthodox economists were deliberately obfuscating the debate – making it unclear exactly what they advocate, but simultaneously clinging to a core theory and asserting that its critics are attacking a straw man, ignorant of what is ‘added’ at a higher level. I’m forced to wonder if their theories are simply immune to falsification.
NB: A couple of others provide some constructive comments on Keen’s slack definitions in his most recent paper, particularly with respect to units, that are worth reading in their entirety. Having said that, Keen’s accounting appears to be correct, even if it’s not the clearest.