Archive for August, 2012
Chapter 10 of Steve Keen’s Debunking Economics explores the reduction of macroeconomics to ‘applied microeconomics’: representative agents, the macroeconomic supply/demand apparatus (IS-LM), Say’s Law and more. The chapter is aptly titled ‘Why They Didn’t See It Coming’ – the reason, of course, being that the very premises of their models assumed away major episodes of instability. First up is Say’s Law, summed up in Say’s own words:
Every producer asks for money in exchange for his products, only for the purpose of employing that money again immediately in the purchase of another product.
In other words: money is neutral, and the economy operates as if people are directly bartering goods between one another. Whilst individual markets may not clear, there cannot be a net deficiency of demand in all markets, and employment is largely voluntary, save perhaps that induced by ‘frictions’ as markets adjust. Say’s Law is rarely referenced explicitly by modern neoclassical economics, but it still lives on at the heart of many models – for example, the ‘equilibrium’ in Dynamic Stochastic General Equilibrium (DSGE) models assumes that all markets clear.
Keen notes that Keynes’ own formulation and refutation of Say’s Law were clumsy and turgid. Instead, Keen opts for Marx’s critique, which is far more concise and lucid. Keynes had actually included Marx’s critique in his 1933 draft of The General Theory, but eventually eliminated it, probably for political reasons.
Say’s Law relies on a simple claim: the structure of a market economy is Commodity-Money-Commodity (C-M-C), where people primarily desire commodities and only hold money for want of another commodity. But Marx pointed out that, under capitalism, there is a group of people who quite clearly do not fit this formulation. These people are called capitalists.
Capitalist production does not take the form of C-M-C, but of M-C-M: a capitalist invests money in production in the hope of accumulating more. As Marx put it, “[the capitalist's] aim is not to equalise his supply and demand, but to make the inequality between them as great as possible.” Say’s Law could be said to apply in a productionless economy, but capitalism is characterised by the value of produced goods and services exceeding the value of the inputs. Hence the economy will continually be characterised by an excess demand for money – money sought to satisfy capitalist accumulation – and hence by insufficient demand for commodities.
Keen continues by noting the obvious accounting reality that, in order for the economy to expand, credit must fill this gap, and quotes both Schumpeter and Minsky saying the same. This, along with the logic of capital accumulation, is a major spanner in the works for Say’s Law. However, I will not explore the ‘credit gap’ any further here, as Keen goes into far more detail in later chapters.
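The accounting point can be put in a few lines of toy arithmetic. This is a sketch with made-up numbers, not Keen’s model: capitalists advance M in costs hoping to realise M′ = M(1 + markup), and the gap between the two is demand that the incomes generated by M cannot supply.

```python
# Toy sketch of the 'credit gap' (illustrative numbers, not Keen's):
# capitalists advance M in wages and materials, hoping to sell the
# output for M' = M * (1 + markup).
costs_advanced = 900.0                            # M: wages, materials, etc.
markup = 0.10                                     # desired rate of accumulation
desired_revenue = costs_advanced * (1 + markup)   # M'
# The incomes paid out sum to M, so the extra demand (M' - M) cannot
# come from them; in an expanding economy it must be financed by new credit.
credit_gap = desired_revenue - costs_advanced
```

With these numbers the gap is 90 units of demand per period that only new credit can fill, which is the sense in which credit must grow for the economy to expand.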
IS/LM is a diagram that looks a lot like supply and demand, and proposes that the interest rate and level of output in an economy are jointly determined by two schedules: the equilibria between the different levels of investment and saving (IS), and the equilibria between the money supply and the desire to hold money, or liquidity preference (LM). It was originally proposed as an interpretation of Keynes’ General Theory by John Hicks in his 1937 review of the book, ‘Mr. Keynes and the Classics.’
There are many problems with IS/LM. The model was a complete misinterpretation of Keynes: essentially an attempt to pass off Hicks’ own model – developed independently of Keynes* – as Keynes’ model. Hicks himself pointed out many of the substantive problems in his 1980 ‘explanation’ (Keen suggests it is really an apology).
The major problems are uncertainty and changing expectations. Hicks’ formulation of IS/LM uses a period of about a week, during which it is reasonable to suppose that expectations are constant. But if expectations are constant there is no uncertainty, and therefore no room for liquidity preference, which Keynes justified as “a barometer of the degree of our distrust of our own calculations and conventions concerning the future.” So we must extend the time period.
Keynes’ original intent for the time period of his analysis was the ‘Marshallian’ definition of a short period – about a year. The problem is that at this point equilibrium analysis falls apart. Both curves are partially derived from expectations, and once these start changing, the curves are constantly shifting. Not only this, but since they both depend on expectations, a movement in one will affect the other, and they can no longer move independently.**
Ultimately, IS/LM reduced Keynes to a call for fiscal stimulus in the ‘special case’ that the LM curve was flat or close to flat (demand for money is ‘very high’ or infinite). ‘Later Hicks’ argued that the model should not be regarded as anything more than a “classroom gadget” – we might consider it a heuristic assumption, later to be replaced by something else (it is, in fact, replaced by DSGE past the undergraduate level). Personally I’m not sure that a model with internal inconsistencies should be used as a heuristic (and neither is Keen), and to be honest students have enough trouble understanding IS/LM that I don’t even think it qualifies as a potent tool for communication. In any case, we certainly don’t want to be referring to it in policy discussions.
Macroeconomics after IS/LM
The neoclassical economists didn’t like IS/LM either, but that was because it was not built up from the point of view of optimising microeconomic agents. Keen catalogues the ‘Rational Expectations’ overthrow of IS/LM and the ‘Keynesians,’ making the obvious observation that the idea that people can, on average, predict the future is stupid, and ironically fails completely to take into account Keynes’ concept of uncertainty. He notes that, while the broad thrust of the Lucas Critique is correct, it justifies neither the idea that a policy change will be completely neutralised by changes in behaviour, nor reductionism – microeconomic models are as ‘vulnerable’ to the critique as macroeconomic ones.
Keen then documents the first attempt to model macroeconomics based on the ‘revelations’ of what he calls the ‘rational expectations mafia’: Real Business Cycle models. The author of the growth model on which they were built – Bob Solow – later repudiated them. In his words:
What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages. How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up?
This is obviously ridiculous. There have, of course, been developments since the core RBC model was invented – the New Keynesian DSGE models include elements such as sticky prices, bounded rationality, imperfect (though, as far as I know, not asymmetric) information and oligopolistic market structures. However, all of these preserve the neoclassical core of preference-driven individualism, assume equilibrium, and keep one or two representative agents (any more and many of the core assumptions fall apart, in a clear and ironic example of emergent properties). The models also suppose that the economy has ‘underlying’ tendencies towards stability, masked only by the pesky aforementioned real-world ‘imperfections.’
Despite all these developments, neoclassical economists continued to be led to absurd conclusions: that unemployment during the Great Depression was voluntary (Prescott); that the recession predated the collapse of the housing bubble (Fama); that business cycles are down to the Fed suddenly deviating from its previous mandate for no reason (Taylor, Sumner, Friedman). And, of course, none of them foresaw the crisis, and they can only model it with some serious post-hoc ad-hocery.
It’s worth noting that representative agents, in and of themselves, are not a problem – the problem is that neoclassicism must stick to a small number of them to preserve its assumptions, and for some reason refuses to use class as a distinction. Keen’s own models could be said to use representative agents, but the fact that he doesn’t build them up from microfoundations means that he is far less hamstrung when adding new aspects and dynamics. The idea that the macroeconomy cannot be studied separately from the microeconomy is deeply unscientific – the kind of thing the real sciences learned to abandon long ago. It’s time economists caught up.
*Actually it was developed largely in opposition to Keynes – by Hicks, Dennis Robertson and others.
**Readers might notice a similarity between this ‘small scale versus large scale’ critique of IS/LM and Piero Sraffa’s argument against diminishing marginal returns.
This (semi-)post will meander.
The website Mindful Money followed up on its recent post about ‘New Economics’ sites – in which I was happy to be mentioned – with interviews with some of the bloggers, including Steve Keen and me. Here is the interview ‘home page,’ and here is my interview. A brief excerpt, sure to be hated by economists:
In economics, the elephant in the room is, and always has been, assumptions…many economic models are invalid before we even begin, simply because the assumptions don’t resemble the real world at all.
Vaguely related: I recently claimed on Twitter that Bob Solow is the most quotable economist of all time. My above point about assumptions reminds me of another of his – on why he doesn’t engage neoclassical economists:
Suppose someone sits down where you are sitting right now and announces to me that he is Napoleon Bonaparte. The last thing I want to do with him is to get involved in a technical discussion of cavalry tactics at the Battle of Austerlitz. If I do that, I’m getting tacitly drawn into the game that he is Napoleon Bonaparte.
In a similar vein, Miles Kimball – a guy who is so nice and open-minded I feel like I am kicking a puppy by daring to disagree with him – has a post on economic models that I would surely be excoriated for (‘straw man’) if I were to post it as a parody:
The closest we can come to treating consumption, leisure and the public good in this model as ordinary goods is if we imagine a social planner…in other words, the social planner I am talking about is not a fallible human, but the Invisible Hand.
I’m sure Gavin Kennedy would take issue with the use of the Invisible Hand metaphor, but seriously? There is an obvious chicken-and-egg problem if we invoke the ‘free market’ as a mechanism before trade takes place. And I also object to the idea that there is some sort of magical omnipotent force making everything perfect in a market economy.
In other news, OWS have a video in which Raghuram Rajan repeats various crap and John Cassidy is unable to escape the governments versus markets mentality. Nonetheless, I am glad to see it. Anyway, I could ramble on for a while but I’ll stop here.
Scientists originally believed that there were only 5 senses: touch, taste, smell, sight and hearing. To this day, most people would name these 5 if asked how many and which senses they have. Over time, however, scientists realised that there are far more ways in which we interact with the world around us, such as balance, pain, and numerous internal senses that coordinate our organs. By now it is agreed that there are at least 9, and perhaps as many as 20, senses. I believe that economic factors of production are at the same stage the senses were when there were only thought to be 5 – some are excluded, some are lumped in with others, and generally there is no coherent definition by which to distinguish them.
At its most basic level, neoclassical theory includes only two factors of production: capital and labour. But this formulation is incomplete. First, aggregating the mysterious substance known as capital leads us into all sorts of problems. Second, it outright ignores many things that could be considered factors of production.
Even within what most people would commonly imagine to be ‘capital,’ there is a clear divide: between machinery, which can be combined with labour and materials to produce commodities, and the materials themselves. Consider the production of apple juice. Does it really make sense to define the juicer as the same type of factor of production as the apple? The apple is destroyed in the production process, whilst the juicer is not; the apple is a running cost, whilst the juicer is a fixed cost; the apple is essential to the production of the apple juice in question, whilst the juicer can be replaced by a more labour-intensive or more capital-intensive technique.* This distinction has been highlighted in some areas of economics, and is generally known as ‘working’ versus ‘fixed’ capital.
Another factor which can be separated out is land. As Georgists try to stress, land is fundamentally different from capital, in that it exists independently of people, is impossible to avoid using, and has a fixed supply. Hence, taxing land has completely different effects to taxing capital, because you cannot discourage the production of land. Furthermore, the returns to land are not ‘earned,’ because the landowners don’t actually have to do anything in the production process. Lumping land and capital together creates economic and moral problems.
According to Michael Hudson, the American School of Economics considered public enterprise a separate factor of production, which had the effect of decreasing costs for the other factors of production. This seems to make sense: infrastructure allows for cheaper and quicker transportation of goods and labour; education lowers hiring and training costs for firms; healthcare eliminates the cost of firms providing insurance for employees; and basic security greatly lowers or eliminates the need for firms to defend themselves from marauding thieves. Public enterprise also differs from the other factors of production in that its funding is (obviously) public, rather than private. Overall, it doesn’t seem to make sense to treat it the same as private capital, from which a direct profit is earned.
Finally, many have argued that an important factor of production is knowledge. After all, we cannot just throw land, labour and various types of ‘capital’ at each other; we have to know how to combine them. Different production techniques lead to different levels of efficiency (less time, more stuff). As the paper linked argues, knowledge differs from other factors of production for several reasons: it can be both ‘bad’ and ‘good’; it is only useful in a specific time and place; and it generally comes ‘attached’ to other factors of production, and hence cannot always be separated out. Knowledge could also potentially be disaggregated into technological knowledge, the knowledge of each ‘factor of production’ (how to hold a spade) and knowledge of the production process or economy as a whole.
I’m sure one could argue there are many other factors that are required for, or help determine, the success of a production technique or entire economy: trust (if each factor of production thinks the other will just run away with the produce, growth will suffer), motivation (if nobody wants to produce anything, obviously nothing will be produced), natural resources (which are to commodities as land is to capital – they exist independently of people and are ultimately fixed in supply).
So how do we define a factor of production most coherently? Like the senses, there is some crossover between them and it is something of a judgement call as to when we should draw the line between different types. The criteria for the factors I have named seem to be that they are both necessary for almost any type of production to take place, and that each has something of a unique effect on production. But perhaps there are alternative definitions that could be more illuminating.
Chapter 9 of Steve Keen’s Debunking Economics criticises economists’ reliance on static models in a clearly dynamic system. He first shows both Walrasian equilibrium and Gerard Debreu’s related models to be highly questionable – this is, of course, not difficult, and will be met with ‘we have improved on those!’ However, the real message of the chapter is that static analysis is fairly worthless, and dynamic analysis does not simply ‘fill in the gaps’ between different equilibria.
If you are not familiar with Walras or Debreu, prepare to be amazed at how clearly unlike the real economy these models are.
Walrasian equilibrium supposes that an auctioneer has control over the buying and selling of every commodity, and determines the ‘market clearing’ price – where supply equals demand for every commodity – before any trade takes place. Walras suggested that the auctioneer start with a random guess, which would probably be wrong. They’d then go on to adjust prices until equilibrium was reached, at which point trade would take place.
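The auctioneer’s adjustment process (‘tâtonnement’) can be sketched in a few lines. The excess-demand functions below are hypothetical and linear, chosen so that the process happens to converge; as Keen argues via Blatt below, realistic economies need not behave this nicely.

```python
# Toy Walrasian tâtonnement: the auctioneer raises the price of any
# good in excess demand and lowers the price of any good in excess
# supply. No trade occurs until every market clears.

def excess_demand(p):
    # Hypothetical two-good economy with linear excess demands.
    z1 = 10 - 2 * p[0] + p[1]
    z2 = 8 + p[0] - 2 * p[1]
    return [z1, z2]

def tatonnement(p, step=0.1, tol=1e-6, max_iter=10000):
    for _ in range(max_iter):
        z = excess_demand(p)
        if all(abs(zi) < tol for zi in z):
            return p  # all markets clear: trade may now take place
        # Adjust each price in proportion to its excess demand,
        # never letting a price go negative.
        p = [max(pi + step * zi, 0.0) for pi, zi in zip(p, z)]
    raise RuntimeError("auctioneer never found market-clearing prices")

prices = tatonnement([1.0, 1.0])  # start from a random-ish guess
```

Here the guesses converge to the market-clearing prices (28/3, 26/3); Blatt’s point is that for the matrices describing an actual multi-commodity production economy, this convergence generally fails.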
Keen refrains from commenting substantially on the realism of this approach, instead taking his usual route of accepting economists’ logic, then showing that the model still can’t work. The maths is somewhat over my head, but Keen channels John Blatt, who uses a theorem about the matrices in which a Walrasian auction can be expressed to show that the auctioneer’s prices will not converge towards equilibrium.
Simply put, there are two conditions required for Walras’ auction to ‘work’:
- The system must be able to reproduce itself – e.g. produce enough iron for the required inputs of iron in the next period.
- The prices must be ‘feasible’ – basically, they cannot be negative.
According to Blatt, these two conditions require a matrix and its inverse to have the same properties. In English, this means that something and its opposite must have the same properties, which – outside of trivial special cases – is impossible. Hence, the auction will not converge to equilibrium.
Debreu did not worry about whether an economy would converge to equilibrium, but simply whether or not an equilibrium existed. However, the same conditions outlined above – not to mention the incredibly restrictive assumptions of Debreu’s model, such as virtually identical, prophetic actors – showed that even if equilibrium were achieved, it would be unstable.
Keen concludes that the elusive search for equilibrium is a dead end, and moves on to chaos theory, in which equilibria are unstable and rarely, if ever, reached, but clear patterns emerge:
The two ‘eyes’ here are the equilibria, and as you can see they are quite clearly not worth studying – what is instead needed are differential equations that describe the dynamic evolution of the system. Economists do have a more advanced definition of equilibrium, which specifies the time path of the economy, but it involves restrictive assumptions similar to Debreu’s, and is not on the same level of dynamism as chaos theory. Anyone untainted by neoclassicism will be able to see that the above pattern is similar in type to the cyclical behaviour of a capitalist economy, and that applying chaos theory to economics is surely an idea with potential.
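The classic example of such a system – and the kind of two-eyed picture Keen shows – is the Lorenz attractor. The sketch below (crude forward-Euler integration, parameters the standard ones) illustrates the point: the equilibria exist, but the trajectory circles them forever without ever settling down, so studying the equilibria alone tells you almost nothing.

```python
# The Lorenz equations: a simple deterministic system whose attractor
# has two 'eyes'. Both eyes are unstable equilibria; the state orbits
# them indefinitely, switching between the two wings.

def lorenz_step(x, y, z, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return x + dt * dx, y + dt * dy, z + dt * dz

state = (1.0, 1.0, 1.0)
xs = []
for _ in range(40000):        # ~200 time units
    state = lorenz_step(*state)
    xs.append(state[0])
# The trajectory stays bounded yet never converges: xs keeps crossing
# zero as the state hops between the two 'eyes'.
```

Plotting `xs` against the other coordinates reproduces the familiar butterfly; a business-cycle analogy would read the endless wing-hopping as recurrent booms and slumps with no tendency to settle at ‘the’ equilibrium.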
Keen ends the chapter with a couple of examples of attempted dynamic (though not chaotic) analysis: the Goodwin model, based on Marx’s analysis of the relationship between wages, investment and capital, and A. W. Phillips’ ill-fated attempts to bring dynamic modelling into economics. Contrary to popular belief, Phillips was well aware of expectations and how they change, and incorporated this into his model. Both of these models produce realistic business cycles, as does Keen’s similar model (which we will come to in a later post).
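The Goodwin model fits in a few lines: the wage share and the employment rate chase each other like predator and prey. The parameter values below are illustrative, not Goodwin’s or Keen’s.

```python
# Goodwin growth cycle (illustrative parameters): u is the wage share,
# v the employment rate. High employment bids wages up (via a linear
# Phillips curve); a high wage share squeezes profits, slowing
# accumulation and pulling employment back down - and around it goes.

def goodwin_step(u, v, dt=0.01, sigma=3.0, alpha=0.02, beta=0.01,
                 rho=0.6, gamma=0.5):
    du = u * (rho * v - gamma - alpha)           # wage-share dynamics
    dv = v * ((1 - u) / sigma - alpha - beta)    # accumulation out of profits
    return u + dt * du, v + dt * dv

# Equilibrium here is v* = (gamma + alpha) / rho ~ 0.867 and
# u* = 1 - sigma * (alpha + beta) = 0.91; start away from it.
u, v = 0.80, 0.95
path_v = []
for _ in range(10000):        # ~100 time units: several full cycles
    u, v = goodwin_step(u, v)
    path_v.append(v)
# path_v rises and falls repeatedly around v* - an endogenous business
# cycle with no external 'shocks' required.
```

Note the contrast with RBC-style models: the cycle here is generated by the internal dynamics, not by stochastic shocks bolted onto an equilibrium growth path.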
But economists reject this type of analysis because…engineers don’t know what they are doing? The empirically successful microfoundations project? Assumptions don’t matter but do when they aren’t ours? I honestly just don’t understand.
Chapter 8 of Steve Keen’s Debunking Economics channels a paper (it’s short, and worth reading) by the philosopher Alan Musgrave, which distinguishes between three types of assumptions: negligibility, domain and heuristic.
According to Friedman’s 1953 essay, theories are significant when they “explain much by little,” and to this end “will be found to have assumptions that are wildly unrealistic…in general, the more significant the theory, the more unrealistic the assumptions.” By distinguishing between the different types of assumption, Musgrave shows how Friedman misunderstood the scientific method, and that his argument is only partially true for one type: negligibility assumptions, which we will look at first.
Negligibility assumptions simply eliminate a specific aspect of a system – friction, for example – when it is not significant enough to have a discernible impact. Friedman is correct to argue that these assumptions should be judged by their empirical corroboration, but he is wrong to say that they are necessarily ‘unrealistic’ – if air resistance is negligible, then it is in fact realistic to assume a vacuum. I don’t regard many economic assumptions as fitting into this category, though many of the examples Friedman claims a theory would need in order to be ‘truly’ realistic, such as eye colour, fit the bill.
If a theory fails to fit the evidence, it may be because the phenomenon under investigation does require that air resistance be taken into account. The previous theory then becomes a ‘domain’ theory, whose conclusions apply only as long as the assumption of a vacuum holds. Contrary to Friedman, the aim of ‘domain’ assumptions is to be realistic and wide-ranging, so that the theory may be useful in as many situations as possible. Many of the assumptions in economics are incredibly restrictive in this sense, such as assuming equilibrium, the neutrality of money or ergodicity.
A heuristic assumption is a counterfactual proposition about the nature of a system, used to investigate it in the hope of moving on to something better. Heuristic assumptions can also be retained to guide students through the process of learning about the system. If a domain assumption is never true, it may transform into a heuristic assumption, so long as there is an eye to making the theory more realistic at a later stage. The way Piero Sraffa builds up his theory of production is a good demonstration of this approach: starting with a few firms, no profit and no labour, and ending up with multiple firms with different types of capital and labour. In this sense many economic models are half-baked, in that they retain assumptions that are unrealistic for phenomena that are not ‘negligible,’ even at a high level.
Musgrave colourfully describes the evolution of scientific assumptions:
what in youth was a bold and adventurous negligibility assumption, may be reduced in middle-age to a sedate domain assumption, and decline in old-age into a mere heuristic assumption.
Musgrave is partially wrong in this formulation, in my opinion – assumptions can start out as heuristics and become domain assumptions later on, such as the perfect gas or optimising bacteria. But there are always strict criteria for when a theory built on such an assumption simply becomes useless, and there is always a view to discarding the heuristic when something better comes along. Economic theory tends to weave between the different types of assumption without realising it, or without drawing attention to the fact.
Keen ironically notes that assumptions obviously do matter to economists – they just have to be Lucas Approved™. The reaction of many neoclassical journals to papers such as his, which do not toe the party line on assumptions, demonstrates his point effectively. He also points out that, in fairness to neoclassical economists, the hard sciences are not necessarily the humble havens they are made out to be, and to this day physicists are resistant to questioning accepted theories. However, it is true that economists seem more vehement in the face of contradictory evidence than anyone else.
I see this as case closed on Friedman’s methodology. Economists need to draw attention to exactly which type of assumption they are making in order for the science to progress, or else risk having no clear parameters for where a theory should be headed, and under which conditions it can be considered valid.
I don’t really understand Nick Rowe’s suggestion that heterodox critics read a first-year textbook, considering economists generally like to respond to criticisms by asserting that economics goes well beyond what is taught early on. A commenter on his post agrees. In any case, let’s take him up on his offer.
Here are some examples of how undergraduate economics is taught, from two textbooks: this one and this one. The worrying thing is that these textbooks are more reasonable than average, and give the time of day to a lot of substantive criticisms (although they do ignore them in the main analysis).
Economics textbook in ‘reality doesn’t matter’ shocker
…[the student] rightly assumes that few firms can have any detailed knowledge of marginal revenue or marginal cost. However, it should be remembered that marginal analysis does not pretend to describe how firms maximise profits or revenue. It simply tells us what the output and price must be if they do succeed in maximising these items, whether by luck or judgement.
Where to start?
(1) Why not get access to data and just tell students how firms actually price?
(2) A big problem with this is that it implicitly assumes a static state in which firms push to capacity to maximise short-term profit. But any firm that did this would be vulnerable to changing conditions – in reality, firms generally hold a degree of excess capacity so they can change their level of output in response to market conditions.
(3) Firms do not generally experience rising marginal costs! This means using MC as a constraint is completely bunk. Firms are constrained, primarily, by financing and marketing considerations.
The lesson here is that, even if you think your logic for how a firm should maximise profit is sound, the evidence has the final say – it might take you in a direction you couldn’t have predicted within the confines of your own theory.
Workers are selfish and lazy, no evidence required
In many jobs it is difficult to monitor the effort individuals put into their work. Workers may thus get away with shirking or careless behaviour…the business could attempt to reduce shirking by imposing a series of sanctions, the most serious of which would be dismissal. The greater the wage rate currently received the greater will be the cost to the individual of dismissal…the business will benefit from the additional output.
What? Oh, no – there’s no evidence to support this. Aside from the fact that workers might actually want to work a decent amount, modern workplaces are tailored such that shirking isn’t really possible (Harry Braverman presented evidence for this; I cannot find it online).
Yes, neoclassical economics does teach exogenous money. Why do you ask?
Contrary to what many keep saying, economics textbooks do teach the money multiplier:
Models of the money supply multiplier link the money supply to the monetary base in a relationship of the following form:
M = mB
M = the money supply;
m = the money supply multiplier;
B = the monetary base.
In models such as this, m tells us how many times the money supply will rise following an increase in the monetary base.
They go on to acknowledge that it is more complicated than a simple, stable relationship. But the basic message is that, depending on m, B will translate into a certain level of M – banks still need deposits before they can lend. Again, this is an inaccurate description of a credit economy, where the causation runs in the other direction.
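For concreteness, here is the textbook story in code – illustrative numbers, not the textbook’s – both as the one-line formula and as the round-by-round lending process that is supposed to generate it:

```python
# Textbook money multiplier: with reserve ratio r, m = 1 / r,
# so a monetary base B 'multiplies out' into M = B / r.
reserve_ratio = 0.1
base = 100.0
money_supply = base / reserve_ratio       # M = m * B

# The same result via the round-by-round story: banks lend out
# (1 - r) of each new deposit, which is then redeposited elsewhere.
total_deposits, new_deposit = 0.0, base
for _ in range(1000):
    total_deposits += new_deposit
    new_deposit *= 1 - reserve_ratio
# total_deposits converges to B / r: a geometric series.
```

Note that in both versions the base B is the exogenous starting point and M the endogenous result; the endogenous-money argument is precisely that this arrow is backwards in a credit economy.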
Economists seem to have a hard time grasping that saying ‘m isn’t stable’ isn’t the same as endogenous money theory.
Inconsistency in labour supply curves
This textbook (and most others) asserts the standard labour supply story: workers trade off work and leisure up to the point where more leisure is better than more money. The two conflicting income and substitution effects mean that higher wages can either decrease or increase the hours worked, depending on how much money the worker wants. This can create a ‘backward-bending labour supply curve,’ where higher wages increase hours worked up to a point, then start to reduce them (yes, economists do enjoy putting the dependent variable on the x-axis, god knows why):
The problem is that the textbook says the market supply of labour “will typically be upward sloping” because “the higher the wage rate offered…the more people will want to do [the] job.” But if you add up many individual backward-bending supply curves, you will not get an upward-sloping line.
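This is easy to check numerically. The individual supply curves below are hypothetical (hours rise with the wage up to a personal peak, then bend backwards), but the point survives any similar specification: summing them does not yield an everywhere-upward-sloping aggregate.

```python
import math

# Hypothetical backward-bending individual supply curve: hours rise
# with the wage up to w = peak, then fall as the income effect wins.
def hours_supplied(w, peak):
    return w * math.exp(-w / peak)

# 21 workers whose curves peak at wages between 10 and 20.
peaks = [10 + 0.5 * i for i in range(21)]
wages = list(range(1, 51))
aggregate = [sum(hours_supplied(w, p) for p in peaks) for w in wages]
# The aggregate curve rises, peaks somewhere in the middle of the wage
# range, and then falls - it is not upward sloping throughout.
```

So the textbook’s upward-sloping market curve cannot be obtained by aggregating the individual curves it teaches two chapters earlier; it needs the separate (and contradictory) fixed-hours, job-or-no-job story.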
We’ve taken a bit of a leap here: before, we were talking about hours; now we are just talking about whether to take a job at all. The jump from hours to jobs requires that hours be assumed fixed – that’s fine, and actually a realistic assumption (the laws of probability suggest they had to hit one eventually), but it contradicts the earlier hypothesis that workers smoothly trade off work and leisure.
As far as I’m concerned textbooks are littered with these problems, and it seems to get worse, rather than better, as the theory gets more advanced. So, economists: your turn. Read this. Varoufakis and his coauthors are sophisticated critics of neoclassicism so you won’t find the traits you so loathe among heterodox economists. It will also give you a good idea of where your critics are coming from, when to be honest it’s increasingly clear that you don’t really know much about heterodox economics. In any case, you seem confident your ideas are strong, and if so they will be strengthened by criticism. If not, then they will need rethinking.