This is my final post on Steve Keen’s Debunking Economics, just to close the series and give some thoughts on the book as a whole.
The main aim of blogging Keen’s book was really to provide a platform for people to discuss the numerous inconsistencies (whether purported or real) that have arisen in neoclassical economics over time. Generally, a Google search for Keen will, predictably, turn up outright (often vitriolic) dismissals from economists, coupled with some cheerleading from those on his side. Rarely, in my experience, will you find much substantive discussion of his ideas. Hopefully I’ve communicated these ideas in a (relatively) digestible way, and they can now be discussed openly.
I would encourage people who have enjoyed the series to read the actual book. My presentation of each chapter was necessarily shortened: I skipped discussions of the history of thought and numerical examples, and left out certain prongs of criticism entirely (for example, Keen’s firm simulation model and his discussion of Von Neumann on utility). For this reason, anyone truly interested in Keen’s criticisms should read the book first-hand. However, I would not recommend it alone for anybody not already versed in some basic economics and mathematics. Keen does a good job of explaining the concepts he is going to critique, but the fact is that both explaining and ‘Debunking’ Economics perfectly in a single book is simply not possible.
There is a largely superficial criticism of Keen that you will see floating around (so superficial that I am loath to address it formally): that he claims to be some sort of mathematical genius blessed with insights that economists have missed for 100+ years. Of course, this is nonsense – there is not a single area in the book where Keen claims outright originality. Every critique he channels was either first noticed by, or fully elucidated by, another economist or academic (often neoclassical economists themselves). Keen’s book is a culmination of a century of criticisms, all of which have been swept under the rug or dismissed, often without due justification.
Keen’s approach of critiquing each area on the grounds of internal inconsistency certainly has both advantages and drawbacks. The main advantage is that the critiques are not interdependent, so even if one fails to hold then it can still be shown that there are significant flaws in neoclassicism. The main disadvantage with this approach is that it requires Keen to assume concepts he criticises elsewhere are actually sound. Such an approach is almost bound – by probability – to be hit and miss. Can we really hope to show that absolutely every facet of neoclassical economics is internally inconsistent? In my opinion, Keen’s quest to dismantle neoclassicism from every angle may at times lead him astray from the overall goal. (The approach also necessitates some repetition, and I felt that some of the chapters could have been better arranged – for example, chapters 11 and 15 could surely have been merged).
Nevertheless, the match must be scored to Keen overall. His approach is helpful insofar as it sheds some light into what can often be a ‘black box’ of assumptions and mechanics that comprise many neoclassical models. What is really needed now is for someone to build a ‘ground up’ critique that combines discussion of conceptual errors, contradictions, and empirical irrelevance. Keen does not really talk about conceptual errors, and only really discusses empirical evidence in his section on alternatives. For me, a sustained critique would reject key areas of neoclassicism on various grounds, building up a positive heterodox view based on alternatives along the way. But I understand that was not really Keen’s intent.
I’m sure I will be drawn back to commenting on Keen’s work in the future, hopefully because economists continue to pay attention to it, whether civil or not. But this post concludes my comments on Debunking Economics.
Update: some commenters seem to have interpreted this post as me ending the blog. That is not so! The blog existed for 6 months prior to this series and will continue after it.
This is the second part of my response to criticisms of Keen’s Debunking Economics. In my previous post* I covered some of the fundamental objections Keen had to neoclassical theory. Here, I will cover Keen’s exploration of alternatives: first, a brief note on dynamics and chaos theory; then a discussion of Keen’s own models; finally, his dismissal of the Marxist Labour Theory of Value (LTV).
Dynamics and Equilibrium
Many economists have argued that Keen’s contention that economists do not study dynamics is false. I agree. Keen does not really address the DSGE conception of equilibrium, which is quite different from the typical conception of a steady state. An equilibrium in an economic model occurs when all agents have specific preferences, endowments etc. and take the course of action which suits them best based on this. This can be subject to incomplete information, risk aversion or various other ‘frictions.’ These agents intermittently interact in market exchanges, during which all markets clear. Basically, ‘solving for equilibrium’ means you specify the actions and characteristics of economic agents, then see what happens when markets clear. It’s entirely possible that the subsequent model could exhibit chaotic behaviour.**
Now, there are obviously many problems here. The fact is that the overwhelming majority of people who learn economics will not touch this. They will instead be faced with static-style equilibrium models, which they have been told are unrealistic but ‘elucidate certain principles.’ This is nonsense – they elucidate nothing, and simply need to be thrown out. Nevertheless, many policymakers, regulators and business economists are working under this framework. Furthermore, even those economists who have gone beyond this level seem to have the concepts deeply ingrained into their minds, and regard them as useful.
However, even the more advanced ‘dynamic’ equilibrium clearly has problems. First, the presence of irreducible uncertainty – which, as far as I can see, is a concept entirely misused by economists – means that it is virtually certain not all expectations will be fulfilled, while equilibrium assumes they will be. Second, ‘fulfilled expectations’ is far stronger than economists seem to think – for example, it eliminates the possibility of default! Third, the assumption that all markets clear is obviously false, otherwise supermarkets wouldn’t throw out old food. Anyway, I digress: Keen could easily address all of these criticisms, but for some reason he doesn’t. This is indeed a shortcoming of his book.
First, a brief note on Keen’s model of firm behaviour: it seems to make the error of maximising the growth rate of profits, rather than profits themselves. I am not sure if this has been fixed. Nevertheless, I regard it as subsidiary to Keen’s main criticisms. His most important model is the Minsky Model of banking and the macroeconomy.
Keen recently had a debate over his Minsky Model with the Cambridge economist Pontus Rendahl. Andrew Lainton has a post on this, along with a contentious discussion with Rendahl, over on his blog. In my opinion, Rendahl – though overly dismissive in tone, and not causing as many problems for Keen as he seemed to think – highlighted a number of issues with Keen’s model in its current form:
(1) Say’s Law holds. In Keen’s model, income is simply a function of the capital stock, and there is no role for demand.
(2) The model is generally set in continuous time and built from ODEs, yet one equation uses discrete time intervals. Such equations cannot be solved in the same way, so Keen’s methodology is inconsistent.
(3) There is, as of yet, no role for expectations in Keen’s model.
(4) Rendahl argues that DSGE models are also Stock Flow Consistent (SFC). I think he is correct – see, for example, his own paper, which has agents accumulating stocks of money from previous periods. The major differences between SFC and DSGE appear to be: a lack of micro foundations; continuous functions; use of classes; market clearing; fulfilled expectations; and, of course, with Keen’s, the role of banks and private debt.
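Point (2) can be illustrated with a toy law of motion (hypothetical, and not one of Keen’s actual equations): treating the same growth rule as a continuous-time ODE or as a discrete-interval update gives different trajectories, so the two conventions cannot be freely mixed within one system.

```python
import math

# Toy growth rule (illustrative only, not from Keen's Minsky model):
# continuous version: dx/dt = r * x   =>  x(T) = x0 * exp(r * T)
# discrete version:   x[t+1] = (1 + r) * x[t]  =>  x[T] = x0 * (1 + r)**T
r, x0, T = 0.1, 100.0, 10

x_continuous = x0 * math.exp(r * T)  # exact ODE solution
x_discrete = x0 * (1 + r) ** T       # discrete-interval solution

print(round(x_continuous, 2))  # 271.83
print(round(x_discrete, 2))    # 259.37
# The same rule produces different paths under the two time conventions,
# so a discrete-interval equation cannot simply be dropped into an ODE system.
```

The gap grows with the horizon and the growth rate, which is why mixing the two conventions within one model is a genuine methodological inconsistency rather than a rounding issue.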
In terms of assumptions, I’d say Keen’s model is in the ‘heuristic’ stage – it’s not completely right and needs development. The criticisms are essentially things that have not yet been added to the model, rather than conceptual or logical problems (save the inconsistent equation). This means they can be added as it develops. However, if the model makes good predictions, it may prove to be useful, even though that should never serve as a barrier to making it more realistic and comprehensive.
Labour Theory of Value
If neoclassical economists want a lesson in how to respond to a critique you strongly disagree with without being vitriolic and dismissive, then they need look no further than the marxist responses to Keen’s critique of the LTV. This is all the more ironic given said economists’ willingness to dismiss marxists as illogical and dogmatic.
Keen’s critique is threefold, so I will discuss it briefly, followed by the marxist responses.
The first critique is Bose’s commodity residue. The idea is that no matter how far you go back in time, disaggregating a commodity into what was required to produce it, there will always be a commodity residue left over. Hence, no commodity can be reduced to merely labour-power. The problem here is the projection of capitalism into all of history. For Marx, a commodity only resulted from capitalist production. However, if you go back in time you will find non-capitalist production, and eventually you will be able to reduce everything into land/natural resources and labour, which Marx never defined as commodities. Having said this, one question remains: can the natural resources or land not be a source of surplus value? Could this surplus value not have been transferred into capitalist commodities?
Second is Ian Steedman’s Sraffian interpretation of Marx. Simply put, it seems Steedman had his interpretation wrong – Marx’s is not a physical, equilibrium system based on determining factor prices. This is something that actually struck me on the first read of Keen’s LTV chapter: Steedman simply converts Marx into Sraffian form without much justification. If Marx did not intend this to be the case, the criticism is defunct from the outset: Steedman’s model is simply a misinterpretation of Marx, and it is not even necessary to go into the maths. There is, of course, a possibility that this is an overly superficial interpretation and I am mistaken.
The third criticism is that Marx’s treatment of use-value and exchange-value is inconsistent: properly applied, it implies that a commodity’s use-value can exceed its exchange value, and hence be a source of surplus value. Now, I remain unsure of this area so I might be wrong in my exposition, but here is my attempt to explain the Marxist response: (warning: the following paragraph will contain a vast overuse of the word ‘value’ in what is already a necessarily convoluted explanation).
Marxists contend that Keen’s is a misinterpretation of use-value, which is simply a binary concept and not quantifiable. Something may have any number of uses which give it a use-value, which is a necessary condition for it to have an exchange-value. However, the exchange-value cannot ‘exceed’ the use-value, because the use-value cannot be measured. It is in this sense that labour is unique in Marx’s conception of capitalism: its specific use-value is the production of surplus for capitalists. It is the only ‘factor of production’ that can do this – after all, capital ultimately reduces to past labour value. If production could take place without labour, prices would fall to zero and, while Marx would be refuted, nobody would care because the problem of economic scarcity would vanish. Hence, surplus production and profits depend on labour producing more than it is rewarded.
I remain neither convinced of the LTV, nor of its critics.*** For me, most discussion of the LTV appears to rest on the LTV as a premise. The debate is split into people who accept the LTV and people who not only reject it, but see no need for it. For this reason, critics seem to misrepresent and misinterpret it continually – a common theme is to try and abstract from historical circumstance, when it’s clear Marx emphasised that his analysis only applied under capitalism, which he saw as a particular social relation. For me, the main issue remains the same as it is for other theories: what are the falsification criteria for the LTV?
Overall, a couple of points stand out for post-Keynesians regarding their own theories, both of value and of economic systems. The first is that DSGE models are probably not that different from some heterodox models, and identifying the actual differences is crucial to opening up a dialogue between mainstream and heterodox economists.
The second is that I would caution left-leaning economists not to be too hasty to dismiss Marxism as dogmatic (in my experience marxists are anything but), or avoid it simply out of fear of being dismissed themselves. In my opinion, the LTV – while not entirely convincing – is a cut above the neoclassical ‘utility’ conception of value, and I’d sooner be equipped with Marxist explanations of a crisis when trying to understand capitalism. This isn’t to say post-Keynesians haven’t thought about Marx; it’s more that the issue is often approached with a degree of bias. At the very least, the distinction between use-value and exchange-value is something that befits post-Keynesian analysis well.
So, as far as theory goes, this is the last post on Keen’s book. I will, however, do some closing notes from a more general perspective. As I said before, if there are any other criticisms of Keen that I have not covered, feel free to discuss them in the comments.
*It is worth noting that in my previous post I was somewhat – though not totally – off the mark with my discussion of Keen on demand curves. The Gorman conditions for the existence of a representative agent do indeed have many similarities to the SMD theorem and conceptually they are dealing with the same issue: aggregation of preferences. Nevertheless, Keen weaves between the two, when it would have been more accurate to note economists have used two (main) different methods to get around the problem, and critiqued them separately. Similarly, though Keen’s quote from MWG was incorrect, it is true that economists such as Samuelson have used the assumption of a dictator to aggregate preferences. However, the specific one Keen presented was not right.
**However, that does not make it the same as chaos theory.
***For me, claims that worker ownership of production would be desirable don’t really rest on the LTV; instead, the simple point is that workers could employ capital themselves.
Naturally, mainstream economists have been critical of Steve Keen’s Debunking Economics. I will do a brief series within a series to try and respond to some of these criticisms. In this part, I will respond to some of the main critiques of neoclassical theory that have generated controversy: demand curves, supply curves and the Cambridge Capital Controversies. In the next post, I will respond to criticisms of Keen’s own models and his take on the LTV, as well as anything else that has attracted criticism.
Note that this post will assume prior knowledge of Keen’s arguments, so if you haven’t yet read my summaries above (or better still, Keen’s book), then do it now.
Demand Curves
It seems there are some problems in this chapter. Keen mixes up some concepts and misquotes Mas-Colell. Having said that, he is broadly right. This is frustrating for someone on his ‘side,’ because it means mainstream economists can dismiss him when they shouldn’t.
Keen presents a quote from Mas-Colell where he assumes a benevolent dictator redistributes income prior to trade, and asserts that this assumption serves to ensure market demand curves have the same properties as individual ones. In fact, Mas-Colell is using this assumption to ensure that a welfare function, not a price relationship, will be satisfied. It remains true that a PhD textbook still assumes a benevolent dictator redistributes resources prior to trade, and subsequent economists have also used this assumption, which is not a great indicator of the state of economics. However, it was not an assumption used to overcome the Sonnenschein-Mantel-Debreu conditions.
More importantly (wonkish paragraph), it seems Keen lost some nuance in the translation of his critique to layman’s terms. He spends a lot of time talking about the Gorman polar form. This is about the existence of a representative consumer for a set of indirect utility functions (‘indirect’ because it calculates utility without using the quantities of goods consumed), but Keen makes out it is about the aggregation of preferences required for demand curves. Gorman is in many ways similar to, but not relevant to, the discussion of the aggregation of demand curves. Keen also argues that consumers having identical preferences is the same as them being one consumer, but this needn’t be the case: just because you and I have the same preferences, doesn’t make us the same person.
Despite this, the competing wealth and substitution effects do create the conditions described by Keen. However, they only apply under general equilibrium – under which wealth effects are present – and not partial equilibrium – under which they are assumed away. Keen does not distinguish between the two.
In summary, Keen is correct that neoclassical economists could not rigorously ‘prove’ the existence of downward sloping demand curves. Keen himself says that it is reasonable to assume that demand will go down as price does, and classical economists were also content with this as an observed empirical reality. Neoclassical economists themselves ended up having to defer to empirical reality when faced with the SMD conundrum, and thus they had gained no insight beyond the classical economists, except to prove that their preferred technique – reductionism – does not work. For this reason, I interpret the SMD conditions primarily as a demonstration of the limits of reductionism (though some fellow heterodox economists might disagree).
Supply Curves
The proposition here is pretty simple: a participant in perfect competition will have a tiny effect on price. This is small enough to ignore at the level of the individual firm, which is neoclassical economists’ main defence. However, they ignore that, as Keen says, the difference is both “subtle and the size of an elephant.” Once you aggregate a group of infinitesimally small firms making incredibly small deviations from maximising profits, you get a result that is far away from the one given by the neoclassical formula. The result? We must know the nature of the MC, MR and demand curves to know both price and quantity, just as with a monopoly. The neoclassical theory – at this level – has no reason to prefer perfect competition to a monopoly, and a supply curve cannot be derived. From what I’ve seen, the critics ignore the effect of adding up tiny mistakes, instead focusing on how tiny they are at the individual level.
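As an illustration of the aggregation point – my own sketch using the standard Cournot result, not Keen’s own derivation – consider linear demand with n identical firms. Each firm’s individual price impact shrinks like 1/n, yet the aggregate markup over marginal cost is nonzero for any finite n:

```python
# Linear market: P = a - b*Q, constant marginal cost c (illustrative numbers).
a, b, c = 100.0, 1.0, 20.0

def cournot_price(n):
    # Symmetric Cournot equilibrium with n firms:
    # each produces q = (a - c) / (b * (n + 1)), so P = (a + n*c) / (n + 1)
    q = (a - c) / (b * (n + 1))
    return a - b * n * q

competitive_price = c  # the price = marginal cost benchmark

for n in (1, 10, 100, 1000):
    markup = cournot_price(n) - competitive_price
    print(f"n = {n}: markup over marginal cost = {markup:.3f}")
# The markup is (a - c)/(n + 1): each individual firm's deviation is
# negligible for large n, but the sum of all the deviations keeps price
# strictly above marginal cost for any finite number of firms.
```

This is not Keen’s exact claim (he argues the aggregate outcome lands much further from the competitive result), but it shows the general principle that ‘individually negligible’ does not imply ‘negligible in aggregate.’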
Economists have some other defences, but I interpret them as own goals. For example, there is the argument that, under perfect competition, firms are price takers by assumption. They cannot have any effect on price, by assumption. But this basically amounts to assuming the price is set by an exogenous central authority, which is odd for a model of perfect competition.
Another argument is that setting MC=MR itself is an assumption. This is a strange path to take for a theory that prides itself on internal consistency and profit maximisation. It acknowledges that MC=MR will not quite maximise profits, so amounts to the assumption that firms are not profit maximisers. There is also the similar argument that firms don’t take Keen’s problems into consideration in real life, so they don’t matter. This is a huge own goal, given most textbooks argue that it doesn’t matter what firms do in real life. I’m quite happy to acknowledge it does matter how they actually price – but that would involve abandoning the marginalist theory of the firm and using cost-plus pricing.
So, now that we have all finished discussing how many angels can dance on a pinhead (turns out it was slightly fewer than economists thought), let’s just start using more realistic theories of the firm and forget the mess that is marginalism.
Cambridge Capital Controversies
There are swathes of literature on this and I cannot hope to explore them all. The main thing I have noticed, and want to discuss, is that economists only seem to focus on capital reswitching when discussing this, and defer to empirical evidence to suggest it is negligible. I have a few problems with this:
(1) Empirical evidence is competing and some evidence suggests reswitching is more common than economists would like to think. Furthermore, it is incredibly hard to observe and therefore cannot be dismissed so easily.
(2) Most importantly, the Capital Controversies were not just, or primarily, about reswitching. Sraffa showed a number of things: demand and supply are not an adequate explanation for static resource allocation; the distribution between wages, profits and other returns must be known before prices can be calculated; factors of production cannot be said to be rewarded according to ‘marginal product’. For me these are more important, and are applicable to many models used today, such as Cobb-Douglas and other production functions, and the Solow Growth model.
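Reswitching itself is easy to exhibit numerically. Here is a sketch of Samuelson’s classic two-technique example from the reswitching literature (not from Keen’s book): technique A applies 7 units of labour two periods before output; technique B applies 2 units three periods before and 6 units one period before. The cheaper technique switches twice as the interest rate rises, so techniques cannot be ranked by ‘capital intensity’ independently of distribution.

```python
# Samuelson's reswitching example: compare the compounded labour costs of
# two techniques producing the same output, at interest rate r.

def cost_A(r):
    # Technique A: 7 units of labour, applied 2 periods before output
    return 7 * (1 + r) ** 2

def cost_B(r):
    # Technique B: 2 units 3 periods before, 6 units 1 period before
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

for r in (0.0, 0.75, 1.5):
    cheaper = "A" if cost_A(r) < cost_B(r) else "B"
    print(f"r = {r:.2f}: technique {cheaper} is cheaper")
# Technique A is cheaper at low r, B between the switch points
# (r = 0.5 and r = 1.0), and A again at high r: the same technique
# 'returns' at a higher interest rate, which a simple story of
# substituting capital for labour as r falls cannot accommodate.
```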
With all three of the examples I have discussed, economists have tried to defer to empirical evidence to dismiss the problems with their causal mechanics. But generally economists do not regard empirical evidence about causal mechanics as important (the primary example being the theory of the firm), instead insisting on rigorous logical consistency. Surely, in order to be completely logically consistent, economists should at least be willing to experiment with the potential effects of SMD and reswitching in general equilibrium models and see what happens? Robert Vienneau has various discussions of this.
The common thread between these is that economists seem incredibly adept at assuming their conclusions. Of course, you can get around any critique with an appropriate assumption, but as I’ve discussed, theories are only as good as their assumptions, and assumptions should not be used simply to protect core beliefs and come to palatable conclusions. Having said that, Keen’s book isn’t perfect (which is to be expected if you try and take every aspect of economics on in one book), and there are worthwhile criticisms out there. Nevertheless, Keen’s critique as a whole remains intact, and leaves very little of what is taught on economics courses standing.
P.S. Feel free to use the comments space to discuss any critiques of areas I have not covered/said I will cover.
The final chapter of Steve Keen’s Debunking Economics is a brief overview of the major competing alternative economic schools of thought. The question posed is whether these schools present a viable alternative to neoclassicism and marxism, both of which Keen has already dismissed. He now goes on to evaluate Austrian, Sraffian, post-Keynesian and evolutionary economics, as well as Econophysics. I will look at Keen’s evaluation of these schools of thought and discuss his conclusions.
The Austrian School
Keen’s view on the Austrian school is similar to that of myself and other post-Keynesians: it shares many characteristics with neoclassicism. These include, but are not limited to: an exogenous money supply (excepting Lachmann and Schumpeter), Say’s Law, a variant of marginal productivity theory, reductionism and a government-versus-markets perspective. Hence, many of his earlier critiques – Sraffa’s work on capital, the excessive focus on microeconomics, and post-Keynesian views on banking and the money supply – could equally be applied to Austrians. Keen himself thinks that Austrians deal with uncertainty, but (again, excepting Schumpeter and Lachmann) I’m not even sure this is true – for example, Hayek completely misused the term. Hence, criticisms of neoclassical models based on irreducible uncertainty may also apply to some Austrian arguments.
While Keen applauds Austrians’ analysis of capitalism as more dynamic than that of neoclassical economics, he notes that they do seem to retain the belief that capitalism has a ‘natural’ state that should not be ‘interfered with,’ and they actually seem to take it much further than their neoclassical counterparts. This is particularly apparent – also something that I have noted – with Hayek’s ‘spontaneous order.’ Though it is an interesting concept, it has been misused as an ideological tool against government, without considering the ‘spontaneous order’ that may evolve inside government, or the possibility the dichotomy between governments and markets may be a false one.
All in all, it’s hard to deny Austrians are part of the marginalist tradition, something Mises explicitly said. Hence, I don’t consider the school a truly ‘alternative’ way of thinking about economics, even if it has something to offer.*
Sraffian Economics
Keen praises Sraffa’s work as “the most detailed and careful analysis of the mechanics of production in the history of economics,” and notes the importance of the interesting conclusions that it brought to light. Nevertheless, Sraffa’s analysis is a static one that seems to be dependent on the existence of a long run equilibrium (here Keen quotes the Sraffian Ian Steedman as evidence). Due to the lack of dynamism in Sraffian models, Keen’s previous comments about dynamics and equilibria could be applied to Sraffa. Keen ends by noting the subtitle of Sraffa’s magnum opus: “Prelude to a critique of economic theory.” He suggests Sraffa’s main aim was to provide a basis with which to critique other theories, rather than present a positive alternative. I’ve no doubt Sraffian readers will disagree.
Econophysics
This school of thought is characterised by the application of modern chaotic modeling techniques to economics. Hence, the models produced are far better suited to generating the kind of instability we observe in capitalist economies than are those used in neoclassical economics. Keen comments that the school isn’t really a direct critique or challenge of neoclassical economics, instead dismissing it outright and presenting an alternative.
Bearing the lack of direct engagement with economists in mind, it’s not surprising that the physical scientists suffer somewhat from a curse of being a mirror image of economists. Keen says that they have been rediscovering old insights such as IS-LM, then using them with other, incompatible models such as rational expectations. They also seem to have an ‘everything looks like a nail when you have a hammer’ problem, and are applying inappropriate laws, such as conservation laws to the distribution of wealth, or electromagnetism to immigration.
Perhaps econophysicists should be more willing to read through the history of thought – as I noted in my post on mathematics, this type of imperialism/arrogance in physicists is no prettier than in economists (commenter Blue Aurora told me that some econophysicists have been more willing to engage with the discipline recently, which is a good development). Despite these flaws, the tools of modern chaotic modeling are surely a promising area for the future of economics.
Evolutionary Economics
Keen’s discussion of this field is the first time I have been properly introduced to it, so I’ll be brief. Keen seems to think that evolutionary science is an appropriate and promising field, but one that lacks maturity. Many evolutionary concepts, such as adaptation and survival of the fittest, are surely applicable to capitalist firms and product evolution. Having said that, economics lacks the equivalent of the gene to ground the evolutionary approach, so evolutionary models are often forced to rely on analogy. Perhaps – and hopefully – the evolutionary school will be able to establish a coherent grounding in the future, but for now it is not a strong enough alternative to neoclassicism.
Post-Keynesian Economics
As this is the school Keen and I both most closely align with, you’ll not be surprised to hear the many advantages we think it has to offer: dealing with uncertainty; the relative lack of ideological commitment to any particular system; paying sufficient attention to money, debt and banking; more reality based models of the firm; freedom from reductionist constraints, and much more.
The main problem with this school is the lack of coherency. It’s almost defined as ‘not neoclassical economics’ (and, Keen might add, not Marxism either). Post-Keynesianism does not really have an agreed upon methodology, something that has worked against its status as a fully fleshed out alternative.
As a brief aside: personally I don’t see why class shouldn’t be adopted as the ‘official’ methodology of post-Keynesians. It is compatible with many of the core tenets of the school – for example, the idea that individual actions should be understood in their class context fits in with the post-Keynesian idea that microeconomics should have ‘macrofoundations’. Furthermore, there is also an element of ‘reclaiming classical economics’ to post-Keynesianism, and the classicals generally used class as a methodological starting point. Finally, many of the models – including Keen’s – already use classes as agents, so it seems like a natural progression.
Overall, it seems post-Keynesianism is simply less rigid and more reality-based than its neoclassical counterpart, and is more fleshed out than other alternatives, save a problem with a unified methodology.** Although I suggested that this methodology should be class, perhaps – and this is something to which Keen alludes – the lack of a rigid methodology is a strength rather than a weakness. Viewed from this angle, post-Keynesian economics can accept and develop concepts from all of the alternative fields (as well as institutional economics, which Keen doesn’t mention). This also solves the ‘divide and conquer’ problem – part of the reason for neoclassicism’s dominance seems to be the splits between its rivals, which as you can see are many. Generally, I think cooperation between the alternative schools of thought may be the key to building a robust alternative to the curiously resilient school of neoclassicism.
*Obviously there are strong divides within the Austrian school. Rothbardianism is barely worth exploring, while Hayek and Mises have some insights but were fairly tainted by the government-market dichotomy. As I have noted above, Schumpeter and Lachmann seemed the most willing to abandon certain pretenses and come to interesting conclusions.
**Actually, judging from the constructive comments on my marxist economics post, I have more faith in marxist economics than does Keen. However, I will need to explore it more fully before I can come to a definitive conclusion.
Yes, yes, I know I’m far from the first person to use the pun in the title.
Chapter 17 of Steve Keen’s Debunking Economics is a rejection of the Marxist Labour Theory of Value (LTV), and with it the most generally accepted analytical form of Marxism. However, Keen does not reject Marx’s ideas outright, instead suggesting and praising an alternative interpretation: one shorn of the LTV, the tendency for the rate of profit to fall, and hence the inevitability of socialism.
Note that this is my first formal introduction to the LTV, so I can’t claim to know the subject in much depth.
The LTV suggests that labour is the only true source of value, as it is the only factor of production that can ‘add’ more than its cost. This can be demonstrated by the simple observation that workers produce more than workers receive in wages. Marx called what workers produced ‘labour power’ and what workers were paid ‘necessary labour time.’ The difference between labour power and necessary labour time is the surplus, and the ratio of the surplus to the necessary labour time is the rate of Surplus Value (SV). The rate of profit, on the other hand, was the surplus over the necessary labour plus other inputs (capital).
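In Marx’s usual notation (constant capital c, variable capital v, surplus s), the two ratios just described can be written as:

```latex
% c = constant capital, v = variable capital (necessary labour), s = surplus
\[
  \text{rate of surplus value: } s' = \frac{s}{v},
  \qquad
  \text{rate of profit: } r = \frac{s}{c + v}
\]
```

For illustration, with v = 100, s = 100 and c = 300 we get s′ = 1 but r = 100/400 = 25%; raising c to 400 while s and v stay fixed leaves s′ unchanged and lowers r to 100/500 = 20% – the arithmetic behind the tendency described in the next paragraph.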
Because a similar distinction between ‘commodity power’ and ‘commodity’ could not be made for anything else, capital could not produce more than the value that went into it, but labour could. This meant that a higher ratio of machinery to labour would mean less SV for capitalists. Marx argued that over time, capitalists would replace labour with machinery (something they obviously like to do), so SV – and with it the rate of profit – would decline. This would lead to an attempt by capitalists to push down wages and eventually a socialist revolution.
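The definitions above can be put into a small sketch. The function names and figures here are invented for illustration, not taken from Marx or Keen:

```python
# Illustrative sketch of the LTV accounting described above.
# All figures are made up; only the ratios matter.

def rate_of_surplus_value(labour_power, necessary_labour):
    """SV rate: the surplus over necessary labour time."""
    surplus = labour_power - necessary_labour
    return surplus / necessary_labour

def rate_of_profit(labour_power, necessary_labour, capital):
    """Profit rate: the surplus over necessary labour plus capital."""
    surplus = labour_power - necessary_labour
    return surplus / (necessary_labour + capital)

# Workers produce 100 units of value but are paid 60.
print(rate_of_surplus_value(100, 60))

# The SV rate is unchanged, but as machinery replaces labour the
# rate of profit falls -- Marx's falling tendency in miniature.
for capital in (20, 40, 80):
    print(capital, rate_of_profit(100, 60, capital))
```

This is only the accounting identity, of course, not an argument that the premise behind it holds.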
Marx ran into some theoretical problems with this story. The most famous is the Transformation Problem. This arises because capitalists care not about the rate of SV, but the rate of profit. Marx had already assumed that the rate of SV was constant across industries. Following this logic, a more labour-intensive industry would have a higher rate of profit than a more capital-intensive one, and capitalists would continually move from capital-intensive to labour-intensive industries in search of higher profits. This complicates the story behind the tendency for the rate of profit to fall.
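The divergence at the heart of the Transformation Problem can be made concrete with a toy example (the figures are mine, purely illustrative): two industries with an identical rate of SV but different capital intensities end up with different rates of profit.

```python
# Two hypothetical industries, each with surplus equal to necessary
# labour (an SV rate of 1.0), but different capital intensities.

def profit_rate(necessary_labour, surplus, capital):
    return surplus / (necessary_labour + capital)

# Labour-intensive industry: lots of labour, little capital.
labour_intensive = profit_rate(necessary_labour=80, surplus=80, capital=20)

# Capital-intensive industry: same SV rate, far more capital.
capital_intensive = profit_rate(necessary_labour=20, surplus=20, capital=80)

print(labour_intensive)   # 0.8
print(capital_intensive)  # 0.2
```

With equal SV rates assumed, capital would flow toward the labour-intensive industry, which is precisely the tension Marx then had to resolve.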
Marx tried to solve this by arguing that capitalists do not secure only the SV accrued from their own industry, but that they are effectively stockholders in a joint enterprise that comprises the entire economy. Hence, SV and the rate of profit could both be constant between industries. He provided a numerical example to demonstrate that this was feasible: tables showing the various rates of profit, production and surplus, with the rates of profit and surplus uniform between industries. Marx’s example was mathematically correct – in that everything added up – but really it was nothing more than a snapshot of a particular point in time that may or may not have been reality.
At this point Keen channels Ian Steedman’s critique of Marx, which builds on Sraffa’s analysis in Commodities. Steedman starts with a Sraffian economy in which the various industries have to produce enough for the total inputs in the next period (i.e. enough to ‘reproduce’ the entire economy). He tries to convert the inputs and outputs into Marxian ‘values’ based on labour power and SV. From this, he derives output values and converts them into prices. However, he then runs into problems: what starts as an equilibrium destabilises, and rates of profit diverge, sometimes increasing.
So what happened? Steedman simply concluded that the entire idea of going from values to prices was bunk – in his hypothetical economy, it was possible to calculate prices independently of any ‘theory of value,’ as Sraffa did. Sraffians believe that the ‘transformation problem’ is nonsensical and production should not be analysed from any perspective of utility or value, but from physical quantities and the reproduction of industry. Note that this doesn’t necessarily imply that capital doesn’t exploit labour somehow; rather that Marx took a wrong turn in justifying this idea.
So it is hard to tell a consistent story that builds from labour value and ends up with a falling rate of profit and a uniform, economy-wide SV. Marx attempted to justify it with a special case snapshot, but Steedman showed there was no reason to expect the economy to be in or remain in this state, and no need to invoke ‘value’ in the analysis at all.
Furthermore, there is another significant problem with Marx’s theory of value in and of itself, one that he seemed to acknowledge elsewhere. The very premise that labour is the only source of value can be subjected to an incredibly simple, powerful critique.
Classical economists, including Marx, used to distinguish between two features of a commodity: the ‘exchange value’ – what it sold for on the market – and the ‘use value’ – how much it is worth to the buyer. Clearly, though, if this is true of commodities in general, then any commodity – a machine, say – can have a use value higher than its exchange value, and hence can be a source of SV for a capitalist; labour is not unique in this respect. This is a neat observation that can make Marxism a highly appealing analytical framework with which to analyse capitalism, albeit one modified so that socialism is no longer inevitable (even if it may be desirable on other grounds).
So, the LTV is quite hard to defend: Marx had to make some arbitrary assumptions that don’t seem to hold; his supposed equilibrium in which the rates of SV and profit would be constant turned out to be unstable; his premise contradicted his own distinction between use value and exchange value. Having said all this, Keen thinks that Marxism is stronger once it is rid of the LTV, and that Marx’s broader analysis of commodities and production is still a highly illuminating framework with which to analyse capitalism.
Chapter 16 of Debunking Economics is a short comment on the use of mathematics in economics. Keen offers a defence of maths itself, suggesting that it is neoclassical economists’ misuse of the tool, rather than the tool itself, that has caused the problems in economics today. He compares it to the story of a king who hears an awful tune played on the piano, and proceeds to shoot the piano.
Keen first recaps some of the mathematical mistakes he has discussed throughout the book, such as the problems with demand and supply curves. I won’t go over these again here – that would be a summary of a summary – but will instead briefly note a couple of general problems with economists’ use of mathematics.
First, it seems economists are not ready to acknowledge the limits of mathematics: mathematicians have known for some time that some equations simply cannot be solved, or are incredibly difficult to solve. Since economists are often dedicated to proving the existence of an equilibrium, they have to stick to overly simplistic analysis in which equations can definitely be solved. This causes them to rely heavily on linear models.
Second, Keen makes a pithy mathematical observation about emergent properties and reductionism. Reductionism can be characterised as reducing something down to its component parts. However, if these component parts are multiplied together – rather than added – as you aggregate up, you will see a substantial change in behaviour at the aggregate level. Hence, reductionism has clear and obvious limitations.
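A deliberately trivial sketch (my own construction, not Keen’s) of why the mode of aggregation matters: the same sequence of component-level shocks, combined additively versus multiplicatively, produces qualitatively different aggregate behaviour.

```python
# Alternating +10% / -10% shocks: fifty of each, "averaging" to zero.
shocks = [0.1, -0.1] * 50

# Additive aggregation: the shocks cancel exactly and the aggregate
# returns to where it started.
additive = 1.0
for s in shocks:
    additive += s
print(additive)   # 1.0

# Multiplicative aggregation: each +10%/-10% pair multiplies the
# aggregate by 1.1 * 0.9 = 0.99, so it decays even though the
# "average" shock is zero.
multiplicative = 1.0
for s in shocks:
    multiplicative *= (1 + s)
print(multiplicative)   # 0.99 ** 50, roughly 0.6
```

The behaviour of the whole is not simply the sum of the behaviour of the parts, which is the limit of reductionism Keen is pointing at.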
Overall, I agree with Keen that mathematics is useful in economics. Jevons put it most accurately when he said “[economics] must be mathematical, simply because it deals with quantities.” However, this shouldn’t mean quantifying things with erroneous measures – such as capital – just for the sake of mathematics. Equations have to have clearly defined parameters, can only be considered as good as their assumptions, and may not have clear implications. Such is the nature of modelling complex systems.
Update: I was going to leave this out for fear of digressing, but a couple of the comments reminded me of a quote Keen used to end the last chapter:
The real problem with my proposal for the future of economics departments is that current economics and finance students typically do not know enough mathematics to understand (a) what econophysicists are doing, or (b) to evaluate the neo-classical model (known in the trade as ‘The Citadel’) critically enough to see, as Alan Kirman  put it, that ‘No amount of attention to the walls will prevent The Citadel from being empty’. I therefore suggest that the economists revise their curriculum and require that the following topics be taught: calculus through the advanced level, ordinary differential equations (including advanced), partial differential equations (including Green functions), classical mechanics through modern nonlinear dynamics, statistical physics, stochastic processes (including solving Smoluchowski-Fokker-Planck equations), computer programming (C, Pascal, etc.) and, for complexity, cell biology. Time for such classes can be obtained in part by eliminating micro- and macro-economics classes from the curriculum. The students will then face a much harder curriculum, and those who survive will come out ahead. So might society as a whole.
This is from the (econo)physicist Joseph McCauley. It’s an interesting reversal of roles for economists, who often label critics as mathematically illiterate. Having said that, I think McCauley’s attitude shares some of the same characteristics that I hate to see in economists.
The Efficient Markets Hypothesis is pretty much indefensible. It is based on ridiculous assumptions: all investors have access to money at the same interest rate, have the same information and interpret information in the same way. It also has counterfactual implications: according to the EMH, markets would stay in equilibrium and move only when new information became available (which they don’t); people would not consistently outperform the market (which they do); and in its strongest form, it actually implies that bubbles can’t exist. The only defence its proponents seem to be able to muster is that it can’t predict anything (and sometimes, that economists full stop can’t predict anything). I could go on, and have, as have others. But what’s more important is exploring the many available alternative theories of finance. This is the purpose of chapter 15 of Steve Keen’s Debunking Economics. Keen goes through and assesses the major alternatives to the EMH one by one.
First, Keen mentions the obvious choice: behavioural finance. But he doesn’t really explore all the different heuristics and biases that people experience in financial markets – that would take, and has taken, entire books. Instead, he objects to the way that EMH proponents initially defined ‘rationality.’ Apart from basically meaning prophetic, it was based on a misreading of John von Neumann, the creator of Game Theory, who said that his definition of rational would only apply when games were repeated enough times. A game with an unlikely but large loss as one of the possible outcomes looks less appealing when you play it once than if you play it 1000 times, allowing the losses and gains to even out.
Hence, Keen touches on something that others have mentioned: the whole idea that behaviour is either ‘rational’ or ‘irrational’ is not a useful way to think about human behaviour. In fact, behavioural finance retains some unfortunate implications carried over from economics: that we need to reduce everything down to individuals making choices, and that if only people behaved how economists think they should, then financial markets would be efficient. Having said that, behavioural finance is a promising and useful field, though so far it is still in its early stages with no clear forerunning theories.
There are, however, a few theories which have been fully developed, and look incredibly interesting. The additional bonus is that they are complementary to each other (and to behavioural finance).
A fractal is a pattern that looks the same no matter how much you zoom in or out (see above). So it’s no surprise that one of the implications of the fractal markets hypothesis is that markets display similar patterns of behaviour over a day, month, business cycle or what have you. The fractal markets hypothesis models price movements as a function of previous price movements, which explains the emergent fractal pattern, and also means that stock markets will exhibit a tendency for volatility to produce more volatility, something contrary to the EMH.
A skeptical reader might suggest that this implies future price movements are easy to predict, if only one had the relevant formulas. But a system as complex as this would be highly dependent on initial conditions: just a tiny error in the initial values would soon produce results that were wildly off base. This is what happens with weather models, and is why weather predictions are more likely to be right the closer you are to the day. It is actually probable that calculating prices accurately would be computationally impossible using a fractal model.
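This sensitivity to initial conditions can be illustrated with the logistic map – a standard toy chaotic system, not the fractal markets model itself:

```python
# The logistic map x -> r*x*(1-x), a textbook chaotic system at r=4,
# used here only to illustrate sensitive dependence on initial values.

def trajectory(x0, steps, r=4.0):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = trajectory(0.400000, 30)
b = trajectory(0.400001, 30)   # initial error of one part in a million

# Early on the two trajectories agree closely; within a few dozen
# iterations they bear essentially no relation to one another.
for t in (0, 5, 25, 30):
    print(t, round(a[t], 4), round(b[t], 4))
```

A measurement error that small would be heroic for financial data, which is why a fractal/chaotic model can be descriptively right yet useless for point forecasts.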
But this raises another question: why is the stock market not more chaotic? This is explained by dropping one of the assumptions of the EMH: that investors trade with identical time horizons. Similarly to von Neumann’s observations, a trade that looks bad for a day trader, due to large potential losses at any one time, could look good for a long-term trader if it has net positive yields over a given period. Hence, introducing heterogeneity makes the model more realistic. A highly promising theory.
The Inefficient Markets Hypothesis (IMH)
Provocatively named by its originator Bob Haugen, who has written three books full of data contrary to the EMH. The IMH suggests that markets systematically overreact to price movements, and hence cause incredibly inefficient allocations of resources.
Haugen identified three sources of price movements: event-driven, error-driven and price-driven. The EMH assumes away the second two, but Haugen has calculated that the third one accounts for up to 95% of stock market volatility, because price movements create a self-perpetuating spiral as investors seek gains or cut losses. Haugen has concluded that the stock market in its current form is a serious drag on investment, and suggested reducing the length of the trading day or simply having one auction per day.
Physicists have recently turned their hand to economics, and, due to their strong empirical bent and the relative lack of data in economics, have been drawn to finance, where streams of data are readily available. Keen comments that much of Econophysics would perhaps be better named ‘Finaphysics.’
There has been a plethora of suggested approaches from the physicists, mostly applying their various chaotic theories to economics: earthquake models, power laws, the Fokker-Planck model. Keen does not go into much detail here because, again, it would take an entire book. He briefly goes over Didier Sornette’s earthquake model, which has been used to make explicit predictions about the future of stock markets. Keen directs the reader to this website, which supposedly tracks its predictions, though I cannot find anything after a quick look.
So there are many alternatives to the EMH, and each involves making explicit predictions and drawing on data, rather than handwaving ‘the market is volatile, we can’t do anything about it’ statements. Personally, I consider the fractal markets hypothesis the most promising framework, and it is also one that can easily incorporate elements from the other approaches. I look forward to future developments in all of these theories.
In chapter 14 of Debunking Economics, Steve Keen walks us through the macroeconomic model he has developed in recent years, and discusses the implications its conclusions might have for policy. The model is in its early stages, and Keen himself says there are “many aspects of the model of which [he is] critical.” Nonetheless, it is a promising start to developing an alternative to the dominant DSGE method.
Keen’s is a model of a pure credit economy, with three aggregated agents: workers, firms, and bankers. Instead of focusing on preferences, individuals and market clearing, it focuses on the flow of funds between different sectors. Bankers create their own money in the form of loans*, which at this stage they are only allowed to lend to firms. The firms pay the workers and the interest on the loans, whilst the bankers and workers consume the output of the firms.
The crucial sector here is, of course, banking, which few neoclassical models include explicitly, as they believe finance merely plays the role of intermediation between savers and borrowers. However, in Keen’s model the banks are central. He disaggregates them into several accounts: a vault in which to store notes; a safe into which interest is paid and out of which bankers are paid; a loan ledger; firm deposits and worker deposits. The flows between the various agents and accounts are then determined by some arbitrary coefficients, which Keen uses simply to determine whether the model will ‘work’ (i.e. not break down). Each flow (e.g. wages; consumption) is determined by a constant times a stock (e.g. firm’s deposits; worker’s deposits).
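The structure described – each flow a constant times a stock – can be sketched as a toy simulation. Everything here (coefficients, starting balances, the set of accounts) is an arbitrary placeholder of my own, in the same spirit as Keen’s deliberately arbitrary coefficients, and new lending and loan repayment are left out to keep the sketch minimal.

```python
# Toy pure-credit flow of funds: firms pay wages and interest,
# workers and bankers consume firms' output. Each flow is a
# constant times a stock, as in the model described above.

def simulate(steps=1000, dt=0.01):
    loans = 100.0     # loan ledger (constant here: no new lending)
    firms = 80.0      # firm deposits
    workers = 20.0    # worker deposits
    bankers = 5.0     # the bankers' 'safe'
    for _ in range(steps):
        interest = 0.05 * loans * dt     # firms -> safe
        wages = 3.0 * firms * dt         # firms -> workers
        w_cons = 26.0 * workers * dt     # workers -> firms
        b_cons = 1.0 * bankers * dt      # safe -> firms
        firms += w_cons + b_cons - wages - interest
        workers += wages - w_cons
        bankers += interest - b_cons
    return firms, workers, bankers

f, w, b = simulate()
# Each flow debits one account and credits another, so total
# deposits are conserved while their distribution shifts.
print(round(f + w + b, 4))   # 105.0, the initial total
```

The interesting questions – does the model break down? who ends up with the deposits? – can then be asked of the simulation rather than asserted.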
This is the point at which economists might scream ‘Lucas Critique,’ but Keen’s comment from an earlier chapter – that it is absurd to suggest any change in policy will have the effect of neutralising arbitrary parameters – applies. Furthermore, there is no theory that is policy-independent, so whilst we must examine the relationship between policy and reality, we cannot render our models immune to it by microfounding them. In any case, the model is in its early stages, and there is plenty of room for adding complexity.
Keen uses this basic model to explore what effect bank bailouts will have. Quelle surprise, bank bailouts have the effect of increasing loans slightly, and benefitting bankers, but don’t do much for the real economy. Conversely, bailing out firms and workers creates a better result for everyone except…the bankers! A small data point can be found in support of this in Australia, where the government bailed out everyone over 18 with $1000, and the economy has performed better than those where the banks were bailed out. Obviously there are a multitude of conflicting factors, but Keen’s hypothesis does not seem at all unreasonable when taking recent events as a whole.
The interesting thing about Keen’s model is that a ‘Great Moderation’ and a ‘Great Recession’ are simply two parts of the same debt-driven process, rather than a ‘black swan’ or some other such event. Debt to GDP rises exponentially in a period of relative tranquility, and this is followed by a huge crash and mass unemployment. A substantial part of this difference from the core DSGE models is created simply by adding banks as explicit agents.
Keen has since developed his model further – he has included ‘Ponzi’ lending, sticky wages/prices (which actually stabilise the economy) and a variety of other factors. There is plenty of scope for adding more to the model, such as an exogenously set interest rate with a central bank, but for now the core alone seems to be able to generate behaviour that closely resembles that of a capitalist economy: one prone to cyclical breakdown and intermittent financial crises, not due to any particular ‘friction,’ but due to the inherent characteristics of the system. This should be enough to get any empirically driven economists to pay attention, whether they agree or disagree with the mechanics of the model.
As many readers of this blog will know, Steve Keen is generally the economist credited with best foreseeing and warning about the 2008 financial crash. The 13th chapter of his book is dedicated to showing why his framework foresaw it, and what he did to warn of the coming crisis.
I have seen a few people saying that Keen didn’t really predict the crisis, and what predictions he did make were ‘chicken little’ predictions – repeating “there will be a crisis” until there was one. This is simply not true.
He certainly had the appropriate framework to foresee the financial crisis. His 1995 paper on Minsky and financial instability contains a model prone to endogenous fluctuations, and he concludes that any period of tranquility in a capitalist economy should not be accepted as anything other than a lull before the storm.
The key ingredient in Keen’s framework is, of course, private debt. Since banks create credit ‘out of nothing,’ new private debt adds to nominal aggregate demand. It follows from this that aggregate demand is current income plus the change in debt. I will quote Keen’s numerical example in full to explain why:
Consider an economy with a GDP of $1000 billion that is growing 10% per annum, where this is half due to inflation and half due to real growth, and which has a debt level of $1250 billion that is growing at 20% per annum. AD will therefore be $1250 billion: $1000 billion from GDP, and $250 billion from the increase in debt.
Imagine that the following year, GDP continues to grow at the same 10% rate, but debt growth slows down from 20% per annum to 10%. Demand from income will be $1,100 billion – 10% higher than the previous year – while demand from additional debt will be $150 billion.
Aggregate demand this year will therefore be $1250 billion – exactly the same as the year before. However, since inflation is running at 5%, that will mean a fall in output of 5% – a serious recession. So just a slowdown in the rate of growth of debt can be enough to trigger a recession.
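The arithmetic of Keen’s example can be checked in a few lines:

```python
# Reproducing Keen's numerical example: aggregate demand (AD) as
# GDP plus the change in debt.

gdp, debt = 1000.0, 1250.0   # $bn

# Year 1: debt grows 20%, so $250bn of new debt adds to demand.
ad_year1 = gdp + debt * 0.20
print(ad_year1)   # 1250.0

# Year 2: GDP grows 10%, but debt growth slows from 20% to 10%.
gdp *= 1.10       # $1,100bn of demand from income
debt *= 1.20      # $1,500bn: the debt level entering year 2
ad_year2 = gdp + debt * 0.10
print(ad_year2)   # 1250.0 -- flat in nominal terms

# With inflation at 5%, flat nominal demand means roughly a
# 5% fall in real terms: a serious recession.
```

Note that debt is still growing in year 2; a mere slowdown in its growth is enough to stall demand.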
For an economy to grow, either income must increase or private debt must accelerate (i.e. increase at an increasing rate); this means that even a slowdown in the rate at which debt is decreasing can create a recovery (as with the US in 2010). The higher the level of private debt relative to income, the more dependent the economy becomes on new debt, and the more vulnerable it is to even a mild slowdown in the rate of change of debt. Thus, in the mid 2000s, when Keen looked up the levels of private debt in developed economies, he was taken aback by the exponential increase:
At this point he went public – most of the evidence for his warning of a coming crisis is from the blog he started, and the monthly reports he released there, tracking the level of private debt and explaining why it mattered. These reports first analysed the Australian, and then the US economy. He also spoke at a number of events, as well as a few TV and radio appearances which I cannot find online (although the media didn’t really start to take notice until the crisis began).
As a brief aside, I’ve seen a few people mention his failed prediction of an Australian housing crash, and his subsequently having to take a long hike. It is true that he got this one wrong, but there is quite an easy explanation: the government injected a large amount of money into the housing market in the form of first-time buyer grants. Coupled with Australia’s resource boom and the demand from China, this has kept the economy afloat so far.
So the charge that Keen did not predict the crisis, or simply shouted ‘there will be a crisis’ for 10 years until there was one, is false. He has a clear analytical framework that has performed incredibly strongly empirically, both before and throughout the crisis, and he got the dates approximately right (he said 2006). In my next post I will take a more in-depth look at his models and their implications for where we are now.
Chapter 12 of Steve Keen’s Debunking Economics is a critique of a pervasive neoclassical interpretation of the Great Depression (and, by extension, the Great Recession): that the severity of the downturn can be attributed to contractionary monetary policy at the Federal Reserve. As you’d expect, this doesn’t mesh well with the endogenous money theory to which Keen (and I) subscribe, which says that the money supply is largely controlled by private banks.
Keen begins by cataloging Ben Bernanke’s evaluation of the Great Depression, which built on Friedman and Schwartz’ A Monetary History of the United States. From the quotes Keen presents, Bernanke’s offering really seems to be neoclassical economics at its reality denying worst:
the failure of nominal wages (and, similarly, prices) to adjust seems inconsistent with the postulate of economic rationality
[On Minsky’s FIH] I do not deny the possible importance of irrationality in economic life; however it seems that the best research strategy is to push the rationality postulate as far as it will go.
Having said that, the case that the Federal Reserve exacerbated the Great Depression by contracting the money supply does have some substance to it, and is worth discussing.
There is no denying that the Federal Reserve contracted base money at the onset of the Great Depression. But it is a leap to suggest this was the primary cause of the prolonged slump – firstly, the contraction of base money was less than 2% on average. Secondly, there has been one other occasion where base money has contracted nominally (1948-50), and six other occasions where it has contracted when adjusted for inflation. All of these – bar one – were correlated with recessions, but none were correlated with depressions.
Keen presents a graph showing a complete lack of correlation between unemployment and M0:
Instead, there is a much clearer correlation with broader, credit-based measures of the money supply:
From this, it’s quite clear that the primary cause of the Great Depression was a collapse in aggregate demand, caused by a contraction of credit by private banks. Bernanke and other neoclassical economists are reluctant to accept this conclusion, because it conflicts with the neoclassical vision of the economy as inherently stable, bar perhaps a few frictions, and also renders invalid many of their preferred modelling techniques. For someone like Friedman, the conclusion is simply unacceptable, because it conflicts with his insistence that ‘the government’ is the cause of most significant problems.
The money multiplier, again
Keen catalogues the evidence against the money multiplier story that lies behind Bernanke and Friedman’s interpretation of the Great Depression. In this story, the Central Bank (CB) expands reserves, and private banks then make loans, keeping a fraction of the reserves so that they can accommodate demand for money from customers. These loans are then deposited, and a fraction is kept, and this process continues until no more can be lent out. The total amount of deposits created is the initial injection multiplied by the inverse of the fraction banks are required to hold as reserves.
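The multiplier story itself is just a geometric series; a minimal sketch of my own, purely to pin down the arithmetic being criticised:

```python
# The textbook money-multiplier process: an initial injection of
# reserves is re-lent round after round, with a fraction held back
# as reserves each time.

def multiplier_deposits(injection, reserve_ratio, rounds=1000):
    total, lent = 0.0, injection
    for _ in range(rounds):
        total += lent                  # each loan becomes a deposit
        lent *= (1 - reserve_ratio)    # all but the reserve is re-lent
    return total

# With a 10% reserve requirement, $100 of reserves becomes roughly
# $1000 of deposits: the inverse of the reserve ratio.
print(round(multiplier_deposits(100, 0.10), 2))   # 1000.0
```

Keen’s point, of course, is that this sequence runs backwards in practice: loans create deposits first, and reserves follow.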
What are the problems with this story? First, the observed reality in banks is that they create loans and deposits simultaneously, and as such do not require reserves before they lend. Second, the change in credit and broader measures of the money supply precedes changes in reserves, rather than the other way around. Third, the failure of monetarism – a disastrous policy used in the 1980s where the CB tried to stabilise money growth, but consistently overshot their target. Fourth, Bernanke’s increases in base money during 2008, which resulted in little to no change in economic activity:
For a more in-depth treatment of the money multiplier by Keen himself, see here.
It is worth noting that it is strictly true that the CB controls base money, and as such some might interpret endogenous money as merely a policy choice on the CB’s part. But the fact is that endogenous money reflects the reality of capitalism: firms need capital before they make sales, and banks must accommodate this to keep the economy moving. The CB – though it has some discretion – largely has to play the role of passively accommodating this endogenous activity, otherwise capitalism will not work.
Onto the Great Recession
Keen ends the chapter by documenting a few papers that have attempted to understand the Great Recession – McKibbin and Stoeckel (2009); Ireland (2011); and Krugman and Eggertson (2011). The first two, unfortunately, do not even attempt to create a role for private debt. Instead, the recession is due to a series of external shocks – such as changes in preferences and technology – whilst its length can be attributed to factors such as wage and price rigidity, which get in the way of capitalism’s underlying tendency to stability.
Krugman and Eggertson’s paper, on the other hand, commendably notices how important private debt seems to be, but only gets as far as modelling it as a special case, in which ‘patient’ agents save, and ‘impatient’ agents borrow. In some ways this observation is true – when money is paid back, it disappears into extremely ‘patient’ agents: banks, who have an MPC of 0. However, banks create rather than save this money, and hence it is added to aggregate demand. This process is, unfortunately, something Krugman says he “just doesn’t get.”
Ultimately, Krugman’s paper is the same story as the others: a one-off event, imperfection, special case, creates a problem in an otherwise stable economy. All three papers fit Bob Solow’s characterisation of New Keynesian models – they fit the data better because economists add “imperfections…chosen by intelligent economists to make the models work better.” All briefly reconsider building new theories from scratch, before simply reasserting the neoclassical core. There really needs to be more soul-searching from economists than this.