This is the final part in my series on how the financial crisis is relevant for economics (here are parts 1, 2, 3, 4, 5 & 6). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline, with the quality of the arguments increasing as the series goes on. This post discusses probably the strongest claim that economists can make about the crisis: they do understand it, and any previous failures were simply due to inattention or misapplication, rather than fundamental problems with the theory itself.
Argument #7: “Economists had the tools in place, but we overspecialised and systemic problems caught us off guard.”
Raghuram Rajan was probably the first to take this sort of line, arguing that overspecialisation prevented economists from using the tools they had to foresee and deal with the crisis. But while Rajan’s piece also made a number of other criticisms of economics, over time the discipline seems to have reasserted this argument more strongly: not too long ago, Paul Krugman argued that although “few economists saw the crisis coming…basic textbook macroeconomics has performed very well”. Similarly, Tim Harford claimed at an INET conference last year that the tools necessary to understand the crisis already existed in mainstream economics, and the problem was simply one of knowing when and how to use them. He compared financial crises to engineering disasters, which were understandable using current knowledge but happened nonetheless, due to negligence or oversight on the part of the engineers.
So how true is this claim? Certainly, a number of economic models exist for understanding things like panics, liquidity problems and moral hazard. The best known of these are the Diamond-Dybvig (DD) model of bank runs – which shows what happens when banks have liquid liabilities (such as demand deposits) which must be available at any time, but illiquid assets (such as loans) which are not fully convertible to cash on demand – and the Akerlof-Romer (AR) model of financial ‘looting’, which shows that deposit guarantees may create moral hazard as investors gamble with other people’s money. If you combine tools like these, which help us understand the financial sector, with tools like IS/LM, which tell us how to escape a downturn once it happens, in theory you have a pretty solid set of tools for dealing with the recent crisis.
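The self-fulfilling logic behind DD can be shown with a toy calculation (the function and numbers below are my own illustration, not the model’s actual setup): the bank is sound if depositors stay put, but fire-sale losses on early liquidation mean that past some threshold of withdrawals, running becomes the rational response to everyone else running.

```python
# Toy sketch of the Diamond-Dybvig logic. Illustrative numbers, not the
# original model's calibration: the bank holds illiquid assets that pay
# 1.5 per unit at maturity but recover only 0.8 per unit if sold early.

def bank_survives(withdraw_frac, deposits=100.0, early_value=0.8, mature_value=1.5):
    """Return True if the bank can pay early withdrawers and still
    cover the remaining (patient) depositors' claims."""
    cash_needed = withdraw_frac * deposits
    assets_liquidated = cash_needed / early_value   # fire-sale discount
    if assets_liquidated > deposits:                # not enough assets to sell
        return False
    remaining = (deposits - assets_liquidated) * mature_value
    patient_claims = (1 - withdraw_frac) * deposits
    return remaining >= patient_claims

# With few withdrawals the bank is fine; past a threshold it fails, so
# each depositor's best move depends on what others do - a run can be
# a self-fulfilling equilibrium.
print(bank_survives(0.1))   # True
print(bank_survives(0.9))   # False
```

With these particular numbers the tipping point sits at roughly 57% of depositors withdrawing; below it everyone who waits is paid in full, above it waiting guarantees a loss.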
The first objection I have to these models is that many of their insights could be considered trivial, or at least common sense. The DD model came to the conclusion that deposit insurance might be a helpful way to prevent bank runs, which is hardly a revelation considering it came 50 years after FDR and the general public figured out the same thing. The AR model came to the conclusion that deposit insurance and limited liability might create perverse incentives as banks gamble ‘other people’s money’, which again must have been obvious to the policymakers who put Glass-Steagall and other financial regulations in place. Perhaps this point is a little harsh, and I don’t want to overstate it: on the whole, these papers are asking important questions, and in the case of AR they answer them well. Nevertheless, there’s no point in economic theory if it can’t tell us things we didn’t already know. Even the idea that central banks should provide emergency liquidity to banks in trouble is quite obvious, and it predates modern economic theory by a good while.
However, this is not the most important point. The issue I have with these models is that in many of them everything interesting happens outside the model. In Krugman’s favoured IS/LM, a ‘crisis’ is represented by a simple shift in the IS curve, which in English means that a decline in production is caused by…a decline in production. Where this decline came from is presumably a matter for outside the model. Even the most sophisticated macroeconomic models often take a similar tack, merely describing what happens when the economy suffers from a shock, without exploring possible causes of the shock. Likewise, the DD model suggests bank runs happen because everyone panics, but what causes these panics is not explored: depositors’ expectations are assumed to be exogenous, whether fixed or following a stochastic (random) pattern. Yet studies such as Mishkin (1991) find that bank runs generally follow periods of stress elsewhere in the economy, a fact which DD simply cannot capture.
Economic models are narrowly focused like this because they are generally designed to answer straightforward questions about causality: does the minimum wage cause unemployment; does expansionary fiscal policy cause growth; does a mismatch between illiquid assets and liquid liabilities cause bank runs. But the crisis was an endogenously generated process in which different aspects of the economy – the housing market, the financial sector, government policy – combined to create something bigger than the sum of its parts, and in which it is not possible to isolate a single cause. Consider: the collapse of Lehman Brothers may have triggered the worst of the crisis, but was it really to blame? The economy was already in a fragile place due to systemic trends that can’t necessarily be traced to a single law, institution or actor. Just as the assassination of Franz Ferdinand cannot alone explain World War 1, we have to look beyond the immediate trigger and focus on the general if we truly want to understand what happened.
To sum up, the economists above want to argue that they are only culpable insofar as they overspecialised and failed to focus on the right areas in this particular instance. However, this was not just personal myopia; their chosen methodology means they lack the tools to do otherwise. A model of one aspect of the economy which takes the effect of other areas as exogenous will fail to detect potential positive feedback loops and emergent properties. A model which takes the crisis itself as an exogenous ‘shock’ is even worse, and in many ways is hardly a model of the crisis at all, since it offers no understanding of why crises might happen in the first place. Are there alternatives? I have previously written about how post-Keynesian and Marxist models offer more comprehensive understandings of the financial crisis and antecedent decades; I shan’t repeat myself here. Other promising areas include network theory, evolutionary economics and Agent-Based Modelling. What all of these share is that they consider the system as a whole instead of focusing on isolated mechanics.
I see the crisis in economics as a shock (!!) which hits macroeconomics hard and reverberates throughout the discipline. Regardless of the pleas of some, such events can be seen coming, and they cannot be handwaved away as part of an overall upward trend. And even if individual economists are not in control of policy, key economists have substantial influence, not to mention the theories and ideas in economics as a whole. Recent developments in macroeconomics still leave a lot to be desired, while previously existing tools suffer from similar problems: a lack of holism; a wooden insistence on microfoundations; and an attempt to understand everything in terms of simplistic causal links, often relative to a frictionless baseline. Finally, although many areas of economics are not directly indicted by the crisis, many of them share key problems with macroeconomics, and as such the crisis should prompt at least a degree of introspection throughout the discipline.
This is the second part of my response to criticisms of Keen’s Debunking Economics. In my previous post* I covered some of the fundamental objections Keen had to neoclassical theory. Here, I will cover Keen’s exploration of alternatives: first, a brief note on dynamics and chaos theory; then a discussion of Keen’s own models; finally, his dismissal of the Marxist Labour Theory of Value (LTV).
Dynamics and Equilibrium
Many economists have argued that Keen’s contention that economists do not study dynamics is false. I agree. Keen does not really address the DSGE conception of equilibrium, which is very different from the typical conception of a steady state. An equilibrium in an economic model occurs when all agents have specific preferences, endowments etc. and take the course of action which suits them best based on this. This can be subject to incomplete information, risk aversion or various other ‘frictions.’ These agents intermittently interact in market exchanges, during which all markets clear. Basically, ‘solving for equilibrium’ means you specify the actions and characteristics of economic agents, then see what happens when markets clear. It’s entirely possible that the subsequent model could exhibit chaotic behaviour.**
Now, there are obviously many problems here. The fact is that the overwhelming majority of people who learn economics will not touch this. They will instead be faced with static-style equilibrium models, which they have been told are unrealistic but ‘elucidate certain principles.’ This is nonsense – they elucidate nothing, and simply need to be thrown out. Nevertheless, many policymakers, regulators and business economists are working under this framework. Furthermore, even those economists who have gone beyond this level seem to have the concepts deeply ingrained into their minds, and regard them as useful.
However, even the more advanced ‘dynamic’ equilibrium clearly has problems. First, the presence of irreducible uncertainty – which, as far as I can see, is a concept entirely misused by economists – means that it is virtually certain not all expectations will be fulfilled, while equilibrium assumes they will be. Second, ‘fulfilled expectations’ is far stronger than economists seem to think – for example, it eliminates the possibility of default! Third, the assumption that all markets clear is obviously false, otherwise supermarkets wouldn’t throw out old food. Anyway, I digress: Keen could easily address all of these criticisms, but for some reason he doesn’t. This is indeed a shortcoming of his book.
First, a brief note on Keen’s model of firm behaviour: it seems to make the error of maximising the growth rate of profits, rather than profits themselves. I am not sure if this has been fixed. Nevertheless, I regard it as subsidiary to Keen’s main criticisms. His most important model is the Minsky Model of banking and the macroeconomy.
Keen recently had a debate over his Minsky Model with the Cambridge economist Pontus Rendahl. Andrew Lainton has a post on this, along with a contentious discussion with Rendahl, over on his blog. In my opinion, Rendahl – though overly dismissive in tone, and not causing as many problems for Keen as he seemed to think – highlighted a number of issues with Keen’s model in its current form:
(1) Say’s Law holds. In Keen’s model, income is simply a function of the capital stock, and there is no role for demand.
(2) Keen’s model is generally set in continuous time and uses ordinary differential equations (ODEs), but it contains an equation defined over discrete time intervals. Such equations cannot be solved in the same way, so Keen’s methodology is inconsistent.
(3) There is, as of yet, no role for expectations in Keen’s model.
(4) Rendahl argues that DSGE models are also Stock Flow Consistent (SFC). I think he is correct – see, for example, his own paper, which has agents accumulating stocks of money from previous periods. The major differences between SFC and DSGE appear to be: a lack of microfoundations; continuous functions; use of classes; market clearing; fulfilled expectations; and, of course, with Keen’s, the role of banks and private debt.
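Point (2) is worth unpacking, since it is the one outright logical problem on the list. A discrete-time update and a continuous-time ODE imply different paths for the same coefficient, so a stray difference equation inside an ODE model silently changes the dynamics unless its coefficient is converted. A minimal illustration (a single growth equation of my own, nothing from Keen’s actual model):

```python
# The difference equation x[t+1] = (1 + k) * x[t] and the ODE dx/dt = k*x
# give different paths for the same k, which is why mixing the two forms
# in one model is inconsistent unless the rate is converted.
import math

k = 0.1
T = 10

x_disc = 1.0
for _ in range(T):
    x_disc *= (1 + k)          # discrete compounding: (1 + k)**T

x_cont = math.exp(k * T)       # exact ODE solution: e**(k*T)

print(round(x_disc, 4))        # → 2.5937
print(round(x_cont, 4))        # → 2.7183

# Converting the discrete rate to its continuous equivalent,
# k_cont = ln(1 + k_disc), makes the two agree:
k_conv = math.log(1 + k)
print(round(math.exp(k_conv * T), 4))  # → 2.5937
```

The gap here is small, but in a nonlinear model an unconverted coefficient can change the qualitative behaviour, not just the numbers.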
In terms of assumptions, I’d say Keen’s model is in the ‘heuristic’ stage – it’s not completely right and needs development. The criticisms are essentially things that have not yet been added to the model, rather than conceptual or logical problems (save the inconsistent equation). This means they can be added as it develops. However, if the model makes good predictions, it may prove to be useful, though that should never serve as a barrier to making it more realistic and comprehensive.
Labour Theory of Value
If neoclassical economists want a lesson in how to respond to a critique you strongly disagree with without being vitriolic and dismissive, they need look no further than the Marxist responses to Keen’s critique of the LTV. This is all the more ironic given Keen’s willingness to dismiss Marxists as illogical and dogmatic.
Keen’s critique is threefold; I will discuss each part briefly, followed by the Marxist responses.
The first critique is Bose’s commodity residue. The idea is that no matter how far you go back in time, disaggregating a commodity into what was required to produce it, there will always be a commodity residue left over. Hence, no commodity can be reduced to merely labour-power. The problem here is the projection of capitalism into all of history. For Marx, a commodity only resulted from capitalist production. However, if you go back in time you will find non-capitalist production, and eventually you will be able to reduce everything into land/natural resources and labour, which Marx never defined as commodities. Having said this, one question remains: can the natural resources or land not be a source of surplus value? Could this surplus value not have been transferred into capitalist commodities?
Second is Ian Steedman’s Sraffian interpretation of Marx. Simply put, it seems Steedman had his interpretation wrong – Marx’s is not a physical, equilibrium system based on determining factor prices. This is something that actually struck me on the first read of Keen’s LTV chapter: Steedman simply converts Marx into Sraffian form without much justification. If Marx did not intend this to be the case, the criticism is defunct from the outset: Steedman’s model is simply a misinterpretation of Marx, and it is not even necessary to go into the maths. There is, of course, a possibility that this is an overly superficial interpretation and I am mistaken.
The third criticism is that Marx’s treatment of use-value and exchange-value is inconsistent: properly applied, it implies that a commodity’s use-value can exceed its exchange-value, and hence be a source of surplus value. Now, I remain unsure of this area so I might be wrong in my exposition, but here is my attempt to explain the Marxist response: (warning: the following paragraph will contain a vast overuse of the word ‘value’ in what is already a necessarily convoluted explanation).
Marxists contend that Keen’s is a misinterpretation of use-value, which is simply a binary concept and not quantifiable. Something may have any number of uses which give it a use-value, which is a necessary condition for it to have an exchange-value. However, the exchange-value cannot ‘exceed’ the use-value, because the use-value cannot be measured. It is in this sense that labour is unique in Marx’s conception of capitalism: its specific use-value is the production of surplus for capitalists. It is the only ‘factor of production’ that can do this – after all, capital ultimately reduces to past labour value. If production could take place without labour, prices would fall to zero and, while Marx would be refuted, nobody would care because the problem of economic scarcity would vanish. Hence, surplus production and profits depend on labour producing more than it is rewarded.
I remain neither convinced of the LTV, nor of its critics.*** For me, most discussion of the LTV appears to rest on the LTV as a premise. The debate is split into people who accept the LTV and people who not only reject it, but see no need for it. For this reason, critics seem to misrepresent and misinterpret it continually – a common theme is to try to abstract from historical circumstance, when it’s clear Marx emphasised that his analysis only applied under capitalism, which he saw as a particular social relation. For me, the main issue remains the same as it is for other theories: what is the falsification criterion for the LTV?
Overall, a couple of points stand out for post-Keynesians for their own theories, both of value and of economic systems. The first is that DSGE models are probably not that different from some heterodox models, and identifying the actual differences is crucial to opening up a dialogue between mainstream and heterodox economists.
The second is that I would caution left-leaning economists not to be too hasty to dismiss Marxism as dogmatic (in my experience Marxists are anything but), or to avoid it simply out of fear of being dismissed themselves. In my opinion, the LTV – while not entirely convincing – is a cut above the neoclassical ‘utility’ conception of value, and I’d sooner be equipped with Marxist explanations of a crisis when trying to understand capitalism. This isn’t to say post-Keynesians haven’t thought about Marx; it’s more that the issue is often approached with a degree of bias. At the very least, the distinction between use-value and exchange-value is something that befits post-Keynesian analysis well.
So, as far as theory goes, this is the last post on Keen’s book. I will, however, do some closing notes from a more general perspective. As I said before, if there are any other criticisms of Keen that I have not covered, feel free to discuss them in the comments.
*It is worth noting that in my previous post I was somewhat – though not totally – off the mark with my discussion of Keen on demand curves. The Gorman conditions for the existence of a representative agent do indeed have many similarities to the SMD theorem, and conceptually they are dealing with the same issue: aggregation of preferences. Nevertheless, Keen weaves between the two, when it would have been more accurate to note that economists have used two (main) different methods to get around the problem, and to critique them separately. Similarly, though Keen’s quote from MWG was incorrect, it is true that economists such as Samuelson have used the assumption of a dictator to aggregate preferences. However, the specific one Keen presented was not right.
**However, that does not make it the same as chaos theory.
***For me, claims that worker ownership of production would be desirable don’t really rest on the LTV; instead, the simple point is that workers could employ capital themselves.
In chapter 14 of Debunking Economics, Steve Keen walks us through the macroeconomic model he has developed in recent years, and discusses the implications its conclusions might have for policy. The model is in its early stages, and Keen himself says there are “many aspects of the model of which [he is] critical.” Nonetheless, it is a promising start to developing an alternative to the dominant DSGE method.
Keen’s is a model of a pure credit economy, with three aggregated agents: workers, firms, and bankers. Instead of focusing on preferences, individuals and market clearing, it focuses on the flow of funds between different sectors. Bankers create their own money in the form of loans*, which at this stage they are only allowed to lend to firms. The firms pay the workers and the interest on the loans, whilst the bankers and workers consume the output of the firms.
The crucial sector here is, of course, banking, which few neoclassical models include explicitly, since they treat finance as mere intermediation between savers and borrowers. However, in Keen’s model the banks are central. He disaggregates them into several accounts: a vault in which to store notes; a safe into which interest is paid and out of which bankers are paid; a loan ledger; firm deposits; and worker deposits. The flows between the various agents and accounts are then determined by arbitrary coefficients, which Keen uses simply to determine whether the model will ‘work’ (i.e. not break down). Each flow (e.g. wages; consumption) is determined by a constant times a stock (e.g. firms’ deposits; workers’ deposits).
This is the point at which economists might scream ‘Lucas Critique,’ but Keen’s comment from an earlier chapter – that it is absurd to suggest any change in policy will have the effect of neutralising arbitrary parameters – applies. Furthermore, there is no theory that is policy-independent, so whilst we must examine the relationship between policy and reality, we cannot render our models immune to it by microfounding them. In any case, the model is in its early stages, and there is plenty of room for adding complexity.
Keen uses this basic model to explore what effect bank bailouts will have. Quelle surprise, bank bailouts have the effect of increasing loans slightly, and benefitting bankers, but don’t do much for the real economy. Conversely, bailing out firms and workers creates a better result for everyone except…the bankers! A small data point in support of this can be found in Australia, where the government bailed out everyone over 18 with $1000, and the economy has performed better than those where the banks have been bailed out. Obviously there are a multitude of conflicting factors, but Keen’s hypothesis does not seem at all unreasonable when taking recent events as a whole.
The interesting thing about Keen’s model is that a ‘Great Moderation’ and a ‘Great Recession’ are simply two parts of the same debt-driven process, rather than a ‘black swan’ or some other such event. Debt to GDP rises exponentially in a period of relative tranquility, and this is followed by a huge crash and mass unemployment. A substantial part of this difference from the core DSGE models is created simply by adding banks as explicit agents.
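The ‘tranquil’ phase at least is visible in ordinary debt-ratio arithmetic (this is textbook compounding, not Keen’s model, and the rates below are made up for illustration): if debt compounds at the interest rate plus net new borrowing while income grows more slowly, the debt-to-GDP ratio climbs quietly for decades before anything visibly breaks.

```python
# Standard debt-ratio arithmetic, illustrative rates of my own choosing:
# each year the debt stock carries interest at r and is topped up by new
# borrowing worth b of GDP, while GDP itself grows at g < r.
r, g, b = 0.06, 0.03, 0.02   # interest, income growth, new borrowing/GDP
d = 0.5                       # initial debt-to-GDP ratio

for year in range(30):
    d = d * (1 + r) / (1 + g) + b

print(round(d, 2))   # → 2.12, i.e. debt has gone from 50% to ~212% of GDP
```

Nothing in this arithmetic ‘causes’ the crash, of course; the point of Keen’s model is precisely to endogenise the turning point, which simple compounding cannot do.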
Keen has since developed his model further – he has included ‘Ponzi’ lending, sticky wages/prices (which actually stabilise the economy) and a variety of other factors. There is plenty of scope for adding more to the model, such as an exogenously set interest rate with a central bank, but for now the core alone seems to be able to generate behaviour that closely resembles that of a capitalist economy: one prone to cyclical breakdown and intermittent financial crises, not due to any particular ‘friction,’ but due to the inherent characteristics of the system. This should be enough to get any empirically driven economists to pay attention, whether they agree or disagree with the mechanics of the model.