Posts Tagged Stock-Flow Consistent Models

How Not to Do Macroeconomics

A frustratingly common experience for critics of ‘mainstream’ economics is the assertion that they are criticising the economics of bygone days: that the phenomena they accuse economists of ignoring are, in fact, at the forefront of economics research, and that the critics’ ignorance demonstrates that they are out of touch with modern economics – and therefore not fit to criticise it at all.

Nowhere is this more apparent than in macroeconomics. Macroeconomists are commonly accused of failing to incorporate financial dynamics such as debt, bubbles and even banks themselves, but while this was true pre-crisis, many contemporary macroeconomic models do attempt to include such things. The renowned economist Thomas Sargent charged that such criticisms “reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished.” So what has it accomplished? One attempt to model the ongoing crisis using modern macro is this recent paper by Gauti Eggertsson & Neil Mehrotra, which tries to understand secular stagnation within a typical ‘overlapping generations’ framework. It’s quite a simple model, deliberately so, but it helps to illustrate the troubles faced by contemporary macroeconomics.

The model

The model has only three types of agents: young, middle-aged and old. The young borrow from the middle-aged, who receive an income, some of which they save for old age. Predictably, the model employs all the standard techniques that heterodox economists love to hate, such as utility maximisation and perfect foresight. However, the interesting mechanics here are not in these; instead, what concerns me is the way ‘secular stagnation’ itself is introduced. In the model, the limit on how much young agents are allowed to borrow is exogenously imposed, and deleveraging/a financial crisis begins when this amount falls for unspecified reasons. In other words, in order to analyse deleveraging, Eggertsson & Mehrotra simply assume that it happens, without asking why. As David Beckworth noted on Twitter, this is simply assuming what you want to prove. (They go on to show that similar effects can occur due to a fall in population growth or an increase in inequality, but again, these changes are modelled as exogenous.)
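The core mechanic can be sketched numerically. With log utility, the middle-aged save a fixed fraction of their income net of debt repayments, the young borrow the present value of the exogenous debt limit, and the loan market clears at a real rate that depends directly on that limit. The toy function below is my own illustrative reconstruction of this logic – the parameter values and the function itself are assumptions for exposition, not taken from the paper:

```python
def equilibrium_gross_rate(d_new, d_old, beta=0.96, income=1.0):
    """Toy loan-market clearing in the spirit of Eggertsson & Mehrotra.

    Middle-aged agents (log utility, discount factor beta) save a
    fraction beta/(1+beta) of income net of their own debt repayment
    d_old; young agents borrow the present value of the exogenous
    debt limit d_new. Equating loan supply and demand gives the
    gross real interest rate 1+r. All numbers are illustrative.
    """
    return (1 + beta) / beta * d_new / (income - d_old)

# An assumed fall in the debt limit (the "deleveraging shock") drags
# the equilibrium real rate down -- the model's secular stagnation:
print(equilibrium_gross_rate(d_new=0.3, d_old=0.3))  # before the shock
print(equilibrium_gross_rate(d_new=0.2, d_old=0.3))  # after: a lower rate
```

Note that nothing in the sketch explains *why* `d_new` falls – it is simply dialled down by hand, which is exactly the criticism above.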

It gets worse. Recall that the idea of secular stagnation is, at heart, a story about how over the last few decades we have not been able to create enough demand with ‘real’ investment, and have instead relied on speculative bubbles to push demand to an acceptable level. This was certainly the angle from which Larry Summers and subsequent commentators approached the issue. It’s therefore surprising – ridiculous, in fact – that this model of secular stagnation doesn’t include banks, and has only one financial instrument: a riskless bond that agents use to transfer wealth between generations. What’s more, as the authors state, “no aggregate savings is possible (i.e. there is no capital)”. Yes, you read that right. How on earth can this model explain why there is not enough ‘traditional’ investment (i.e. capital formation), and why we need bubbles to fill that gap, if it can have neither investment nor bubbles?

Naturally, none of these shortcomings stop Eggertsson & Mehrotra from proceeding, and ending the paper in economists’ favourite way…policy prescriptions! Yes, despite the fact that this model is not only unrealistic but quite clearly unfit for purpose on its own terms, and despite the fact that it has yielded no falsifiable predictions, the authors go on to give policy advice about redistribution, monetary and fiscal policy. Considering this paper is incomprehensible to most of the public, one is forced to wonder to whom this policy advice is accountable. Note that I am not implying policymakers are puppets on the strings of macroeconomists, but things like this definitely contribute to debate – after all, secular stagnation was referenced by the Chancellor in the UK parliament (though admittedly he did reject it). Furthermore, when economists with a platform like Paul Krugman endorse the model, it’s hard to argue that it couldn’t have at least some degree of influence on policymakers.

Now, I don’t want to make general comments solely on the basis of this paper: after all, the authors themselves admit it is only a starting point. However, some of the problems I’ve highlighted here are not uncommon in macro: a small number of agents on whom some rather arbitrary assumptions are imposed to create loosely realistic mechanics, and an unexplained ‘shock’ used to create a crisis. This is true of the earlier, similar paper by Eggertsson & Krugman, which tries to model debt-deflation using two types of agents: ‘patient’ agents, who save, and ‘impatient’ agents, who borrow. Once more, deleveraging begins when the exogenously imposed constraint on the impatient agents’ borrowing falls For Some Reason, and differences in the agents’ respective consumption levels reduce aggregate demand as the debt is paid back. Again, there are no banks, no investment and no real financial sector. Similarly, even the far more sophisticated model by Markus K. Brunnermeier & Yuliy Sannikov – which actually includes investment and a financial sector – still has only two agents, and relies on exogenous shocks to drive the economy away from its steady state.

Whither macroeconomics?

Why do so many models share these characteristics? Well, thanks in large part to the Lucas Critique, macroeconomic models must be built up from optimising agents. Since modelling human behaviour is inconceivably complex, mathematical tractability forces economists to make important parameters exogenous, and to limit the number (or number of types) of agents in the model, as well as these agents’ goals & motivations. Complicated utility functions which allow for fairly common properties, like relative status effects or different levels of risk aversion at different incomes, may be possible to explore in isolation, but they cannot be generalised to every case without the models becoming impossible to solve, or indeterminate. The result is that a model which tries to explore something like secular stagnation can end up being highly stylised, to the point of missing the most important mechanics altogether. It will also be unable to incorporate other well-known developments from elsewhere in the field.

This is why I’d prefer something like Stock-Flow Consistent (SFC) models, which focus on accounting relations and flows of funds, to be the norm in macroeconomics. As economists know all too well, all models abstract from some things, and when we are talking about big, systemic problems, it’s not particularly important whether Maria’s level of consumption satisfies a utility function. What’s important is how money and resources move around: where they come from, and how they are split – on aggregate – between investment, consumption, financial speculation and so forth. This type of methodology can help us understand how the financial sector might create bubbles; or why deficits grow and shrink; or how government expenditure impacts investment. What’s more, it can help us understand all of these aspects of the economy at the same time. We would not have an overwhelming number of models, each highlighting one particular mechanic, with no ex ante way of selecting between them, but one or a small number of generalisable models which can account for a large number of important phenomena.
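To make this concrete, here is a minimal stock-flow consistent simulation in the spirit of Godley & Lavoie’s simplest ‘SIM’ model. The parameter values are my own illustrative choices, not any published calibration. Government spending injects money, taxes drain it, and consumption depends on disposable income and accumulated wealth; the accounting identity that the government deficit equals household saving holds in every single period by construction:

```python
# Illustrative parameters (my own, in the spirit of Godley & Lavoie's SIM)
alpha1, alpha2 = 0.6, 0.4  # propensity to consume out of income, out of wealth
theta = 0.2                # flat tax rate
G = 20.0                   # government spending each period
H = 0.0                    # household money holdings (= government debt)

for t in range(500):
    # National income solves Y = C + G with C = alpha1*(1-theta)*Y + alpha2*H
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    T = theta * Y                  # taxes
    YD = Y - T                     # disposable income
    C = alpha1 * YD + alpha2 * H   # consumption
    H_next = H + YD - C            # household saving accumulates as money
    # Stock-flow consistency: government deficit = household saving, each period
    assert abs((H_next - H) - (G - T)) < 1e-9
    H = H_next

print(round(Y, 2))  # income converges to the steady state G/theta = 100.0
```

The steady state falls straight out of the flows: income settles where the tax drain equals the spending injection, with no optimisation anywhere in sight.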

Finally, to return to the opening paragraph, this paper may help to illustrate a lesson for both economists and their critics. The problem is not that economists are not aware of or never try to model issue x, y or z. Instead, it’s that when they do consider x, y or z, they do so in an inappropriate way, shoehorning problems into a reductionist, marginalist framework, and likely making some of the most important working parts exogenous. For example, while critics might charge that economists ignore mark-up pricing, the real problem is that when economists do include mark-up pricing, the mark-up is over marginal rather than average cost, which is not what firms actually do. While critics might charge that economists pay insufficient attention to institutions, a more accurate critique is that when economists include institutions, they are generally considered as exogenous costs or constraints, without any two-way interaction between agents and institutions. While it’s unfair to say economists have not done work that relaxes rational expectations, the way they do so still leaves agents pretty damn rational by most people’s standards. And so on.

However, the specific examples are not important. It seems increasingly clear that economists’ methodology, while it is at least superficially capable of including everything from behavioural economics to culture to finance, severely limits their ability to engage with certain types of questions. If you want to understand the impact of a small labour market reform, or how auctions work, or design a new market, existing economic theory (and econometrics) is the place to go. On the other hand, if you want to understand development, historical analysis has a lot more to offer than abstract theory. If you want to understand how firms work, you’re better off with survey evidence and case studies (in fairness, economists themselves have been moving some way in this direction with Industrial Organisation, although if you ask me oligopoly theory has many of the same problems as macro) than marginalism. And if you want to understand macroeconomics and finance, you have to abandon the obsession with individual agents and zoom out to look at the bigger picture. Otherwise you’ll just end up with an extremely narrow model that proves little except its own existence.

 



Whig Theories of the History of Thought

Yesterday, Paul Krugman had a post in which he pontificated on the Stock Flow Consistent (SFC) models of the economy used by Wynne Godley (and generally by post-Keynesians). These models focus on simple flows of funds between sectors, use endogenous money mechanics and are consistent with accounting identities, making them highly suitable for modelling financial crises. Krugman, however, is not impressed. He seems to see these simple mechanics – with relationships between sectors determined by coefficients – as not quite ‘rigorous’ in the same way as the type of optimising, equilibrium model he favours. He believes that “Hydraulic” approaches like Godley’s have been supplanted by superior models, and so it would be a step backward to take another look at them.

It is worth noting that Krugman’s dismissal of these “hydraulic” models is strange given his zealous promotion of IS/LM, which is surely a hydraulic model if ever there was one: variables are simply determined by coefficients and left largely unexplained in terms of typical ‘optimising’ behaviour. More importantly, though, I think Krugman has quite a mistaken conception of how economic theory has developed. He tries to paint a picture of economics as continually discovering new and interesting insights, but the reality is far more complex. In fact, many of the ‘insights’ mainstream economics claims to have discovered were already known; what’s more, its solutions to the purported problems with “hydraulic” models leave a lot to be desired. Often the only problems with a theory resulted from misinterpretations of particular thinkers; properly interpreted, the models offered a credible alternative to the neoclassical approach. Overall, the history of economic thought is not really a clear-cut story of scientific progression.

Yet Krugman sees things through a ‘Whig’ conception of the history of thought, where everything has progressed over time and culminated naturally in what we have now. He thinks that macroeconomics in the 1950s and 60s was similar to Godley’s work, but that it was abandoned for good reasons:

What you might not realize from this passage is that Godley’s notion that we should represent behavior by rules of thumb isn’t something new — it’s something old, which got driven out of macroeconomics. The “hydraulic Keynesianism” of the 1950s was all about viewing the economy as a kind of mechanism in which consumer behavior could be represented by an ad hoc consumption function, investment behavior by an ad hoc investment function, and so on. This produced a more or less mechanistic view of the economy, and AW Phillips famously represented hydraulic macro with a literal hydraulic mechanism.

I’m glad Krugman knows about Phillips’ wonderful MONIAC model. However, economists have badly misinterpreted Phillips, as is clear from Krugman’s discussion of his work. His ‘curve‘ – which Krugman goes on to reference – was supposed to be a dynamic model of how unemployment and inflation change over the business cycle, not a static trade-off between the two. Furthermore, Phillips’ models also included expectations, one of the supposed strengths of the models that displaced his. What’s more, Phillips was well aware of the problem of how the economy may evolve and change over time or with policy – as he put it:

In my view it cannot be too strongly stated that in attempting to control economic fluctuations we do not have two separate problems of estimating the system and controlling it, we have a single problem of jointly controlling and learning about the system, that is, a problem of learning control or adaptive control.

Krugman doesn’t reference this point – known by the mainstream as the Lucas Critique – explicitly, but this is the major reason “hydraulic” models were abandoned in favour of the ‘microfounded’ models Krugman endorses: it was thought that the latter would not be as susceptible to change with policy. However, this insight had been stated by many long before Lucas – not only Phillips above, but also Keynes – and it is a continual problem, so we cannot ‘immunise’ ourselves from it with microfoundations or anything else. The heterodox economists Krugman dismisses so blithely were actually more alert to the problem than he was, because they didn’t think they had – or could – supersede it.

Krugman now discusses why his favoured ‘optimising’ approach took centre stage in the 1970s and 80s:

So why did hydraulic macro get driven out? Partly because economists like to think of agents as maximizers — it’s at the core of what we’re supposed to know — so that other things equal, an analysis in terms of rational behavior always trumps rules of thumb.

I’m not sure this even counts as a defence of economists’ approach, because it is hopelessly question-begging. Economists shouldn’t model a certain way “because [they] like to”; they should do it because it is more consistent with observed phenomena. Rational behaviour doesn’t really “trump” anything, as it is largely unobserved in the real world, whether on the part of firms, consumers, governments or what have you. All this point actually demonstrates is how economists can be predisposed toward a certain framework, regardless of predictive success or failure.

However, Krugman thinks that the neoclassical approach has been largely a predictive success when compared with its “hydraulic” counterpart, and puts forward two major points to show this:

First involved consumption spending. Conventional Keynesian consumption functions suggested that the savings rate would rise as incomes rose — and this wasn’t just the Keynesian interpreters, Keynes himself made the same claim….In fact, however, savings rates don’t seem to follow the naive consumption function at all; they rise in booms, and are higher for the wealthy, but exhibit no secular trend. And Milton Friedman appeared to explain this paradox by arguing that people are more or less rational: they base consumption on “permanent income”, a reasonable estimate of long-run income, and save temporary fluctuations in income.

First, as Ramanan quickly noted, Wynne Godley’s model did not suggest the savings rate would rise as incomes rose; in fact, it was the opposite. So his model was perfectly consistent with observed behaviour, and there was no need to invoke optimising agents to ‘explain’ anything.

Second, if we were looking for a model of consumption based on human behaviour, Friedman’s permanent income hypothesis – where people spend money as a function of their lifetime income, rather than their current income – was not the first or best theory for the job. In fact, it actually displaced a better theory: the relative income hypothesis. This suggested that people’s expenditure was a function of what people around them were spending, or the norms in a society. Poor people’s consumption would be a higher percentage of their income because they were trying to “keep up with the Joneses”; however, as incomes as a whole rose, so would the socially acceptable level of expenditure, so savings rates would not rise with total income. This kind of airy-fairy explanation is like nails on a blackboard for many mainstream economists, but it is more consistent with both the statistical evidence and observed human behaviour.
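The difference between the two stories can be shown with a toy calculation. The functional forms and numbers below are my own illustrative assumptions, not Duesenberry’s or Friedman’s actual specifications: under a naive Keynesian consumption function the saving rate rises with income everywhere, while under a relative-income rule the rich save more than the poor in cross-section, yet the rate stays flat when everyone’s income (and hence the social norm) grows together:

```python
def keynesian_saving_rate(y, autonomous=10.0, mpc=0.7):
    # C = autonomous + mpc*y, so s = 1 - C/y rises as y grows
    return 1 - (autonomous + mpc * y) / y

def relative_saving_rate(y, y_norm, base=0.75, emulation=0.15):
    # Hypothetical Duesenberry-style rule: consumption depends on
    # income relative to the social norm y_norm ("the Joneses")
    return 1 - (base + emulation * (y_norm / y))

# Cross-section: richer households save more under BOTH rules
assert keynesian_saving_rate(200) > keynesian_saving_rate(50)
assert relative_saving_rate(200, 100) > relative_saving_rate(50, 100)

# Secular growth: double everyone's income, and the norm with it
print(keynesian_saving_rate(100), keynesian_saving_rate(200))          # rises
print(relative_saving_rate(100, 100), relative_saving_rate(200, 200))  # unchanged
```

This is exactly the statistical pattern described above: saving rates rise with income in cross-section, but show no secular trend as a whole economy gets richer.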

Krugman’s second point in favour of neoclassical economics is the role of the 1970s stagflation as a vindication of the ‘rational agent’ approach:

In came Friedman and Phelps to argue that rational price-setters would build expected inflation into their choices, so that sustained low unemployment would produce accelerating inflation. And the stagflation of the 70s seemed to vindicate their argument.

Friedman and Phelps’ model may “seem” consistent with stagflation, but there are alternative explanations for stagflation too, and the question is which one is most closely consistent with the mechanics of the phenomenon. In fact, the major prediction of Friedman and Phelps’ NAIRU theory – that below a certain level of unemployment, inflation will begin to accelerate – actually has little evidence behind it. On the other hand, Andrew Lainton has pointed out that in his magnum opus, Monetary Economics, Wynne Godley developed a full model of stagflation that dealt with the 1970s quite competently. On top of this, Godley’s model, as well as similar ones like Steve Keen’s, is also well equipped to explain the recent financial crisis. As Noah Smith once argued, the best theories are those which can explain all phenomena within their domain, and models like Godley’s simply fit this description more closely. But Krugman doesn’t realise this, because he knows very little about the work of Godley and people like him – after all, why bother when they’ve been relegated to the dustbin of progress?

The only problems with economists’ 1950s view of inflation and unemployment stemmed from the ‘bastard Keynesianism’ of the 1950s and 60s, which resulted from misinterpreting Keynes and Phillips and trying to shoehorn them into the neoclassical approach. However, even if we accept ‘bastard Keynesianism’, Krugman’s point is questionable, as the prominent post-war economists Robert Solow and Paul Samuelson were also aware of the potential for the relationship between inflation and unemployment to become unstable. Therefore, claiming that Friedman and Phelps swept in with new, largely successful insights is a stretch, to say the least.

Essentially, Krugman believes in a Whiggish conception of the history of thought, where good ideas have driven out the bad, and economics has slowly made better and better predictions. But like all Whig history, Krugman’s opinion rests on arbitrarily placing current theory as the inevitable goal of complex and fractured processes, ignorance of things that don’t seem immediately relevant to these theories, and above all, a good old bit of self-aggrandisation. While I don’t want to tar too many with this brush, judging from the way I was taught, and what I’ve seen elsewhere, it seems that a large part of the discipline thinks this way. But the development of economics has been far more complex, and alternative theories are far more credible, than such a narrative would have you believe.



Debunking Economics, Part XVIII: Response to Criticisms (2/2)

This is the second part of my response to criticisms of Keen’s Debunking Economics. In my previous post* I covered some of the fundamental objections Keen had to neoclassical theory. Here, I will cover Keen’s exploration of alternatives: first, a brief note on dynamics and chaos theory; then a discussion of Keen’s own models; finally, his dismissal of the Marxist Labour Theory of Value (LTV).

Dynamics and Equilibrium

Many economists have argued that Keen’s contention that economists do not study dynamics is false. I agree. Keen does not really address the DSGE conception of equilibrium, which is very different from the typical conception of a steady state. An equilibrium in an economic model occurs when all agents have specific preferences, endowments and so on, and take the course of action which suits them best given these. This can be subject to incomplete information, risk aversion or various other ‘frictions.’ These agents intermittently interact in market exchanges, during which all markets clear. Basically, ‘solving for equilibrium’ means you specify the actions and characteristics of economic agents, then see what happens when markets clear. It’s entirely possible that the resulting model could exhibit chaotic behaviour.**

Now, there are obviously many problems here. The fact is that the overwhelming majority of people who learn economics will never touch this. They will instead be faced with static-style equilibrium models, which they have been told are unrealistic but ‘elucidate certain principles.’ This is nonsense – they elucidate nothing, and simply need to be thrown out. Nevertheless, many policymakers, regulators and business economists are working under this framework. Furthermore, even those economists who have gone beyond this level seem to have the concepts deeply ingrained in their minds, and regard them as useful.

However, even the more advanced ‘dynamic’ equilibrium clearly has problems. First, the presence of irreducible uncertainty – which, as far as I can see, is a concept entirely misused by economists – means that it is virtually certain not all expectations will be fulfilled, while equilibrium assumes they will be. Second, ‘fulfilled expectations’ is far stronger than economists seem to think – for example, it eliminates the possibility of default! Third, the assumption that all markets clear is obviously false, otherwise supermarkets wouldn’t throw out old food. Anyway, I digress: Keen could easily address all of these criticisms, but for some reason he doesn’t. This is indeed a shortcoming of his book.

Keen’s Models

First, a brief note on Keen’s model of firm behaviour: it seems to make the error of maximising the growth rate of profits, rather than profits themselves. I am not sure if this has been fixed. Nevertheless, I regard it as subsidiary to Keen’s main criticisms. His most important model is the Minsky Model of banking and the macroeconomy.

Keen recently had a debate over his Minsky Model with the Cambridge economist Pontus Rendahl. Andrew Lainton has a post on this, along with a contentious discussion with Rendahl, over on his blog. In my opinion, Rendahl – though overly dismissive in tone, and not causing as many problems for Keen as he seemed to think – highlighted a number of issues with Keen’s model in its current form:

(1) Say’s Law holds. In Keen’s model, income is simply a function of the capital stock, and there is no role for demand.

(2) The model is generally set in continuous time and uses ODEs, yet one equation is specified over discrete time intervals. Such equations cannot be solved in the same way, so Keen’s methodology is inconsistent.

(3) There is, as of yet, no role for expectations in Keen’s model.

(4) Rendahl argues that DSGE models are also Stock Flow Consistent (SFC). I think he is correct – see, for example, his own paper, which has agents accumulating stocks of money from previous periods. The major differences between SFC and DSGE appear to be: a lack of micro foundations; continuous functions; use of classes; market clearing; fulfilled expectations; and, of course, with Keen’s, the role of banks and private debt.

In terms of assumptions, I’d say Keen’s model is at the ‘heuristic’ stage – it’s not completely right and needs development. The criticisms are essentially things that have not yet been added to the model, rather than conceptual or logical problems (save the inconsistent equation). This means they can be addressed as it develops. However, if the model makes good predictions, it may prove to be useful, even though that should never serve as a barrier to making it more realistic and comprehensive.
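For readers unfamiliar with this style of modelling: Keen’s Minsky model builds on Goodwin’s growth-cycle model by adding debt and banking. The sketch below simulates only that Goodwin core, not Keen’s full model, with parameter values I have chosen purely for illustration. The point is that cycles in the wage share and employment rate arise endogenously from the ODEs, with no exogenous shock needed:

```python
# Goodwin growth cycle: wage share (omega) and employment rate (lam)
# interact like predator and prey; cycles are endogenous, no shocks needed.
alpha, beta = 0.02, 0.01   # productivity and labour force growth (assumed)
nu = 3.0                   # capital-output ratio (assumed)
gamma, rho = 0.5, 0.6      # linear Phillips curve: wage growth = -gamma + rho*lam

omega, lam = 0.88, 0.85    # initial wage share and employment rate
dt, steps = 0.01, 10_000   # simple Euler integration over 100 time units

path = []
for _ in range(steps):
    d_omega = omega * (-gamma + rho * lam - alpha)   # wage share dynamics
    d_lam = lam * ((1 - omega) / nu - alpha - beta)  # employment dynamics
    omega += d_omega * dt
    lam += d_lam * dt
    path.append(lam)

lam_star = (alpha + gamma) / rho  # equilibrium employment rate
crossings = sum(
    1 for a, b in zip(path, path[1:])
    if (a - lam_star) * (b - lam_star) < 0
)
print(crossings)  # several crossings: the economy cycles endogenously
```

A serious treatment would use a better integrator (e.g. Runge-Kutta) and add the debt dynamics that turn this into a Minsky model; the sketch only shows that fluctuations can come from inside the model rather than from an imposed ‘shock’.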

Labour Theory of Value

If neoclassical economists want a lesson in how to respond to a critique you strongly disagree with without being vitriolic and dismissive, they need look no further than the Marxist responses to Keen’s critique of the LTV. This is all the more ironic given Keen’s willingness to dismiss Marxists as illogical and dogmatic.

Keen’s critique is threefold, so I will discuss each part briefly, followed by the Marxist responses.

The first critique is Bose’s commodity residue. The idea is that no matter how far you go back in time, disaggregating a commodity into what was required to produce it, there will always be a commodity residue left over; hence, no commodity can be reduced to mere labour-power. The problem here is the projection of capitalism into all of history. For Marx, a commodity only resulted from capitalist production. However, if you go back in time you will find non-capitalist production, and eventually you will be able to reduce everything to land/natural resources and labour, which Marx never defined as commodities. Having said this, one question remains: can the natural resources or land not be a source of surplus value? Could this surplus value not have been transferred into capitalist commodities?

Second is Ian Steedman’s Sraffian interpretation of Marx. Simply put, it seems Steedman had his interpretation wrong – Marx’s is not a physical, equilibrium system based on determining factor prices. This is something that actually struck me on my first read of Keen’s LTV chapter: Steedman simply converts Marx into Sraffian form without much justification. If Marx did not intend this, the criticism is defunct from the outset: Steedman’s model is simply a misinterpretation of Marx, and it is not even necessary to go into the maths. There is, of course, a possibility that this is an overly superficial interpretation and I am mistaken.

The third criticism is that Marx’s treatment of use-value and exchange-value is inconsistent: properly applied, it implies that a commodity’s use-value can exceed its exchange value, and hence be a source of surplus value. Now, I remain unsure of this area so I might be wrong in my exposition, but here is my attempt to explain the Marxist response: (warning: the following paragraph will contain a vast overuse of the word ‘value’ in what is already a necessarily convoluted explanation).

Marxists contend that Keen’s is a misinterpretation of use-value, which is simply a binary concept and not quantifiable. Something may have any number of uses which give it a use-value, which is a necessary condition for it to have an exchange-value. However, the exchange-value cannot ‘exceed’ the use-value, because the use-value cannot be measured. It is in this sense that labour is unique in Marx’s conception of capitalism: its specific use-value is the production of surplus for capitalists. It is the only ‘factor of production’ that can do this – after all, capital ultimately reduces to past labour value. If production could take place without labour, prices would fall to zero and, while Marx would be refuted, nobody would care because the problem of economic scarcity would vanish. Hence, surplus production and profits depend on labour producing more than it is rewarded.

I remain convinced neither by the LTV, nor by its critics.*** For me, most discussion of the LTV appears to rest on the LTV as a premise. The debate is split into people who accept the LTV and people who not only reject it, but see no need for it. For this reason, critics seem to misrepresent and misinterpret it continually – a common theme is to try to abstract from historical circumstance, when it’s clear Marx emphasised that his analysis only applied under capitalism, which he saw as a particular social relation. For me, the main issue remains the same as it is for other theories: what is the falsification criterion for the LTV?

Overall, a couple of points stand out for post-Keynesians for their own theories, both of value and economic systems. The first is that DSGE models are probably not that different to some heterodox models, and identifying the actual differences is crucial to opening up a dialogue between mainstream and heterodox economists.

The second is that I would caution left-leaning economists not to be too hasty to dismiss Marxism as dogmatic (in my experience Marxists are anything but), or to avoid it simply out of fear of being dismissed themselves. In my opinion, the LTV – while not entirely convincing – is a cut above the neoclassical ‘utility’ conception of value, and I’d sooner be equipped with Marxist explanations of a crisis when trying to understand capitalism. This isn’t to say post-Keynesians haven’t thought about Marx; more that the issue is often approached with a degree of bias. At the very least, the distinction between use-value and exchange-value is something that befits post-Keynesian analysis well.

So, as far as theory goes, this is the last post on Keen’s book. I will, however, do some closing notes from a more general perspective. As I said before, if there are any other criticisms of Keen that I have not covered, feel free to discuss them in the comments.

*It is worth noting that in my previous post I was somewhat – though not totally – off the mark in my discussion of Keen on demand curves. The Gorman conditions for the existence of a representative agent do indeed have many similarities to the SMD theorem, and conceptually they are dealing with the same issue: the aggregation of preferences. Nevertheless, Keen weaves between the two, when it would have been more accurate to note that economists have used two (main) different methods to get around the problem, and to critique them separately. Similarly, though Keen’s quote from MWG was incorrect, it is true that economists such as Samuelson have used the assumption of a dictator to aggregate preferences. However, the specific quote Keen presented was not right.

**However, that does not make it the same as chaos theory.

***For me, claims that worker ownership of production would be desirable don’t really rest on the LTV; instead, the simple point is that workers could employ capital themselves.


