DSGE, Epicycles and Neoclassical Methodology

Daniel Kuehn asks me to define the difference between incorporating something into a model and ‘adding epicycles’, as Keen and others so often put it.

As it happens, an essay by Christian Arnsperger & Yanis Varoufakis may provide us with the answer. In this essay, Arnsperger and Varoufakis attempt to define neoclassical methodology, hoping to nullify its lizard-like ability to shed parts of itself in order to evade criticism. Personally, I think they hit the nail on the head.

They provide three axioms which define neoclassical methodology:

(1) Methodological individualism – the economy is modeled on the basis of the behaviour of individual agents.

(2) Methodological instrumentalism – individuals act in accordance with certain preference rankings, to attain some end goal that they deem desirable.

(3) Methodological equilibration – given the above two, macroeconomics asks what will happen if we assume equilibrium. Note that this doesn’t necessarily posit that the system will end up in equilibrium (although that is often the case), but rather seeks to find out what will happen if we use equilibrium as an epistemological starting point.

I will not criticise the axioms here, but suffice it to say that this gets to the crux of what the arguments have been about. This methodological core underlies everything from supply and demand to game theory to DSGE.

Much like the assumption of circular orbits, the methodological core of neoclassicism is protected at all times as the theory develops. Most neoclassical economists don’t think twice about the axioms, and this helps them deny that they are, in fact, ‘neoclassical’, seeing the label only as a buzzword used by their enemies.

In fact, neoclassical economics has a habit of preserving not only these three axioms, but also many other assumptions it introduces. For example, take the case of Krugman and Eggertsson versus Keen. Keen models banks as explicit agents and creators of purchasing power, whilst Krugman and Eggertsson preserve the ‘banks as intermediaries between savers and borrowers’ line, abstracting them out of the economy and bolting an ad hoc role for private debt onto the model.

You can also see these axioms in criticisms of Keen’s models. Krugman says that there is ‘a lot of implicit theorising’ going on in Keen’s paper. Perhaps this is true, and maybe Keen needs to clarify his epistemology, but what Krugman really means – unknowingly, perhaps – is that Keen doesn’t start from the three axioms: he isn’t looking at individual behaviour, but at the flow of money between agents; nobody is acting to attain certain preferences; equilibrium is not used as a starting point. From my experience, I strongly suspect that most mainstream economists feel a similar skepticism when reading Keen’s paper.

I believe that in order for the debate to move forward, these three axioms – and the others protected by the ad hoc style of DSGE – must be focused on and criticised. Otherwise critics will never land a convincing blow, and will forever be accused of straw-manning.

* As a note, Austrians: this is why I link you with neoclassicism. The first two axioms certainly define all of Austrian economics, and, at least in the case of Hayek, you also use equilibrium as an epistemological starting point.


  1. #1 by Oliver on April 6, 2012 - 5:28 pm

    The specific fallacy is Smith’s claim that the pursuit of self-interest, which has to be balanced against regard for others in other human interactions, can be trusted to lead to good outcomes both for oneself and others in the context of competitive market interactions.

    From an interview with Duncan Foley about his book Adam’s Fallacy

    The interview:

    http://radicalnotes.com/content/view/33/30/

    Another good book that takes on marginalism with its own logic is The Persistence of Poverty: Why the Economics of the Well-Off Can’t Help the Poor by Charles Karelis.

    review: http://rortybomb.wordpress.com/2009/12/03/persistence-of-poverty-and-increasing-marginal-utility/

    • #2 by Unlearningecon on April 6, 2012 - 7:25 pm

      I have read Adam’s Fallacy and consider it a great book with a misleading title. I referenced it in my first post. Gavin Kennedy appears to be the best-informed Adam Smith scholar on the web.

      Going by the review, Karelis’ book looks incredibly interesting. Thanks for the HT.

      • #3 by dkuehn on April 6, 2012 - 8:03 pm

        Gavin Kennedy is very good on Smith.

  2. #4 by dkuehn on April 6, 2012 - 8:03 pm

    OK, now I’m even more confused. Sure we say banks are intermediaries (I don’t understand what’s so objectionable about that), but we also all (i.e. – neoclassicals, non-neoclassicals) think that banks create purchasing power, right?

    You base your argument on zingers like this, but as far as I can tell we all think that.

    I think of neoclassical economics as also thinking in terms of optimizing behavior. That might not always be perfectly accurate, but I think it’s a decent assumption. We work off of social conventions too, but this is all well understood by neoclassicals and easily incorporated into optimizing behavior. If calculation and information gathering is costly, for example, then optimizing over the full range of relevant costs (i.e. – including calculation and information costs) would lead people towards following social conventions.

    So even given some of the “first glance” weaknesses of optimization assumptions, I can’t think of any problems with optimization which can’t themselves be understood as the result of other optimizing behavior. If you can think of one, let me know.
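    The information-cost argument above can be sketched as a toy optimisation (all numbers here are illustrative assumptions, not anything from the thread): once each extra round of search costs more than its diminishing expected improvement, zero search – i.e. following convention – is itself the optimum.

    ```python
    # Toy model of optimising over information costs. The payoff schedule
    # (diminishing improvement of 8 * 0.5**k per extra search) is made up
    # purely for illustration.

    def optimal_searches(search_cost: float) -> int:
        """Number of extra options worth evaluating before choosing."""
        def net_gain(n: int) -> float:
            improvement = sum(8.0 * 0.5 ** k for k in range(n))
            return improvement - search_cost * n
        return max(range(10), key=net_gain)

    print(optimal_searches(5.0))   # cheap search: a little searching pays (1)
    print(optimal_searches(10.0))  # costly search: 0, i.e. follow convention
    ```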

    • #5 by Unlearningecon on April 6, 2012 - 10:09 pm

      I just don’t get it. You can’t simultaneously say banks are mere intermediaries who take money from savers and give it to borrowers, and are also able to create loans out of nothing. You oscillate between the two: one minute EM (endogenous money) proponents are wrong, the next minute they are right but it doesn’t matter.

      The methodological instrumentalism approach is not incompatible with imperfect information, wrong decisions and environmental influences. But it does ignore means, focusing only on ends. It also tends to assume fixed, coherently ordered preferences.

      As for your optimisation, you appear to be working from the assumption that everything is optimising in some way, then redefining things to fit that assumption.

      Out of curiosity, how do you square optimisation with preference reversal and peak-end evaluation?

      • #6 by Blue Aurora on April 7, 2012 - 9:25 am

        I believe I can answer half of your last question, Unlearningecon (with regard to preference reversal, that is).

        Dr. Michael Emmett Brady, who is a decision theorist, has written papers that argue that Subjective Expected Utility is a special case.

        http://www.tandfonline.com/doi/abs/10.1080/02698599408573487

        http://bjps.oxfordjournals.org/content/44/2/357.full.pdf

        http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1920578

        http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1406842

      • #7 by dkuehn on April 7, 2012 - 11:31 am

        When has anybody said they are “mere” intermediaries? When have I said EM proponents are wrong about purchasing power creation? When have I said their being right doesn’t matter – purchasing power creation by the banking system seems to be a crucial aspect of our monetary system.

        Look – you believe banks intermediate between savers and borrowers, right? I’m not sure how one could not think that. Borrowers go to banks. Savers go to banks. The comparative rates at which borrowers and savers go to banks are pretty important for bank profitability. These things seem like obvious facts, so I don’t see what the problem is.

        “But it does ignore means, focusing only on ends.”

        I don’t even know what this is supposed to mean.

        “It also tends to assume fixed, coherently ordered preferences.”

        Well, it doesn’t assume fixed preferences at all. When preferences change, the economic environment changes. There are some assumptions about basic ordering of preferences when we assume transitivity – is there a reason why we should consider this damning? Even if there are flights of fancy that violate transitivity, certainly they don’t seem to be all that common in real life. Can you think of any obvious real life cases that would have consequences for the analysis?

        “As for your optimisation, you appear to be working from the assumption that everything is optimising in some way, then redefining things to fit that assumption”

        I’m not really redefining everything, I’m just saying that optimization should be consistently applied. If information processing and gathering is costly, it’s clearly something you’d want to optimize over. Anyone who doesn’t apply optimization in that case is being inconsistent. And look, I just bought a house – I can tell you that calculation and information gathering is extremely costly, and to deal with that it’s definitely optimizing to rely on convention. But maybe you can tell me exactly what I’ve redefined; I can’t think of anything.

        I think Kahneman has the definitive word on that and I’ve never seen reason to disagree with it. It’s an important convention for assessing the costs of an event quickly and easily. It also may be indicative of the fact that disutility from bads grows exponentially – that would explain why we care about peaks. Our brains evolved, so cognitive processes ought to be optimizing something at some point, because that’s what natural selection does. I think it’s a mistake to think that optimization in the here and now justifies every cognitive wrinkle. If we’re thinking about hyperbolic discounting, for example, it probably makes a lot more sense to recognize that it was better for the optimizing behavior of our species at a time when we just tried to survive from day to day, and just accept it as part of how we are in modern times.
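        On the hyperbolic discounting point, the characteristic preference reversal can be checked in a few lines (the curve D(t) = 1 / (1 + kt) is the standard hyperbolic form; the reward sizes and k are illustrative assumptions):

        ```python
        # Preference reversal under hyperbolic discounting, D(t) = 1 / (1 + k*t).
        # Rewards and k are illustrative, not taken from the thread.
        K = 1.0

        def value(reward: float, delay: float) -> float:
            """Present value of `reward` received after `delay` periods."""
            return reward / (1.0 + K * delay)

        # Far in advance, the larger-later reward wins...
        assert value(110, 31) > value(100, 30)
        # ...but once the smaller reward is imminent, preference reverses.
        assert value(100, 0) > value(110, 1)
        ```

        Under exponential discounting at a fixed rate the ranking of the two rewards never flips, which is why the reversal is taken as evidence against the simple optimizing story.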

      • #8 by Unlearningecon on April 7, 2012 - 1:22 pm

        When has anybody said they are “mere” intermediaries? When have I said EM proponents are wrong about purchasing power creation? When have I said their being right doesn’t matter – purchasing power creation by the banking system seems to be a crucial aspect of our monetary system.

        In DSGE models banks are generally abstracted out of the economy as they are presumed to be a neutral mechanism by which savings are allocated towards borrowers. This is what I mean by ‘mere’ – they can be ignored. And I’m not sure about what you’ve said explicitly but Krugman started by saying that banks couldn’t create purchasing power out of nothing, then went on to say they could but it doesn’t matter.

        I don’t even know what this is supposed to mean.

        Neoclassical agents act so as to maximise some end preference – the question of how they go about this is ignored. It’s a consequentialist-style view of human action.

        Well it doesn’t assume fixed at all. When preferences change the economic environment changes. There are some assumptions about basic ordering of preferences when we assume transitivity – is there a reason why we should consider this damning? Even if there are flights of fancy that violate transitivity, certainly they don’t seem to be all that common in real life. Can you think of any obvious real life cases that would have consequences for the analysis?

        I did say tends to, because of course there will always be some xyz paper that I am ignorant of where they attempt to introduce dynamic preferences into a neoclassical framework. Generally, preferences are exogenous and given. There are also well known problems with aggregating preferences (Arrow’s theorem, for example). This alone undermines the individualist instrumentalist approach when we are talking about more than one person.

        Transitivity becomes a lot trickier when you abandon instrumentalism and have to consider means: the time and effort it will take to satisfy a certain preference, for example. I actually went to Burger King today because it was closer, although I would have preferred the hot dog stand. I guess you could redefine that to say my preferences changed based on location, but that’s assuming a certain theory and then contorting it around reality to make it fit: it’s far more accurate to say that the end goal of preference satisfaction took a back seat to other constraints and considerations.
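        The aggregation problem mentioned above (Arrow’s theorem territory) can be seen in miniature in a Condorcet cycle: three perfectly transitive individual rankings whose majority ordering is intransitive. A minimal check (the option names and rankings are the standard textbook example):

        ```python
        # Three transitive individual rankings (best to worst) over options A, B, C.
        voters = [("A", "B", "C"), ("B", "C", "A"), ("C", "A", "B")]

        def majority_prefers(x: str, y: str) -> bool:
            """True if a majority of voters rank x above y."""
            wins = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
            return wins > len(voters) / 2

        # Pairwise majorities form a cycle: A beats B, B beats C, yet C beats A.
        print(majority_prefers("A", "B"), majority_prefers("B", "C"), majority_prefers("C", "A"))
        # → True True True
        ```

        So even if every individual satisfies the instrumentalist axioms, the ‘social preference’ built from them need not.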

  3. #9 by Tyler DiPietrantonio on April 15, 2012 - 3:10 am

    Modelling human behavior on optimization is foolish, because you’re implicitly assuming that the problem in question is efficiently computable. In particular, computing Walras equilibria is known to be NP-hard (part of this can be seen in the fact that they contain the knapsack problem as a subproblem, which is known to be NP-complete). This implies that optima are not in general efficiently computable unless P = NP, and if that’s not the case then actual outcomes can be arbitrarily far away from the optimum.

    So to say such a model is “not always perfectly accurate” is an understatement of Biblical proportions; it would be more accurate to say that such a model only approaches acceptable accuracy in very exceptional circumstances.
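    For a feel of the blow-up being referred to, here is a brute-force 0/1 knapsack – the NP-complete subproblem mentioned above – which inspects all 2**n subsets of items (the item values are the common textbook example, chosen for illustration):

    ```python
    # Exact 0/1 knapsack by exhaustive search: O(2**n) subset checks.
    # Clever algorithms improve on this in special cases, but no known
    # algorithm is polynomial in general unless P = NP.
    from itertools import combinations

    def knapsack(items, capacity):
        """items: list of (value, weight) pairs. Returns the best total value."""
        best = 0
        for r in range(len(items) + 1):
            for subset in combinations(items, r):
                if sum(w for _, w in subset) <= capacity:
                    best = max(best, sum(v for v, _ in subset))
        return best

    print(knapsack([(60, 10), (100, 20), (120, 30)], 50))  # → 220
    ```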

  1. Economists & ‘New Economic Thinking’ « Unlearning Economics