Debunking Economics, Part VI: Assumptions, Assumptions, Assumptions

I have discussed the use and abuse of assumptions in economics a few times, making some headway but often struggling to define exactly what constitutes a ‘good’ and ‘bad’ assumption.

Chapter 8 of Steve Keen’s Debunking Economics channels a paper (it’s short, and worth reading) by the philosopher Alan Musgrave, which distinguishes between three types of assumptions: negligibility, domain and heuristic.

According to Friedman’s 1953 essay, theories are significant when they “explain much by little,” and to this end “will be found to have assumptions that are wildly unrealistic…in general, the more significant the theory, the more unrealistic the assumptions.” By distinguishing between the different types of assumption Musgrave shows how Friedman misunderstands the scientific method, and that his argument is only partially true for one type: negligibility assumptions, which we will look at first.

Negligibility

Negligibility assumptions simply eliminate a specific aspect of a system – friction, for example – when it is not significant enough to have a discernible impact. Friedman is correct to argue that these assumptions should be judged by their empirical corroboration, but he is wrong to say that they are necessarily ‘unrealistic’ – if air resistance is negligible then it is in fact realistic to assume a vacuum. I don’t regard many economic assumptions as fitting into this category, though many of the things Friedman argues a theory would need to include to be ‘truly’ realistic, such as eye colour, do fit the bill.

Domain

If a theory is not corroborated by the evidence, it may be because the phenomenon under investigation does require that air resistance be taken into account. The previous theory then becomes a ‘domain’ theory, whose conclusions only apply as long as the assumption of a vacuum holds. Contrary to Friedman, the aim of ‘domain’ assumptions is to be realistic and wide-ranging, so that the theory may be useful in as many situations as possible. Many of the assumptions in economics are incredibly restrictive in this sense, such as assuming equilibrium, the neutrality of money or ergodicity.

Heuristic

A heuristic assumption is a counterfactual proposition about the nature of a system, used to investigate it in the hope of moving on to something better. Such assumptions can also be retained to guide students through the process of learning about the system. If a domain assumption is never true, it may transform into a heuristic assumption, as long as there is an eye to making the theory more realistic at a later stage. The way Piero Sraffa builds up his theory of production is a good demonstration of this approach: starting with a few firms, no profit and no labour, and ending up with multiple firms employing different types of capital and labour. In this sense many economic models are half-baked: they retain unrealistic assumptions about phenomena that are not ‘negligible’, even at a high level.

Musgrave colourfully describes the evolution of scientific assumptions:

what in youth was a bold and adventurous negligibility assumption, may be reduced in middle-age to a sedate domain assumption, and decline in old-age into a mere heuristic assumption.

Musgrave is partially wrong in this formulation, in my opinion – assumptions can start out as heuristics and become domain assumptions later on, such as the perfect gas or optimising bacteria. But there are always strict criteria for when the theory built on the assumption simply becomes useless, and there is always a view to discarding the heuristic when something better comes along. Economic theory tends to weave between the different types of assumptions without realising it or drawing attention to the shift.

Keen ironically notes that assumptions obviously matter to economists – they just have to be Lucas Approved™. The reaction of many neoclassical journals to papers such as his, which do not toe the party line on assumptions, demonstrates his point effectively. He also points out that, in fairness to neoclassical economists, the hard sciences are not necessarily the humble havens they are made out to be, and to this day physicists can be resistant to questioning accepted theories. However, economists do seem more vehement in the face of contradictory evidence than practitioners of any other discipline.

I see this as case closed on Friedman’s methodology. Economists need to learn to draw attention to exactly which type of assumption they are making in order for the science to progress, or else risk having no clear parameters for where a theory should be headed and under which conditions it can be considered valid.


  1. #1 by Roy McPhail on August 4, 2012 - 2:41 pm

    Makes sense. We need to routinely identify, analyze, and reaffirm (or change) our assumptions each time we encounter challenging information. And we need to be keen to seek out challenging information, not hide from it. Else we become mired in confirmation bias.

  2. #2 by Blue Aurora on August 5, 2012 - 10:05 am

    Does Steve Keen cite other philosophers besides Musgrave for his criticism of Friedman’s methodology, like say, Imre Lakatos? Any competent philosopher can point out that Friedman’s methodology is highly flawed.

    Regarding the physicists…have you finally reached the part where Steve Keen talks about the econophysics project? What do you make of econophysics in light of Keen’s discussion of it, Unlearningecon?

    • #3 by Unlearningecon on August 13, 2012 - 12:24 pm

      Yes, he mentions Lakatos and the distinction between degenerative and progressive research programmes.

      He has mentioned the econophysicists briefly in his maths chapter – the irony of them attacking economists for not using advanced enough maths, when economists tend to dismiss critics as ‘math shy.’ However, I have not yet reached the section where he discusses alternatives.

  3. #4 by Isaac "Izzy" Marmolejo on August 7, 2012 - 7:45 pm

    What really satisfies the Lucas critique? For example, we are told that rational expectations is put into macro models because of what the Lucas critique implies (RBC and DSGE models are examples), but we both know that rational expectations hardly makes these models any more useful when talking about the real world.

    Overall though, the Lucas critique is just a cheaper form of what Keynes, Shackle, etc. implied about uncertainty, because at least the latter group’s framework quite obviously treats a rational expectations assumption as absurd, and as something that could not be applied within their framework.

    • #5 by Blue Aurora on August 8, 2012 - 2:48 am

      Indeed Isaac, you are correct that the Lucas Critique is actually a footnote to Keynes’s criticisms of econometrics. Just read the correspondence between Keynes and the econometricians from the late 1930s to the 1940s. Keynes’s criticisms of non-homogeneity actually go back to his A Treatise on Probability.

      A good book discussing the history of econometrics would be Hugo A. Keuzenkamp’s Probability, Econometrics, and Truth, a book highly recommended by Dr. Michael Emmett Brady.

      Please see Dr. Michael Emmett Brady’s book review of that book.

      http://www.amazon.com/review/R3VENX8RPWEQXH/

      Also Isaac, did you get my e-mail containing the attachments of published papers by Dr. Michael Emmett Brady? If so, could you please respond to it?

    • #6 by Unlearningecon on August 13, 2012 - 12:27 pm

      Yes, this is the approach I take with economists. Nothing satisfies the Lucas Critique, and to be honest, considering neoclassical micro is based on preference-driven individuals – and preferences obviously change with political decisions – it is pretty vulnerable itself.
