In Praise of Econometrics

Economists often express incredulity toward people who target their criticisms at an amorphous entity called ‘economics’ (perhaps prefixed with ‘neoclassical’ or ‘mainstream’), instead of targeting specific areas of the discipline. They point out that, contrary to the popular view of economists as a group excessively concerned with theory, a majority of economics papers are empirical. Sometimes, even the discipline’s most vehement defenders are happy to disown the theoretical areas, such as macroeconomics, that attract the most criticism, whilst still insisting that, broadly speaking, economists are a scientifically minded bunch.

Perhaps surprisingly, I agree somewhat with this perspective. I think there is a disconnect within economics: between the core theories (neoclassical economics, or marginalism) and econometrics.* I believe the former to be logically, empirically and methodologically unsound. However, I believe the latter – though not without its problems – has all the hallmarks of a much better way to do ‘science’. There are several reasons to believe this:

First, econometrics has a far more careful approach to assumptions than marginalism. To start with, you are simply made more aware of the assumptions you use, whereas I find many are left implicit in marginalist theory. Furthermore, there is extensive discussion of each individual assumption’s impact, of what happens when each assumption is relaxed, and of what we can do about it. For example: if your time-series data are not weakly stationary (loosely speaking, this means the data oscillate around the same average, with the size of the oscillations also staying, on average, roughly the same), you simply cannot use Ordinary Least Squares (OLS) regression. There is no suggestion that, even though the assumption is false, we can use it as an approximation, or to highlight a key aspect of the problem, or other such hand-waving. The method is simply invalidated, and we must use another method, or different data. Such an approach is refreshing and completely at odds with marginalist theory, whose proponents insist on clinging to models – and even applying them broadly – despite a wealth of absurdly unrealistic assumptions.
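
To make the stationarity point concrete, here is a minimal sketch (my own, not from any textbook) of the kind of pre-flight check an econometrician might run before trusting OLS on a time series; it assumes Python with numpy and statsmodels, and both series are simulated.

```python
# A minimal sketch: checking (weak) stationarity before running OLS.
# Assumes numpy and statsmodels are installed; the data here are simulated.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)

stationary = rng.normal(size=500)              # oscillates around a fixed mean
random_walk = np.cumsum(rng.normal(size=500))  # non-stationary: the mean wanders

for name, series in [("stationary noise", stationary), ("random walk", random_walk)]:
    stat, pvalue, *_ = adfuller(series)        # augmented Dickey-Fuller test
    verdict = "looks stationary" if pvalue < 0.05 else "cannot reject a unit root"
    print(f"{name}: ADF p-value = {pvalue:.3f} -> {verdict}")

# If the series fails the test, the textbook advice is to difference it, move to a
# cointegration framework, or find different data -- not to run plain OLS anyway.
```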

Second, econometrics has dealt with criticisms far better and more fundamentally than its theoretical counterpart. The broadest and most pertinent criticism of econometrics was delivered by Edward Leamer in his classic paper ‘Let’s Take the Con Out of Econometrics’. Leamer highlighted the ‘identification problem’ inevitably faced by econometricians: since they try to isolate causal links, but can rarely run controlled experiments, they must pick and choose which variables to include in their models. Yet there are so many variables in the real world that we cannot discern, a priori, which ones are really the ‘key culprits’ in our purported causal chain, so inevitably this choice is something of a judgment call.

The result is that two different econometricians can use econometrics to paint two very different pictures, based on their choice of model. For example, David Hendry famously showed that the link between inflation and rainfall – whichever way it ran – was quite robust. Unfortunately, such absurdity can be much harder to detect in the murky waters of economic data, making purported causal links highly suspect. Leamer chastised his colleagues (and himself) for basing their choice of included variables and key assumptions on “whimsy”, leaving the results of inference highly sensitive to the biases of the author and to the direction in which they have (consciously or unconsciously) pointed the data. He pointed out that data on what exactly affects murder rates could give wildly disparate results based on a few key decisions made by the practitioner.

However, the discipline has, in my opinion, taken the challenge seriously. In 2010, Joshua Angrist & Jörn-Steffen Pischke responded to Leamer, summing up some key changes in the way econometricians use and interpret data. I’ll briefly highlight a few of them:

(1) An increase in the use of data from quasi-randomised trials, whether deliberately designed or arising from ‘natural experiments’. Econometricians have increased their use of the former where they can, but real experiments are hard to come by in the social sciences, so they are generally stuck with the latter. One way of exploiting a natural experiment is the ‘differences-in-differences’ approach, which uses natural boundaries such as nation states to estimate whether certain variables are key causal factors: if the murder rate follows roughly the same trend in both the US and Canada, then the trend is surely not attributable to a policy change in only one of them (a rough numerical sketch of this logic appears after this list). Such quasi-experiments tackle the problem even more fundamentally than Leamer imagined possible, by vastly improving the raw data.

(2) More common, careful use of methods intended to isolate causality, such as the use of Instrumental Variables (IV). The basic idea is this: if we have an independent variable x and a dependent variable y, correlation between them does not imply causation from x to y. So one way we can support the hypothesis of a causal link is by using another variable z, which influences x directly but does not influence y through any other channel. In other words, z should only affect y through its influence on x, so if we then find a correlation between z and y, this is consistent with the idea of a causal link (a bare-bones two-stage sketch of this appears after the list).

To borrow an example from Wikipedia, consider smoking and health outcomes. We may find a correlation between smoking rates and worse health outcomes, and intuitively suppose that the causation runs from smoking to health. But ultimately, intuition isn’t enough. So we could use tobacco taxes – which surely affect health outcomes only because they influence smoking rates – as an instrument, and see if they are correlated with worse health outcomes. If they are, then this supports our initial hypothesis; if not, it may be an issue of reverse causation, or some third cause which affects both smoking and health outcomes. IV and other methods like it are not foolproof, but they certainly bring us closer to the truth, which is surely what science is about.

(3) More transparency in, and discussion of, research designs, so that results can be verified and others can (try to) replicate them. It is worth noting that, though Reinhart and Rogoff’s 90% threshold was junk science, they were exposed relatively soon after their data were made available.
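
To illustrate point (1), here is a rough differences-in-differences sketch using entirely invented murder-rate averages (nothing here is real data): the control region’s trend is netted out, and whatever change is left over in the treated region is attributed to the policy.

```python
# Minimal differences-in-differences sketch with made-up average murder rates.
# 'treated' changes policy between the two periods; 'control' does not.
rates = {
    "treated": {"before": 5.0, "after": 4.0},   # hypothetical numbers
    "control": {"before": 5.5, "after": 5.1},
}

change_treated = rates["treated"]["after"] - rates["treated"]["before"]   # -1.0
change_control = rates["control"]["after"] - rates["control"]["before"]   # -0.4

# The common trend (-0.4) is netted out; what is left is attributed to the policy.
did_estimate = change_treated - change_control
print(f"DiD estimate of the policy effect: {did_estimate:+.2f}")
```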
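
And to illustrate point (2), a bare-bones two-stage least squares sketch on simulated data – my own toy example, not anything from the literature – in which an instrument z moves x but affects y only through x, so the second-stage regression recovers the causal coefficient that naive OLS gets wrong.

```python
# Bare-bones two-stage least squares (IV) on simulated data; the true effect
# of x on y is 2.0, but an unobserved confounder u biases naive OLS.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

z = rng.normal(size=n)                       # instrument (think: a tax change)
u = rng.normal(size=n)                       # unobserved confounder
x = 0.8 * z + u + rng.normal(size=n)         # 'smoking': moved by z and by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # outcome: true causal effect is 2.0

def slope(regressor, outcome):
    """OLS slope of outcome on regressor (with an intercept)."""
    X = np.column_stack([np.ones_like(regressor), regressor])
    return np.linalg.lstsq(X, outcome, rcond=None)[0][1]

print(f"naive OLS slope: {slope(x, y):.2f}")       # biased upward by u

# Stage 1: fit x on z; Stage 2: regress y on the fitted values of x.
first_stage = slope(z, x)
x_hat = x.mean() + first_stage * (z - z.mean())
print(f"2SLS (IV) slope: {slope(x_hat, y):.2f}")   # close to the true 2.0
```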

The result of all these efforts is that econometrics is much more credible than it was when Leamer wrote his article in 1983 (at which time everyone seemed to agree it was fairly worthless). Hopefully it will continue to improve on this front.

A final, albeit less fundamental, reason I prefer econometrics and econometricians is that the nature of the field, with its numerous uncertainties, naturally demands a more modest interpretation of results. The rigid and hard-to-master framework of neoclassical theory often seems to give those who’ve mastered it the idea that they have been burdened with secret truths about the economy, which they are all too happy to parade on the op-ed pages of widely read newspapers. In contrast, you are unlikely to find Card & Krueger blithely asserting that the minimum wage has a positive effect on employment, and that anyone who disagrees with them just doesn’t understand econometrics. Perhaps this is just due to differences in the types of people who do theory versus those who do evidence, but I’d be willing to bet it is symptomatic of the generally more measured approach taken by econometricians.

The way forward?

I believe it would be a positive step for economists to opt for theoretical methods more closely resembling the econometric approach, preferring observed empirical regularities and basic statistical relationships to ‘rigorous’ theory. In fact, I have previously seen Steve Keen’s model referred to as ‘econometrics’, and perhaps this is broadly right in a sense. But it’s more of a compliment than an insult: ditching the straitjacket of marginalism, with its various restrictive assumptions (coupled with the insistence that we simply can’t do it any other way), and heading for simple stock-flow relationships between various economic entities could well be a step forward. It will of course seem like a step backwards to most economists, but then, highly complex models are not correct just because they are highly complex.

As for the Lucas Critique, well, statistical regularities that may collapse upon exploitation can be taken on a case-by-case basis: such collapses are actually not that difficult to foresee, and even the ‘Bastard-Keynesians’ saw it coming in the Phillips Curve (as did Keynes). Ironically, it seems economists themselves, blindly believing that they have ‘solved’ this problem, are least aware of it, having only a shallow interpretation of its implications (seemingly, as a gun that fires left). A more dynamic awareness of the relationship between policy and the economy would be a more progressive approach than being shackled by microfoundations.

I am half expecting my regular readers to point out 26723 problems with econometrics that I have not considered. To be sure, econometrics has problems: inferring causality will forever be an issue, as will the cumulative effects of the inevitable judgment calls involved in dealing with data. No doubt, econometrics is prone to misuse. However, it seems to me that most of the problems with econometrics are simply those experienced in all areas of statistics. This is at least a start: I would love, one day, to be able to say that the problems with economic theory were merely those experienced by all social sciences.

*Indeed, this blog would be more accurately titled ‘Unlearning Marginalism’, but obviously that wouldn’t be as catchy or as irritatingly provocative.


  1. #1 by Econ on June 21, 2013 - 2:47 pm

    Sincere question: do you know much about econometrics? Because I do and my reading of the above is that you have far, far too much faith in recent developments. This is most noticeable from your rather naive discussion of instrumental variables, on which there is now an extensive literature listing the many problems (often nevertheless ignored or confined to footnotes when convenient in empirical work). The truth is that not much has changed, dubious practices have just taken a different form. Not going into the detail now though, otherwise I won’t have anything to write myself ;)

    • #2 by Unlearningecon on June 22, 2013 - 1:59 pm

      Sincere question: do you know much about econometrics?

      Not as much as I know about economic theory, and not as much as I expect you know about econometrics, no. This post was more a ‘here’s my impression so far’ than a definitive verdict.

      Because I do and my reading of the above is that you have far, far too much faith in recent developments. This is most noticeable from your rather naive discussion of instrumental variables, on which there is now an extensive literature listing the many problem

      Do you have any good links on this? IV was really just one example, though: the most important aspect for me is the improvements in the data itself.

      Would you say econometrics is ‘as bad’ as marginalist theory? Or only about as bad as statistics in general?

      • #3 by Jan on June 23, 2013 - 12:33 am

        I agree with Econ. Well, I did my undergraduate years at the height of the mathematical fundamentalism in economics, and before that studied maths and logic. I think that was a really good start: to me it made the obvious flaws in all the variations of econometric inventions, with all their false assumptions, easy to see, and they were totally dominant in the early 80s. I know it inside out, but in my view its use is very limited, mostly even harmful. In a way it’s rather telling that many of the most brilliant economists, from J.M. Keynes and Gustav Cassel to Knut Wicksell, were mathematicians but hardly ever used mathematics, because in their view it did more harm than good and was almost never applicable. Gunnar Myrdal almost never used it either.

        He too was a highly skilled mathematical economist and got the job of building up the Econometric Society in London in the 1920s-30s, but concluded it was almost a dead end. Instead he spent a year in the British Library and wrote his classic The Political Element in the Development of Economic Theory (1930), which destroyed the foundations of marginal utility theory as well as equilibrium – a very underestimated book in my view, ahead of its time. He pointed out that analyses of development processes which focus only on economic factors are irrelevant and misleading, because historical, institutional, social and cultural factors also matter. He disputed the existence of a body of economic thought that is ‘objective’ in the sense of being value-free, and he accused econometrics of ignoring the problem of the distribution of wealth in its obsession with economic growth, of using faulty statistics and substituting Greek letters for missing data in its formulas, and of flouting logic. He wrote, “Correlations are not explanations and besides, they can be as spurious as the high correlation in Finland between foxes killed and divorces.” I can’t see his critique being any less valid today – if anything it is more relevant.

        Mark Blyth on the Danger of Mathematical Models

        Put mindless econometrics and regression analysis where they belong – in the garbage can! – Lars P. Syll

        http://larspsyll.wordpress.com/2012/12/02/put-mindless-econometrics-and-regression-analysis-where-they-belong-in-the-garbage-can/

        John Maynard Keynes’ critique of Tinbergen and econometrics

        http://www.ecn.ulaval.ca/~pgon/hpe/documents/econometrie/Keynes1939.pdf

      • #4 by Unlearningecon on June 23, 2013 - 3:26 pm

        I agree in general with the point about mathematics in economics, but I feel like you haven’t fully engaged the post.

        Strictly speaking, causality is impossible to establish: we only truly ever have past correlations. The types of ‘mindless regression’ you do in your first econometrics classes are indeed spurious, as Lars discusses (I linked to him in the post); however, I discussed why I think econometricians have managed to improve their claims of finding causality through the data, various techniques and so forth. This doesn’t mean their techniques are perfect but they are certainly improving, which in my opinion is more than you can say for marginalist theory.
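
        As a purely illustrative aside (a toy simulation of my own, not anything from Lars’s post), the classic ‘mindless regression’ trap is easy to reproduce: two unrelated random walks will routinely look strongly related under naive OLS.

```python
# Two independent random walks: no causal link by construction, yet naive OLS
# will often report a large R-squared and a 'significant'-looking slope.
import numpy as np

rng = np.random.default_rng(42)
n = 200

a = np.cumsum(rng.normal(size=n))   # think 'inflation'
b = np.cumsum(rng.normal(size=n))   # think 'rainfall' -- unrelated to a

X = np.column_stack([np.ones(n), a])
coefs = np.linalg.lstsq(X, b, rcond=None)[0]
fitted = X @ coefs
r_squared = 1 - ((b - fitted) ** 2).sum() / ((b - b.mean()) ** 2).sum()

print(f"slope = {coefs[1]:.2f}, R^2 = {r_squared:.2f} (true relationship: none)")
```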

  2. #5 by Boatwright on June 21, 2013 - 3:16 pm

    “Yet there are so many variables in the real world that we cannot discern, a priori, which ones are really the ‘key culprits’ in our purported causal chain, so inevitably this choice is something of a judgment call.”

    One of the most interesting qualities of Keen’s modelling is the ability to add variables at will. As I understand it, Keen started with maths from hydraulic engineering. One of the rules of this sort of modelling is robustness in real world, often chaotic conditions. Solutions sought are statistical and heuristic, and above all TESTABLE and TUNE-ABLE with results from physical models, such as flow tanks and wind tunnels. The success of meteorology in modelling complex, multi-variable, intrinsically chaotic events is also notable.

    As an amateur, I am curious to know how much work is going on in grad schools to develop computer models along the lines of Prof. Keen’s work?

    • #6 by Unlearningecon on June 22, 2013 - 2:09 pm

      Yeah, Keen’s model is quite a loose, flexible framework and as such is highly promising, able to go in many directions. Economists are predisposed to view this as not ‘rigorous’ but really it’s just a far better way of doing things, generating cyclical behaviour even at the basic level.

      As an amateur, I am curious to know how much work is going on in grad schools to develop computer models along the lines of Prof. Keen’s work?

      I’m not a grad student but my impression is: very little. Generally, grad school (particularly in the US) is centered around learning DSGE, production functions, Walras etc. For example, Noah Smith recently revealed that he doesn’t know what an agent-based model is (penultimate paragraph), so I doubt many grad schools teach alternative types of models.

      • #7 by metatone on June 22, 2013 - 5:37 pm

        Yikes, I’d missed that from Noah, maybe he just wrote badly? (Lots of critiques of ABM exist, so I’m not a big believer in it.) If not, that’s a bit scary.

      • #8 by Unlearningecon on June 23, 2013 - 3:09 pm

        If you read the comments, some of his commenters point out that he misunderstood what ‘agent-based’ means (thinking it just meant the model had some sort of ‘agent’, presumably). He doesn’t correct them or disagree, so I think that he really just did not know. This isn’t his fault, of course, but it’s a symptom of quite how unpluralistic neoclassical economics can be.

  3. #9 by noteconomist on June 21, 2013 - 3:28 pm

    The reason econometrics has the perception of having gotten better is that we have more data collection and more processing power than ever to figure out what’s BS right away.

    The points made about the distinction between neoclassical theory and the assumptions used for modelling in econometrics are very important. Too often the entirety of neoclassical economics is written off; but if viewed from a history of thought perspective, the whole era really did provide the launch pad for every framework thereafter. Not enough credit is given to those thinkers, wrong as they may have been on a number of things.

    • #10 by Unlearningecon on June 22, 2013 - 2:15 pm

      Data collection seemed to me to be the most fundamental way in which things have improved since Leamer’s article.

      Too often the entirety of neoclassical economics is written off; but if viewed from a history of thought perspective, the whole era really did provide the launch pad for every framework thereafter.

      Does econometrics have its roots in the marginalist revolution?

      In general, I’d probably be more inclined to disagree with you on this, in the sense that the marginalist revolution overshadowed the classical economists, who I believe had a far better framework.

  4. #11 by The Tea Boy on June 21, 2013 - 8:24 pm

    In response to #1, assuming you’re right in what you say about IVs (I’m covering them this summer…), surely it’s a positive sign that the shortcomings of econometric techniques are being found…presumably by econometricians!

    I’ve always been a bit puzzled (as someone with a very heterodox economic background) by the hostility of many heterodox economists to econometrics. For a start, it is (or should be) just a long word for statistics used in economics. And it’s a tool. Tools aren’t inherently good or bad; it depends on how, and what for, you are using them.

    Obviously naive empiricism is a massive pitfall to be avoided, as is failing to be aware of your assumptions, but I don’t (as the original post says) see any more evidence of the latter in ’metrics than in micro/macro/etc.

    There was a quote from one of the Cambridge econometrics teachers I saw on his website once, which I liked: something along the lines of teaching the UG metrics courses in order to equip students with the tools to spot when senior economists are trying to pull the wool over their eyes!

    • #12 by Unlearningecon on June 22, 2013 - 1:55 pm

      Yeah, econometrics often seems to be a self-evident joke for some heterodox people, but as you say it’s basically just ‘statistics in economics’, and is nowhere near on the same level as marginalism in terms of methodological absurdities. One should always be skeptical of statistics, but I don’t see anything in econometrics that discredits the discipline outright as some seem to think.

  5. #13 by Blue Aurora on June 22, 2013 - 4:16 pm

    Evidently you need to read Benoit Mandelbrot’s criticisms of the econometricians and John Maynard Keynes’s criticisms of the econometricians. Keynes was not arguing against the idea of econometrics per se, so much as against assuming a priori normal distributions and then running regressions.

    Here are two books which I highly recommend reading to bolster your knowledge, Unlearningecon.

    • #14 by Unlearningecon on June 23, 2013 - 3:16 pm

      Yeah, normal distributions are certainly a questionable assumption. I’m also aware of Keynes’ ambivalence (at best) towards econometrics. I’ll add those books to the list.

      • #15 by Blue Aurora on June 24, 2013 - 6:25 am

        Keynes’s “ambivalence (at best) towards econometrics”, as you put it, was directed at the improper use of technique rather than at the concept of the field itself. You have to read and understand A Treatise on Probability completely, cover-to-cover, in order to get a better idea of John Maynard Keynes’s correspondence with Jan Tinbergen and the early econometricians in the late 1930s to early 1940s. Reading A Treatise on Probability after reading Keuzenkamp’s book and Mandelbrot and Hudson’s book would help you understand things much more. But in the meantime…if you have access to a university library right now, I suggest downloading and reading a PDF copy of this July 1988 article published in Synthese.

        http://link.springer.com/article/10.1007/BF00869639

  6. #16 by metatone on June 22, 2013 - 5:41 pm

    I’m quite sympathetic to the Mandelbrot critique, so I’m maybe opposed to this post on a deep level, but on a higher level my real problem isn’t with econometrics practitioners. Many of them seem appropriately humble and well-intentioned and the discipline seems able to self-improve.

    The depressing bit for me today is that if you ask an econometrician for a policy recommendation, 7/10 times you’ll get some regurgitated “neoclassical macro” or “economics 101” type statement, even in areas where their own work has been asking questions about such frameworks.

  7. #17 by thehobbesian on June 22, 2013 - 9:19 pm

    “It is worth noting that, though Reinhart and Rogoff’s 90% threshold was junk science, they were exposed relatively soon after their data were made available.”

    The term “relatively soon” may be open for dispute for the people who lived in jurisdictions where policy makers enforced austerity on them for several years using Reinhart and Rogoff’s work as one of the justifications before it was finally debunked.

    • #18 by Unlearningecon on June 23, 2013 - 3:24 pm

      I disagree that Reinhart & Rogoff was what caused the drive for austerity, or was even necessary justification for it – the powerful will reach for whatever is at hand to justify what they want to do anyway.

      However, I agree that it should have been made transparent immediately. In fairness, R & R have also been chastised for not doing this.

      • #19 by thehobbesian on June 23, 2013 - 7:34 pm

        Yes, you are probably correct. I said that a little tongue in cheek; certainly the push for austerity was not solely because of Reinhart and Rogoff, but I have seen it referenced by the pro-austerity crowd on several occasions.

        But it does underscore some of the problems of econometrics which theory could be used to address. As a lawyer I am familiar with forensic economics, which is used in tort cases. Say a 35-year-old Hispanic male with a master’s-level education dies and a negligence suit is brought: when calculating losses and damages, they will often run a battery of calculations to determine lost income to the family. Usually it starts from what the average person in that demographic (in this case a 35-year-old Hispanic male with a master’s degree) makes in a lifetime, and then you add or subtract based on other factors thrown in about the person. Of course you don’t have to be a genius to realize that there really is no way we can know for sure what anyone could have made, but courts don’t need to be exact, they just need to be “reasonable” in their decisions and come to a damages cost that is good enough.
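
        For what it’s worth, here is a stripped-down sketch of that kind of ‘reasonable, not exact’ calculation – the structure only, with entirely invented figures: project an average earnings profile for the demographic out to retirement and discount it back to the present.

```python
# Stripped-down 'reasonable, not exact' lost-earnings calculation.
# Every figure below is an invented placeholder, not actuarial data.
def lost_earnings(current_age, retirement_age, annual_income,
                  growth_rate=0.02, discount_rate=0.04):
    """Present value of projected income from current_age to retirement_age."""
    total = 0.0
    for year in range(retirement_age - current_age):
        projected = annual_income * (1 + growth_rate) ** year   # assumed wage growth
        total += projected / (1 + discount_rate) ** year        # discounted to today
    return total

# Hypothetical 35-year-old earning $70,000 a year, retiring at 65.
print(f"Estimated lost earnings: ${lost_earnings(35, 65, 70_000):,.0f}")
```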

        Now obviously the hard sciences want to be exacting, but law and policy don’t have to be. And economics, even when using econometrics, is often dealing in things that can never be verified. But yet courts and government agencies rely on calculations of things like the value of a human life every day. And if you look at it in exact terms, if you are trying to calculate the economic worth of a person who died, there are so many assumptions to make that it is rather ridiculous. However, that doesn’t mean we should refrain from making such calculations. Economics is never going to be engineering or astrophysics; there is always an element to it that is based on rule-of-thumb calculations. And because of that, it’s always good to have theory to keep the calculations from running away from themselves and becoming further detached from reality. Theory doesn’t have to be a foolproof calculation – evolution is not like that – it is just something that interprets the data and helps put it into an overall framework.

  8. #20 by John Bragin on June 25, 2013 - 5:53 pm

    Further to the comments on Mandelbrot and Keynes: Econometrics, like statistics in all of the social sciences, is shackled to the Normal Distribution. Yet the Normal Distribution is the least normal in biology and society. The normal distribution depends on independent, identically distributed individuals. Independence being the key attribute. Only a vanishingly small number of traits are independent, IQ being one of them. Interdependent interaction is far more prevalent, and this gives rise to skewed distributions, such as power-law signatures. But one reason social science statisticians cling so tightly to the normal distribution is that with skewed distributions the standard deviation does not make much sense; and one cannot use regular procedures such as inferential statistics and hypothesis testing. In addition, that favored assumption of equilibrium amongst economists must also be jettisoned. (There are, of course, other reasons to jettison this assumption.) Wealth; income; excursions (up and down) in financial markets; city sizes and many other economic and financial quantities are all characterized by skewed distributions. A number of complex systems scientists are working these days to develop new statistical methods to treat skewed distributions, but few economists, including those in econometrics (particularly in academia), are willing to embrace complexity economics. Even if they admit that non-normalities exist, they tend to use the normal distribution as close enough, or claim that the cases of non-normal distributions are taken care of by the central limit theorem, that is, given a very large number of individuals a situation approaches a normal distribution.
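
    A toy simulation (my own invented numbers, nothing more) illustrating the point about standard deviations and skewed distributions: for a heavy-tailed Pareto sample the sample standard deviation never really settles down, whereas for normal data it does.

```python
# Toy simulation: the sample standard deviation settles down for normal data,
# but for a heavy-tailed Pareto sample (tail index 1.5, infinite variance)
# it is erratic and tends to keep growing with the sample size.
import numpy as np

rng = np.random.default_rng(7)

for n in (1_000, 100_000, 10_000_000):
    normal_sd = rng.normal(size=n).std()
    pareto_sd = (rng.pareto(1.5, size=n) + 1).std()
    print(f"n = {n:>10,}: normal sd = {normal_sd:5.2f}   pareto sd = {pareto_sd:10.2f}")
```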

    • #21 by The Tea Boy on June 26, 2013 - 5:36 pm

      John

      Firstly, the Central Limit Theorem does not say that “given a very large number of individuals a situation approaches a normal distribution”. It states that as the number of observations increases, the distribution of sample means will approach the normal. This is demonstrably true whatever the underlying population distribution (provided its variance is finite).
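
      A quick simulated sketch of that statement (toy numbers, assuming numpy): the draws below come from a heavily skewed exponential distribution, yet the distribution of their sample means becomes nearly symmetric as the sample size grows.

```python
# Sample means drawn from a heavily skewed (exponential) population: their
# skewness shrinks toward zero as the sample size grows, as the CLT predicts.
import numpy as np

rng = np.random.default_rng(3)

def skewness(values):
    centred = values - values.mean()
    return (centred ** 3).mean() / values.std() ** 3

for sample_size in (1, 10, 100, 1000):
    means = rng.exponential(size=(20_000, sample_size)).mean(axis=1)
    print(f"sample size {sample_size:>4}: skewness of sample means = {skewness(means):+.2f}")
```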

      Secondly, the majority of new econometric work is non- or semi-parametric work which does not rely on prior assumptions about the distributions of data sets. Saying that econometrics is shackled to the normal is simply inaccurate (or, at best, outdated). Certainly less so than the micro and macro papers which blithely repeat old nostrums about what is and isn’t ‘statistically significant’.
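
      For concreteness, here is a bare-bones non-parametric sketch – a hand-rolled Nadaraya-Watson kernel regression on toy data, assuming only numpy – which fits a curve as a locally weighted average and assumes nothing about normality or functional form.

```python
# Bare-bones Nadaraya-Watson kernel regression: a locally weighted average that
# assumes nothing about normality or functional form. Toy data, Gaussian kernel.
import numpy as np

rng = np.random.default_rng(5)
x = np.sort(rng.uniform(0, 10, size=300))
y = np.sin(x) + rng.normal(scale=0.3, size=300)   # unknown nonlinear relationship

def kernel_regression(x_grid, x_data, y_data, bandwidth=0.5):
    """Weighted average of y_data around each grid point, Gaussian weights."""
    diffs = (x_grid[:, None] - x_data[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)
    return (weights * y_data).sum(axis=1) / weights.sum(axis=1)

grid = np.linspace(0.5, 9.5, 5)
for point, fit in zip(grid, kernel_regression(grid, x, y)):
    print(f"x = {point:4.1f}: fitted = {fit:+.2f}   (true sin(x) = {np.sin(point):+.2f})")
```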

      • #22 by Unlearningecon on June 27, 2013 - 6:54 pm

        I do worry about how much of the ‘path-breaking’ developments in econometrics filter down to the majority of students/graduates who have been trained in it, or who advise policy. It’s all very well if academics ‘know’ all the flaws with the normal distribution, but if these flaws are not made well known at every level, it’s still a problem. I mean, we wouldn’t release engineering graduates without adding friction to their models.

        Furthermore, even amongst those who are exposed to this, there may be a tendency to view normal distributions as the baseline or default. For these reasons I feel John’s concerns are valid, though perhaps not exhaustive.

      • #23 by The Tea Boy on June 27, 2013 - 8:06 pm

        Hmm, sort of agree.

        In general that’s very true. Of course, those at the cutting edge in economics know humans are complex, but students still leave university with a BSc thinking in terms of IS-LM or AS-AD or whatever: independent central bank good, labour unions bad, public sector reform etc etc. Totally agree with that criticism of the discipline as a whole.

        But I’m not at the cutting edge of anything! Just finished a diploma at mid-ranking UK dept, starting MSc in the autumn, and yet I know to handle assumptions about distributions with the caution described above, so I don’t think it’s fair to say that it’s just academics who know you can’t apply the Normal distribution to every situation.

  9. #24 by John Bragin on June 27, 2013 - 7:27 pm

    To “The Tea Boy”:

    I do understand how the CLT works. My implication was that it is very often “over-used” (outside its strictly technical meaning) to justify a normal distribution view of the world in social science, qualitatively as well as quantitatively.

    Since I am not one who follows the newest work in econometrics (I got turned off, understandably, some years back), I did not know that much of it is now semi- or non-parametric. So thanks for the heads-up on this.

    I should also add that there has been a power-law craze which is only now cooling down. Some workers have seen power laws everywhere, and so we were in danger of substituting one ism (PLs) for the older ism (NDs). In many cases it may just be that no distribution fits and we should just rely on a non-parametric approach, even though, whatever it may be, it is certainly a highly skewed situation.

    UE: I think there is a case for looking at the ND as a starting point (default, if you will). Always keeping in mind that it is only a starting point. Going by the principle that one starts with the simplest reasonable model (if the ND seems reasonable in any particular case) and then makes it more complex (realistic) as one goes along. So I think there is a difference between a default (simple) and a baseline model. But starting with a ND model should only be done with caution and only as a starting point.

    • #25 by The Tea Boy on June 27, 2013 - 8:10 pm

      Just to clarify – I wrote the last reply above before seeing this! I would agree of course about inappropriate reliance on Normal (or any other) distribution, but just get the impression it’s not as prevalent as maybe it used to be.
