Posts Tagged Assumptions
Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.
To illustrate this, consider Milton Friedman’s famous exposition of the as if argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game by using such equations, even though this wouldn’t strictly describe the mechanics of the game.
There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what make the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for the social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness, making perfectly fungible trade-offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what makes the study of economics worthwhile.
I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to maximise their expenditure after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from an economic theory. The aforementioned example completely invalidates Ricardian Equivalence. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.
If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Ones such as how the banking system works; how firms price; behavioural quirks; the fact that marginal products cannot be well-defined; the fact that capital can move across borders; and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)
I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is only done one at a time, preserving the other assumptions. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.
I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to emergent properties of individually optimal behaviour. In any case, economists should be wary of the as if argument even without accepting my crazy heterodox position.
The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Resting on a counterfactual is justified only when we don’t know how the system actually works. Once we do know – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.
I have previously discussed Milton Friedman’s infamous 1953 essay, ‘The Methodology of Positive Economics.’ The basic argument of Friedman’s essay is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross-check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.
The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ’it’s just an abstraction, all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.
The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be inaccurate enough. Is this really possible in economics? I recall that, during an engineering class, my lecturer introduced us to the ‘perfect gas.’ He said it was unrealistic but showed us that it gave results accurate to 3 or 4 decimal places. Is anyone aware of econometrics papers which offer this degree of certainty and accuracy? In my opinion, the fundamental lack of accuracy inherent in social science shows that economists should be more concerned about what is actually going on inside their theories, since they are less liable to spot mistakes through pure prediction. Even if we are willing to tolerate a higher margin of error in economics, results are always contested and you can find papers arguing either way on almost any issue.
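To give a feel for the precision the ‘perfect gas’ assumption delivers in physics, here is a minimal sketch (using standard textbook values for nitrogen; the exact figures are illustrative, not from the lecture) comparing the ideal-gas law with the more realistic van der Waals equation of state:

```python
# Compare the ideal-gas law with the van der Waals equation for one mole
# of nitrogen near atmospheric conditions. Constants are standard
# textbook values in SI units.
R = 8.314                  # gas constant, J/(mol*K)
a, b = 0.1370, 3.87e-5     # van der Waals constants for N2

T = 273.15                 # temperature, K
V = 0.0224                 # molar volume, m^3/mol (roughly 22.4 L)

p_ideal = R * T / V
p_vdw = R * T / (V - b) - a / V**2

rel_err = abs(p_ideal - p_vdw) / p_vdw
print(f"ideal: {p_ideal:.0f} Pa, van der Waals: {p_vdw:.0f} Pa, "
      f"relative difference: {rel_err:.2%}")
```

The two equations agree to roughly a tenth of a percent here – the kind of accuracy against which ‘unrealistic but accurate’ defences are judged in the physical sciences.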
The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions, and Friedman’s methodology has no way of dealing with this.
There are two obvious examples of this in economics. The first is the DSGE models used by central banks and economists during the ‘Great Moderation,’ which predicted the stable behaviour exhibited by the economy. However, Steve Keen’s Minsky Model also exhibits relative stability for a period, before being followed by a crash. Before the crash took place, there would have been no way of knowing which model was correct, except by looking at internal mechanics.
Another example is the Efficient Market Hypothesis. This predicts that it is hard to ‘beat the market’ – a prediction that, due to its obvious truth, partially explains the theory’s staying power. However, other theories also predict that the market will be hard to beat, either for different reasons or a combination of reasons, including some similar to those in the EMH. Again, we must do something that is anathema to Friedman: look at what is going on under the bonnet to understand which theory is correct.
The third problem is the one I initially homed in on: the vagueness of Friedman’s definition of ‘assumptions,’ and how this compares to the definitions used in science. This found its best elucidation with the philosopher Alan Musgrave. Musgrave argued that assumptions have clear – if unspoken – definitions within science. There are negligibility assumptions, which eliminate one or more known variables (a closed economy is a good example, because it eliminates imports/exports and capital flows). There are domain assumptions, for which the theory is only true as long as the assumption holds (oligopoly theory is only true for oligopolies).
There are then heuristic assumptions, which can be something of a ‘fudge’: a counterfactual model of the system (firms equating MC to MR is a good example of this). However, these are often used for pedagogical purposes and dropped before too long. Insofar as they remain, they require rigorous empirical testing, which I have not seen for the MC=MR explanation of firms. Furthermore, heuristic assumptions are only used if internal mechanics cannot be identified or modelled. In the case of firms, we do know how most firms price, and it is easy to model.
The fourth problem is related to the above: Friedman is misunderstanding the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other. The system is always under ongoing investigation, because we always want to know what is going on under the bonnet. Whatever the efficacy of their predictions, theories are only as good as their assumptions, and relaxing an assumption is always a positive step.
Consider the following theory’s superb record for prediction about when water will freeze or boil. The theory postulates that water behaves as if there were a water devil who gets angry at 32 degrees and 212 degrees Fahrenheit and alters the chemical state accordingly to ice or to steam. In a superficial sense, the water-devil theory is successful for the immediate problem at hand. But the molecular insight that water is composed of two atoms of hydrogen and one atom of oxygen not only led to predictive success, but also led to “better problems” (i.e., the growth of modern chemistry).
If economists want to offer lucid explanations of the economy, they are heading down the wrong path (in fact this is something employers have complained about with economics graduates: lost in theory, little to no practical knowledge).
The fifth problem is one that is specific to social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know if the model applies or not. Just because a model has proven useful in one situation or location, it doesn’t guarantee that it will be useful elsewhere, as institutional differences might render it obsolete.
The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge – data entry, lawyers and financial advisors, for example – is nullified. There is actually good reason to believe a frictionless world such as the one at the core of neoclassicism renders the role of many firms and entrepreneurs obsolete. Hence, we must be careful about the possibility of certain assumptions invalidating the area we are studying.
In my opinion, Friedman’s essay is incoherent even on its own terms. He does not define the word ‘assumption,’ and nor does he define the word ‘prediction.’ The incoherence of the essay can be seen in Friedman’s own examples of marginalist theories of the firm. Friedman uses his newfound, supposedly evidence-driven methodology as grounds for rejecting early evidence against these theories. He is able to do this because he has not defined ‘prediction,’ and so can use it in whatever way suits his preordained conclusions. But Friedman does not even offer any testable predictions for marginalist theories of the firm. In fact, he doesn’t offer any testable predictions at all.
Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive to both internal critiques of their theories, and their testable predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark. Obviously, I believe this is the case with large areas of economics, but discussing that is beyond the scope of this post. Ultimately, economists have to change their stance on assumptions if heterodox schools have any chance of persuading them.
I’m not sure what it is about economics that makes both its adherents and its detractors feel the need to make constant analogies to other sciences, particularly physics, to try to justify their preferred approach. Unfortunately, this problem isn’t just a blogosphere phenomenon; it appears in every area of the field, from blogs to articles to widely read economics textbooks.
For example, not too infrequently I will see a comment on heterodox work along the lines of “Newton’s theories were debunked by Einstein but they are still taught!!!!” Being untrained in physics (past high school) myself, I am grateful to have commenters who know their stuff, and can sweep aside such silly statements. In the case of this particular argument, the fact is that when studying everyday objects, the difference between Newton’s laws, quantum mechanics and general relativity is so demonstrably, empirically tiny that they effectively give the same results.
So even though quantum mechanics teaches us that in order to measure the position of a particle you must change its momentum, and that in order to measure its momentum you must change its position, the size of these ‘changes’ on everyday objects is practically immeasurable. Similarly, even though relativity teaches us that the relative speed of objects is ‘constrained’ by the universal constant, the effect on everyday velocities is too small to matter. Economists are simply unable to claim anything close to this level of precision or empirical corroboration, and perhaps they never will be, due to the fact that they cannot engage in controlled experiments.
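To put numbers on this, here is a short sketch (the speeds are my own illustrative choices) of how far the Lorentz factor departs from 1 at everyday velocities:

```python
# How big is the relativistic correction at everyday speeds? The Lorentz
# factor gamma = 1/sqrt(1 - v^2/c^2) governs time dilation; for small v
# it is approximately 1 + v^2/(2*c^2), which is numerically safer to
# compute than gamma itself.
c = 299_792_458.0   # speed of light, m/s

def gamma_minus_one(v):
    """Deviation of the Lorentz factor from 1, via the small-v expansion."""
    return v**2 / (2 * c**2)

car = gamma_minus_one(30.0)    # a car at ~108 km/h
jet = gamma_minus_one(250.0)   # an airliner at cruise speed
print(f"car: {car:.1e}, jet: {jet:.1e}")
```

Even for an airliner the correction is of order 10^-13 – utterly negligible, which is exactly why Newton’s laws survive in everyday engineering.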
If you ask an astronomer how far a particular star is from our sun, he’ll give you a number, but it won’t be accurate. Man’s ability to measure astronomical distances is still limited. An astronomer might well take better measurements and conclude that a star is really twice or half as far away as he previously thought.
Mankiw’s suggestion that astronomers have this little clue about what they are doing is misleading. We are talking about people who can calculate the existence of a planet close to a distant star, based on the (relatively) tiny ‘wobble’ of said star. Astronomers have many different methods for calculating stellar distances: parallax, red shift, luminosity; and these methods can be used and cross-checked against one another. As you will see from the parallax link, there are also in-built, estimable errors in their calculations, which can help keep them from straying too far off the mark.
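As a rough sketch of the method (the Proxima Centauri figures below are approximate and used purely for illustration), the parallax rule d = 1/p carries its error estimate with it:

```python
# Stellar distance from parallax: d (parsecs) = 1 / p (arcseconds).
# Because the measurement error on p is itself known, the resulting
# error on the distance can be estimated too.
def distance_pc(parallax_arcsec):
    return 1.0 / parallax_arcsec

p, sigma = 0.768, 0.001    # parallax and measurement error, arcsec
d = distance_pc(p)
d_lo, d_hi = distance_pc(p + sigma), distance_pc(p - sigma)
print(f"distance: {d:.3f} pc (between {d_lo:.3f} and {d_hi:.3f})")
```

The point is not the particular star but the discipline: the error bars are part of the calculation, which is a long way from having ‘little clue.’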
While it is true that at large distances, luminosity can be hard to interpret (a star may be close and dim, or bright and far away), Mankiw is mostly wrong. Astronomers still make many, largely accurate predictions, while economists’ predictions are at best contested and uncertain, or at worst incorrect. The very worst models are unfalsifiable, such as the NAIRU Mankiw is defending, which seems to move around so much that it is meaningless.
In the physical world, there is ‘no such thing’ as a frictionless plane or a perfect vacuum.
Perhaps not, but all these assumptions do is eliminate a known mathematical variable. This is not the same as positing an imaginary substance (utility) just so that mathematics can be used; or assuming that decision makers obey axioms which have been shown to be false time and time again; or basing everything on the impossible fantasy of perfect competition, which the authors go on to do all at once. These assumptions cannot be said to eliminate a variable or collection of variables; neither can it be said that, despite their unrealism, they display a remarkable consistency with the available evidence.
Even if we accept the premise that these assumptions are merely ‘simplifying,’ the fact remains that engineers or physicists would not be sent into the real world without friction in their models, because such models would be useless – in fact, in my own experience, friction is introduced in the first semester. Jehle and Reny do go on to suggest that one should always adopt a critical eye toward their theories, but this is simply not enough for a textbook that calls itself ‘advanced.’ At this level such blatant unrealism should be a thing of the past, or just never have been used at all.
Economics is a young science, so it is natural that, in search of sure footing, people draw from the well respected, well grounded discipline of physics. However, not only do such analogies typically demonstrate a largely superficial understanding of physics, but since the subjects are different, analogies are often stretched so far that they fail. Analogies to other sciences can be useful to check one’s logic, or as illuminating parables. However, misguided appeals to and applications of other models are not sufficient to justify economists’ own approach, which, like other sciences (!), should stand or fall on its own merits.
Chapter 8 of Steve Keen’s Debunking Economics channels a paper (it’s short, and worth reading) by the philosopher Alan Musgrave, which distinguishes between three types of assumptions: negligibility, domain and heuristic.
According to Friedman’s 1953 essay, theories are significant when they “explain much by little,” and to this end “will be found to have assumptions that are wildly unrealistic…in general, the more significant the theory, the more unrealistic the assumptions.” By distinguishing between the different types of assumption Musgrave shows how Friedman misunderstands the scientific method, and that his argument is only partially true for one type: negligibility assumptions, which we will look at first.
Negligibility assumptions simply eliminate a specific aspect of a system – friction, for example – when it is not significant enough to have a discernible impact. Friedman is correct to argue that these assumptions should be judged by their empirical corroboration, but he is wrong to say that they are necessarily ‘unrealistic’ – if air resistance is negligible then it is in fact realistic to assume a vacuum. I don’t regard many economic assumptions as fitting into this category, though many of the factors Friedman says a ‘truly’ realistic theory would need to include, such as eye colour, fit the bill.
If a theory is not corroborated by the evidence, it may be because the phenomenon under investigation does require that air resistance is taken into account. So the previous theory becomes a ‘domain’ theory, for which the conclusions only apply as long as the assumption of a vacuum applies. Contrary to Friedman, the aim of ‘domain’ assumptions is to be realistic and wide ranging, so that the theory may be useful in as many situations as possible. Many of the assumptions in economics are incredibly restrictive in this sense, such as assuming equilibrium, neutrality of money or ergodicity.
A heuristic assumption is a counterfactual proposition about the nature of a system, used to investigate it in the hope of moving on to something better. These can also be retained to guide students through the process of learning about the system. If a domain assumption is never true, then it may transform into a heuristic assumption, as long as there is an eye to making the theory more realistic at a later stage. The way Piero Sraffa builds up his theory of production is a good demonstration of this approach: starting with a few firms, no profit, no labour, and ending up with multiple firms with different types of capital and labour. In this sense many economic models are half-baked, in that they retain assumptions that are unrealistic for phenomena that are not ‘negligible,’ even at a high level.
Musgrave colourfully describes the evolution of scientific assumptions:
what in youth was a bold and adventurous negligibility assumption, may be reduced in middle-age to a sedate domain assumption, and decline in old-age into a mere heuristic assumption.
Musgrave is partially wrong in this formulation, in my opinion – assumptions can start out as heuristics and become domain assumptions later on, such as the perfect gas or optimising bacteria. But there are always strict criteria for when the theory built on the assumption simply becomes useless, and there is always a view to discarding the heuristic when something better comes along. Economic theory tends to weave between the different types of assumption without realising it or drawing attention to the shift.
Keen wryly notes that assumptions obviously matter to economists – they just have to be Lucas Approved™. The reaction by many neoclassical journals to papers such as his, which do not toe the party line with assumptions, demonstrates his point effectively. He also points out that, in fairness to neoclassical economists, the hard sciences are not necessarily the humble havens they are made out to be, and to this day physicists are resistant to questioning accepted theories. However, it is true that economists seem more vehement in the face of contradictory evidence than those in any other field.
I regard this as case closed on Friedman’s methodology. Economists need to learn to draw attention to exactly which type of assumption they are making in order for the science to progress, or else risk having no clear parameters for where a theory should be headed, and under which conditions it can be considered valid.
I like to question almost every aspect of economic orthodoxy. However, I am also interested in forming a coherent view of what is actually wrong with economics, rather than a caricature. So it pains me to see misguided criticisms such as Suzanne Moore’s piece a few weeks back, whose characterisations of economic theory will only serve to misguide the uninformed and elicit dismissive reactions from economists themselves. So here I present a list of things not to highlight when attacking neoclassical economics, in the hope of assisting would-be critics of the discipline.
Criticising early assumptions
Don’t get me wrong, criticising economists’ perversion of the use of assumptions is fair game. However, critics often go down the path of criticising ‘pure rationality’ or ‘perfect information.’ Whilst these are elements of core models (and those models should be attacked because of this, but with the caveat that the core models are the target), they are generally not found in the higher echelons of economics. Models at that level often assume imperfect information and bounded rationality, and can also incorporate other biases.
Most notably, the idea that economic theory assumes everybody is a selfish, emotionless self-maximiser is a common trope, but as Chris Dillow noted in the link above, it’s not entirely true. More importantly, it is also defensible as an assumption – a heuristic by which to approximate behaviour, at least until something better comes along. It is important to distinguish between good and bad assumptions from a scientific standpoint, rather than by how absurd they appear to be at first glance.
Many critics of economics, including well-informed ones, make the mistake of arguing that economics always assumes the economy is in equilibrium, tending to equilibrium, oscillating closely around equilibrium, or something along these lines. It is true that many economic models do this; it is also true that economic models start from the assumption that the economy is in equilibrium, and see what happens from there. However, economists generally mean something very different to other scientists when they say equilibrium. From the horse’s mouth:
An equilibrium in an economic model is characterized by two basic conditions which hold in all of the model’s time periods: i) all agents in the model solve the maximization problems implied by their preferences, resource constraints, information sets, etc; and ii) markets for all goods in the model “clear.” An equilibrium is not a snapshot of the model economy at one point in time. Instead, it is the model’s entire time path.
Even on first inspection, this type of equilibrium clearly has problems of its own, but I will save them for another post. The important thing to remember is that this, rather than a stable state, is what economists often mean when they talk about equilibrium.
Economist’s Political Beliefs
Economists are not all free marketeers – in fact, they generally lean to the left. Neoclassical economics, broadly speaking, concludes that we should: regulate oligopolies, monopolies and banking; do more to protect the environment and intervene in the case of other externalities; have some public provision of health, education and welfare; and as that survey shows, economists are generally approving of things such as safety regulations.
As I have said before, I think economic theory as taught lends itself to being used by free marketeers, because of the way the ‘market’ is presented as natural and the government ‘intervenes.’ I also object to the fact that economics applies the same analysis to every market from apples to education to labour. And it is true that the market is presented as generally equilibrating and efficient, except in a few choice cases. However, the impact of these things is not that all economists support ‘right wing’ policy prescriptions, but that neoclassical theory can generally be coopted to provide justification for them.
Naturally, I sympathise with many who try to criticise economics, as they correctly notice that the field’s empirical record (at least in macro) is not great; that many of the policy prescriptions seem to favour the rich whilst hurting the poor; that some of the models taught in economics are unintuitive and perverse. Economists are also partly to blame for not communicating their discipline well to the public, seemingly preferring to dismiss critics as ignorant and revel in their mastery of a field I consider wholly useless.
Having said this, to engage economists properly it’s important to ask the right questions – ones that economists find difficult to answer. Economists have stock responses to many of the ‘pop’ criticisms of their discipline, so using them will only serve to reinforce economists’ belief in their own knowledge and create further barriers to engagement.
Of course, failing all of this, you can just repeat ‘why didn’t you see the crisis coming?’ over and over.
Recently, a thought occurred to me regarding the disconnect between economic models/assumptions and the economy they purport to represent. It is demonstrated aptly by a quote, via Jonathan Catalan, from Frank Knight:
To begin with a general abstract answer, it will be evident to anyone with a rudimentary understanding of economic processes and analysis that profit (always in the sense of pure profit) would be absent under the conditions of equilibrium with “perfect competition,” (which may be defined in more than one way). The “tendency” of the competitive processes of buying and selling and the control of production is to impute the whole product to the productive agencies which create it, leaving nothing for entrepreneurship as a distinct function (except for monopoly gain, referred to below). This means that under the conditions of ideal equilibrium (stationary or moving) the function of entrepreneurship itself is entirely absent from the economy.
This isn’t the only time that economic assumptions undermine themselves. For example, another problem with perfect competition is that it assumes everyone is a ‘price taker’; that is, they cannot set prices themselves. But if everybody is a ‘price taker’ and nobody can be a ‘price maker’, how is there a price?
The assumption of perfect information also undermines the study of the economy, for it assumes away most real world services. Obvious examples are pure data processing companies: if everybody had access to, and the capacity to retain, information on this level, then the companies would simply not exist.
Furthermore, many services are born of a lack of information – ‘information asymmetries,’ in the jargon. If you hire a lawyer, it’s primarily because you don’t have a comprehensive knowledge of the law; if you hire a stockbroker, it’s because you don’t know what to do with your stocks, or how to do it; if you use a teacher, it’s because you don’t know something that you want to know.
Rationality also potentially undermines entrepreneurship. Consider this quote from blogger Matt Sherman:
The process of going from nothing to something…is inherently irrational…To embark on it is to leave the world of economic modelling…[P]rogress requires madness, that is, the freedom to pursue choices whose rationality can’t be measured.
A rational, reasonably emotionless, utility maximising individual, when faced with the choice between steady wage income – which they can casually trade off against leisure as they please – and the alternative of highly volatile and uncertain profits, would clearly opt for the former.
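This can be made concrete with a minimal sketch (all incomes and probabilities are invented) of a risk-averse, log-utility maximiser facing exactly this choice:

```python
import math

# A hypothetical illustration: a risk-averse (log-utility) maximiser
# compares a steady wage with a volatile entrepreneurial venture that
# has a HIGHER expected income. All numbers are invented.
def log_utility(income):
    return math.log(income)

wage = 40_000                              # certain income
venture = [(0.5, 10_000), (0.5, 80_000)]   # (probability, income) pairs

expected_income = sum(p * x for p, x in venture)         # 45,000 > wage
eu_venture = sum(p * log_utility(x) for p, x in venture)
u_wage = log_utility(wage)

print(f"E[income] of venture: {expected_income:,.0f}")
print(f"E[utility] of venture: {eu_venture:.3f} vs wage: {u_wage:.3f}")
```

With concave utility, the certain wage beats the venture even though the venture’s expected income is higher – which is the sense in which textbook rationality leaves little room for entrepreneurship.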
Perfect competition, perfect information and pure rationality are not always used in the higher echelons of modern economics, but that’s not the point. The fact is that they are often used as starting points, and are still taught in most courses, despite their clear incoherence.
Capitalist economies thrive on the inefficiencies and ‘frictions’ presumed to be the only obstacle to the economy functioning ‘efficiently’, in the sense of economics textbooks. Should you remove all these ‘frictions’, it seems that the foundations of economic theory would leave us in a world with no firms, no entrepreneurship and few business opportunities. In other words, a large portion of economics could barely be said to be a theory of capitalism.
My previous post on assumptions was not quite rigorous enough in its definition of assumptions, and attracted some skeptical feedback from the commenter named isomorphisms. Allow me to reiterate my point more clearly.
The distinction between hypotheses and assumptions was intuitively appealing, but of course all assumptions could be said to be hypotheses in a sense. However, I think most scientists would agree that a useful assumption has definitive characteristics, even if it’s difficult to pin down exactly what those are. I think they’d also agree that counterfactual propositions about the mechanics of a system are not useful assumptions. So what are?
At their heart, assumptions are intended to simplify analysis – this is an oft-used defence of economists. But the crucial way in which assumptions are able to do this is by eliminating a specific complication. Of course, this alone is not a sufficient condition. Assumptions also need to have a clear impact on the analysis, so we can be sure what happens when they are relaxed.
How many economic assumptions meet these two criteria?
Firms equating marginal cost to marginal revenue certainly doesn’t, as it’s a proposition about the nature of the firm rather than an assumption that simplifies the problem – in fact, cost-plus pricing is far easier to calculate and also appears to be far more widely used.
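To make the contrast concrete, here is a sketch with entirely made-up numbers: cost-plus pricing is a single multiplication, while the textbook MR=MC rule requires the firm to know its whole demand curve before it can even begin.

```python
# Hypothetical numbers for illustration only.
unit_cost = 10.0
markup = 0.30

# Cost-plus pricing: one line of arithmetic.
cost_plus_price = unit_cost * (1 + markup)   # 13.0

# MR = MC pricing: requires knowing the demand curve.
# Assume linear demand q = a - b*p, i.e. inverse demand p = (a - q)/b.
a, b = 100.0, 2.0                 # assumed demand parameters
# Revenue R(q) = q*(a - q)/b, so MR = (a - 2q)/b.
# Setting MR = MC (= unit_cost) gives q* = (a - b*MC)/2.
q_star = (a - b * unit_cost) / 2  # 40.0
mrmc_price = (a - q_star) / b     # 30.0

print(cost_plus_price, mrmc_price)
```

The point is not the numbers but the informational burden: the second calculation is impossible without the parameters a and b, which no real firm observes directly.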
Perfect information can’t be said to eliminate a specific complication – it’s simplifying in a sense, but it potentially ‘simplifies’ the analysis to the point of undermining it, hence creating its own complications (you’d eliminate most real-world firms). Analysis is entirely possible without this assumption – ‘Schumpeterian’ economics uses imperfect information to its advantage.
Rational self maximisation, on the other hand, is a good example of a defensible assumption: it simplifies how people make decisions and has clear implications. Furthermore, it can easily be modified to include behavioural characteristics such as loss aversion (though economists seem in no hurry to do so).
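As a sketch of how easily the standard framework can absorb such a modification: Kahneman and Tversky’s value function simply weights losses more heavily than equivalent gains (their 1992 estimates put the loss-aversion coefficient at roughly 2.25).

```python
# A minimal loss-averse value function in the spirit of prospect theory.
# Parameter values are Tversky and Kahneman's (1992) estimates.
def value(x, alpha=0.88, lam=2.25):
    """Gains are curved by alpha; losses are additionally weighted by lam."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A 100-pound loss hurts more than a 100-pound gain pleases:
print(value(100), value(-100))
```

One function swapped in for another – the rest of the maximisation machinery is untouched, which is exactly what makes this a workable assumption rather than a load-bearing hypothesis.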
I stand by the idea that assumptions are an appropriate target for criticising economics, and feel this is a much more coherent and useful definition of what makes a good or bad assumption.
It is a major gripe of mine that in economics the word ‘assumption’ is often used where ‘hypothesis’ would be more accurate – for example, that firms equate MR=MC is a hypothesis that can be falsified in its own right, rather than an ‘assumption’ in the strictly scientific sense of the word.
Economists enjoy demonstrating that they don’t understand the difference between a good and a bad assumption. For example, here are the SuperFreakonomics guys:
There are some 237 million Americans sixteen and older; all told, that’s 43 billion miles walked each year by people of driving age. If we assume that 1 out of every 140 of those miles are walked drunk — the same proportion of miles that are driven drunk — then 307 million miles are walked drunk each year.
Convenient if you can’t be bothered to do your research, but scientifically worthless. This is a hypothesis about how people behave, and the analysis follows directly from there. If the hypothesis is wrong, the analysis is simply wrong and we need to start over.
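For what it’s worth, the arithmetic in the quote does follow from its premises – the problem is the borrowed premise, not the calculation:

```python
# The SuperFreakonomics figures: the arithmetic is fine; the assumed
# drunk-walking rate (1 in 140 miles, borrowed from drunk *driving*
# statistics) is the part with no evidence behind it.
miles_walked = 43e9           # total miles walked per year (their figure)
assumed_drunk_rate = 1 / 140  # the unsupported hypothesis
drunk_miles = miles_walked * assumed_drunk_rate
print(f"{drunk_miles / 1e6:.0f} million miles")  # 307 million miles
```

Change the one assumed number and the headline figure changes in direct proportion, which is why the whole exercise stands or falls on that single unexamined hypothesis.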
Now, here’s Scott Sumner on the Diamond and Saez ‘Marginal Tax rates’ paper:
And S-D also seem to lean toward the “assume a can opener” school of policy analysis:
“In the current tax system with many tax avoidance opportunities at the higher end, as discussed above, the elasticity e is likely to be higher for top earners than for middle incomes, possibly leading to decreasing marginal tax rates at the top (Gruber and Saez, 2002). However, the natural policy response should be to close tax avoidance opportunities, in which case the assumption of constant elasticities might be a reasonable benchmark.”
So there you are. It’s just too much to ask of our policymakers to actually make hedge fund managers pay labor taxes on their labor income, but S-D have no problem waving a magic wand and assuming away all tax loopholes.
Of course, this is a perfectly good assumption from a scientific point of view, as the presence of tax loopholes has a fairly simple (albeit empirically hard to estimate) impact on a single variable, e. We can easily adjust the analysis later to account for it.
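To see why the assumption is tractable: in Diamond and Saez’s framework the optimal top marginal rate is (if I recall the paper correctly) τ* = 1/(1 + a·e), where a is the Pareto parameter of the top of the income distribution. Loopholes enter only through e, so closing them adjusts the result mechanically. The e values below are illustrative, not theirs.

```python
# Optimal top marginal tax rate in the Diamond-Saez framework:
# tau* = 1 / (1 + a*e), where a is the Pareto parameter of top incomes
# and e is the elasticity of taxable income.
def top_rate(e, a=1.5):
    return 1 / (1 + a * e)

# With loopholes open, e is higher and the optimal rate lower;
# close them and e falls, pushing the optimal rate up.
print(top_rate(0.5))    # loopholes open (illustrative e)
print(top_rate(0.25))   # loopholes closed: roughly 0.73
```

The loophole question never threatens the model itself – it just moves one parameter, which is exactly the property a good assumption should have.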
I feel that, to progress, we need to differentiate between assumptions, whose relaxation has a clear mathematical impact on the analysis, and hypotheses, which themselves need to be empirically verified, and whose relaxation causes a model to collapse completely.
Economics has come under a lot of criticism recently, and proponents sometimes try to defend themselves by pointing to Milton Friedman’s methodology of positive economics. In this essay, Friedman ~~loses his mind~~ argues that the assumptions of a theory do not matter as long as its predictions are correct. Of course, even if you accept this there are still plenty of criticisms of neoclassical economics – the theories contradict themselves internally, and it is incredibly hard to verify empirical predictions in social science, making Friedman’s litmus test something of a damp squib. However, let’s put these objections to one side.
In the essay, Friedman takes us on a typical Friedman logic train to his preordained conclusion and leaves you to puzzle over how you got there. He argues that, to be completely realistic, a theory would have to include everyone on earth’s eye colour, qualifications, and so forth. But how do you test whether or not to include these things? You see whether the evidence corroborates the theory without them! Fantastic – assumptions don’t matter.
In this case, Friedman’s sleight of hand lies in not properly defining the word ‘assumption’. This is, in fact, so significant that it means his paper is effectively advocating any methodology whatsoever (if I assume throwing darts at this board will give me the GDP figures for next year…). The crucial characteristic of assumptions in engineering or science is that they eliminate specific variables. A perfect gas is one where many of the smaller forces between molecules are ignored. Assuming a vacuum eliminates air resistance. This gives us an appropriate method, as ‘relaxing’ an assumption means adding in more variables, and this process can continue for as long as it is practically feasible.
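A toy illustration of what ‘relaxing an assumption’ means in physics: assuming a vacuum, a projectile’s range follows from two lines of algebra; relaxing the assumption adds one drag term to the equations of motion, and the analysis carries on. All numbers here are made up for illustration.

```python
import math

# Projectile range, first assuming a vacuum, then relaxing that
# assumption by adding a simple linear drag term.
g, v0, angle = 9.81, 30.0, math.radians(45)

# Vacuum: closed-form range R = v0^2 * sin(2*theta) / g.
vacuum_range = v0**2 * math.sin(2 * angle) / g

# With drag: no neat closed form, so integrate numerically (Euler).
k, dt = 0.1, 1e-4                 # assumed drag coefficient per unit mass
x, y = 0.0, 0.0
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)
while y >= 0:
    vx -= k * vx * dt             # drag slows horizontal motion
    vy -= (g + k * vy) * dt       # gravity plus drag on vertical motion
    x += vx * dt
    y += vy * dt

print(round(vacuum_range, 1), round(x, 1))
```

The vacuum model isn’t discarded when the assumption is relaxed – it becomes the special case k = 0. That nesting is precisely what economic ‘assumptions’ like rationality often fail to provide.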
However, many economic assumptions could not be said to eliminate a specific variable – assuming that people are rational self maximisers, or that firms set prices and output based on marginal costs and revenues, gives us hypotheses, not ‘assumptions’ in the scientific sense of the word. As such, these ‘assumptions’ are themselves empirically falsifiable and cannot be swept under the rug.
Furthermore, in science, theories are deemed only as valid as their assumptions are realistic. A theory can always be improved by making its assumptions resemble reality more closely. So even if we accepted Friedman’s premise, we could still improve theories by abandoning rationality on the basis of behavioural evidence, or abandoning marginal cost pricing on the basis of surveys*. Unsurprisingly, in the case of widely used economic models such as Arrow-Debreu, it is incredibly difficult to relax the assumptions before the theory collapses – if this were true in physics, the model would be abandoned.
To be honest, it is a sad indictment of economics that an essay which contains the passage:
The articles on both sides of the controversy [regarding marginalist analysis]…concentrate on the largely irrelevant question of whether businessmen do or do not in fact reach their decisions by consulting schedules, or curves, or multivariable functions showing marginal cost and marginal revenue.
has to be critiqued formally. A theory’s assumptions are always relevant to its conclusions, and improving them will always yield more accurate results. That much is obvious to the man on the street, but clearly not to economists.
*Friedman argues surveys are as useless as asking octogenarians how they account for their long life. I can only interpret this as him saying that businesses have no idea what they are doing, which rather undercuts his arguments elsewhere.