Against Friedman: Why Assumptions Matter

I have previously discussed Milton Friedman’s infamous 1953 essay, ‘The Methodology of Positive Economics.’ The basic argument of Friedman’s essay is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross-check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.

The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ‘it’s just an abstraction, all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.

The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be sufficiently inaccurate. Is this really possible in economics? I recall that, during an engineering class, my lecturer introduced us to the ‘perfect gas.’ He said it was unrealistic, but showed us that it gave results accurate to 3 or 4 decimal places. Is anyone aware of econometrics papers which offer this degree of certainty and accuracy? In my opinion, the fundamental lack of accuracy inherent in social science shows that economists should be more concerned about what is actually going on inside their theories, since they are less able to spot mistakes through pure prediction. Even if we are willing to tolerate a higher margin of error in economics, results are always contested, and you can find papers arguing each issue either way.
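
To see what that degree of accuracy looks like, here is a minimal sketch (my own illustration, not the lecturer’s; the van der Waals constants for nitrogen are standard reference values) comparing the ideal gas law with a more realistic equation of state at everyday conditions:

```python
# The 'perfect gas' as a negligibility assumption: at everyday pressures
# the ideal gas law and the van der Waals equation agree to several
# decimal places, so the neglected intermolecular forces really are negligible.
R = 0.083145                 # gas constant, L·bar/(mol·K)
a, b = 1.370, 0.0387         # van der Waals constants for N2 (L²·bar/mol², L/mol)
n, T, V = 1.0, 300.0, 25.0   # 1 mol of nitrogen at 300 K in 25 L (~1 bar)

p_ideal = n * R * T / V                            # PV = nRT
p_vdw = n * R * T / (V - n * b) - a * n**2 / V**2  # (P + a*n²/V²)(V - n*b) = nRT

print(f"ideal: {p_ideal:.5f} bar; van der Waals: {p_vdw:.5f} bar")
print(f"relative difference: {abs(p_ideal - p_vdw) / p_vdw:.3%}")
```

At these conditions the two pressures differ by well under a tenth of a percent; I am not aware of any econometric result with error bars like that.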

The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions (the problem of observational equivalence), and Friedman’s methodology has no way of dealing with this.

There are two obvious examples of this in economics. The first is the DSGE models used by central banks and economists during the ‘Great Moderation,’ which predicted the stable behaviour exhibited by the economy. However, Steve Keen’s Minsky model also exhibits relative stability for a period, followed by a crash. Before the crash took place, there would have been no way of knowing which model was correct, except by looking at their internal mechanics.
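
To illustrate, here is a minimal Goodwin-style sketch, a simplified relative of Keen’s model (Keen’s version adds a debt ratio whose slow accumulation eventually destabilises the cycle). All parameters are illustrative rather than calibrated:

```python
from scipy.integrate import solve_ivp

# A Goodwin-style cycle: wage share (w) and employment rate (l) chase
# each other like predator and prey, producing apparent stability from
# internal mechanics utterly unlike those of a DSGE model.
alpha, beta, nu = 0.02, 0.01, 3.0  # productivity growth, labour force growth, capital-output ratio
rho, l0 = 1.0, 0.95                # Phillips curve: real wage growth = rho * (l - l0)

def goodwin(t, y):
    w, l = y
    dw = w * (rho * (l - l0) - alpha)       # wage share rises when labour is scarce
    dl = l * ((1 - w) / nu - alpha - beta)  # employment rises when the profit share is high
    return [dw, dl]

sol = solve_ivp(goodwin, (0, 200), [0.85, 0.93], max_step=0.1)
print(f"wage share cycles between {sol.y[0].min():.2f} and {sol.y[0].max():.2f}")
print(f"employment rate cycles between {sol.y[1].min():.2f} and {sol.y[1].max():.2f}")
```

Both this and a DSGE model can match a stretch of calm data; only their internal mechanics tell you whether the calm is permanent.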

Another example is the Efficient Market Hypothesis. This predicts that it is hard to ‘beat the market’ – a prediction that, due to its obvious truth, partially explains the theory’s staying power. However, other theories also predict that the market will be hard to beat, either for different reasons or a combination of reasons, including some similar to those in the EMH. Again, we must do something that is anathema to Friedman: look at what is going on under the bonnet to understand which theory is correct.

The third problem is the one I initially homed in on: the vagueness of Friedman’s definition of ‘assumptions,’ and how this compares to those used in science. This found its best elucidation with the philosopher Alan Musgrave. Musgrave argued that assumptions have clear, if unspoken, definitions within science. There are negligibility assumptions, which eliminate one or more known variables (a closed economy is a good example, because it eliminates imports/exports and capital flows). There are domain assumptions, for which the theory is only true as long as the assumption holds (oligopoly theory is only true for oligopolies).

There are then heuristic assumptions, which can be something of a ‘fudge’: a counterfactual model of the system (firms equating MC to MR is a good example of this). However, these are often used for pedagogical purposes and dropped before too long. Insofar as they remain, they require rigorous empirical testing, which I have not seen for the MC=MR explanation of firms. Furthermore, heuristic assumptions are only used if internal mechanics cannot be identified or modelled. In the case of firms, we do know how most firms price, and it is easy to model, as the sketch below suggests.
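
Here is a minimal sketch of that contrast (the demand curve, costs and markup are all made up for illustration): the marginalist rule and the cost-plus rule give quite different prices and quantities, so the heuristic assumption is testable in principle, and cost-plus pricing is no harder to model:

```python
# Marginalist vs cost-plus pricing for a firm facing inverse demand
# P = a - b*Q with constant unit cost c. Numbers are purely illustrative.
a, b = 100.0, 2.0  # inverse demand parameters
c = 20.0           # constant marginal (and unit) cost
markup = 0.4       # a 40% markup over unit cost

# Textbook firm: choose Q where MR = MC. With P = a - b*Q, MR = a - 2*b*Q.
q_marginalist = (a - c) / (2 * b)
p_marginalist = a - b * q_marginalist

# Cost-plus firm: set the price administratively, let demand set the quantity.
p_costplus = (1 + markup) * c
q_costplus = (a - p_costplus) / b

print(f"MC = MR:   P = {p_marginalist:.2f}, Q = {q_marginalist:.2f}")
print(f"cost-plus: P = {p_costplus:.2f}, Q = {q_costplus:.2f}")
```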

The fourth problem is related to the above: Friedman misunderstands the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other. The system is always under ongoing investigation, because we always want to know what is going on under the bonnet. Whatever the efficacy of their predictions, theories are only as good as their assumptions, and relaxing an assumption is always a positive step.

Hence, the ‘it still behaves as if it matches our theories’ mentality of economists can easily be shown to be quite absurd. For example:

Consider the following theory’s superb record for prediction about when water will freeze or boil. The theory postulates that water behaves as if there were a water devil who gets angry at 32 degrees and 212 degrees Fahrenheit and alters the chemical state accordingly to ice or to steam. In a superficial sense, the water-devil theory is successful for the immediate problem at hand. But the molecular insight that water is composed of two atoms of hydrogen and one atom of oxygen not only led to predictive success, but also led to “better problems” (i.e., the growth of modern chemistry).

If economists want to offer lucid explanations of the economy, they are heading down the wrong path (in fact, this is something employers have complained about with economics graduates: lost in theory, with little to no practical knowledge).

The fifth problem is one that is specific to the social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know whether the model applies or not. Just because a model has proven useful in one situation or location, there is no guarantee that it will be useful elsewhere, as institutional differences might render it inapplicable.

The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge (data entry, lawyers and financial advisors, for example) is nullified. There is actually good reason to believe that a frictionless world such as the one at the core of neoclassicism renders the role of many firms and entrepreneurs obsolete. Hence, we must be careful about the possibility of certain assumptions invalidating the very area we are studying.

In my opinion, Friedman’s essay is incoherent even on its own terms. He does not define the word ‘assumption,’ nor does he define the word ‘prediction.’ The incoherence of the essay can be seen in Friedman’s own examples of marginalist theories of the firm. Friedman uses his newfound, supposedly evidence-driven methodology as grounds for rejecting early evidence against these theories. He is able to do this because he has not defined ‘prediction,’ and so can use it in whatever way suits his preordained conclusions. But Friedman does not even offer any testable predictions for marginalist theories of the firm. In fact, he doesn’t offer any testable predictions at all.

Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive both to internal critiques of their theories and to tests of their predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark. Obviously, I believe this is the case with large areas of economics, but discussing that is beyond the scope of this post. Ultimately, economists have to change their stance on assumptions if heterodox schools are to have any chance of persuading them.

  1. #1 by Robert Nielsen on February 8, 2013 - 9:16 pm

    I agree absolutely. Assumptions do matter; often the road is as important as the destination. Were you to ask me an economic question and I simply told you the answer was 7, that would be of little use to you. What is important is not so much how things are, but why they are. The motivating factor is as important as the result.

    As you pointed out, there is a huge gap between Friedman’s defence and how neo-classical economists behave. Contrary to the defence given, empirical results are given little importance when studying a theory. So not only is perfect competition unrealistic, there has been little attempt to prove its predictions.

    The economy is far too complex, with too many variables, to draw easy conclusions. That’s why economists are still arguing about the Great Depression. There are dozens of explanations given as causes of our current recession, yet it is not easy to separate them out and apportion responsibility. At its worst this can lead to correlation = causation, such as when Ron Paul claims that because we had a central bank and no gold standard, economic collapse would follow. The economy did collapse, but not for the reasons he claims. However, were we to judge him only on his predictions, we could not make this case.

    Assumptions and predictive abilities are not opposites that we must trade off; rather, they are more likely to go hand in hand.

    • #2 by Unlearningecon on February 9, 2013 - 1:22 pm

      So not only is perfect competition unrealistic, there has been little attempt to prove its predictions.

      Which predictions does it make? Serious question.

      • #3 by Robert Nielsen on February 9, 2013 - 2:00 pm

        That the economy will be dominated by numerous price takers, none of which exert market power or have any effect on the market. The assumptions of the theory are flawed, as are the predictions/descriptions of how the economy should look. There has been little attempt to empirically test this theory, as it would undoubtedly fail (except in some agricultural sectors).

      • #4 by Unlearningecon on February 9, 2013 - 4:30 pm

        See, economists would retort that the price taker is an assumption rather than a consequence of the theory. This has me thinking that PC isn’t really a falsifiable model that makes predictions…it’s just a group of assumptions that are essentially the same as the conclusions.

      • #5 by Robert Nielsen on February 9, 2013 - 4:44 pm

        I suppose you’re right in that sense. There definitely haven’t been any serious attempts to prove it that I know of, and the textbooks use fictional parables. Whether or not it could be proven or falsified is hard to know.

  2. #6 by Mick Brown on February 8, 2013 - 10:10 pm

    Re Milton Friedman: grrr, you got me going… Science seems to be static until another discovery comes along, then it becomes static again. Is economics like that? Do economists jump from one discovery to another? It’s all very well to say let’s wait for the next discovery to come along and then we’ll change our minds, but surely economics is about observing dynamically all the time, not relying on a snapshot (of course taken over a reasonable period of time according to the technology available). Such is my own memory of Milton Friedman himself, who was described to me as a bastard by my dad, who knew him lol (apart from watching him on telly ha ha!)

    • #7 by Unlearningecon on February 9, 2013 - 1:21 pm

      Yeah, science progresses one funeral at a time, though economics is a lot slower. Most of what is taught in micro is the same as what was taught 100 years ago. I have some faith that the neoclassical school will be displaced eventually, and perhaps the blogosphere is a part of that.

      But maybe I’m just in a bubble.

  3. #8 by Krzys on February 9, 2013 - 6:20 am

    I don’t think you actually understand how empirical science works. Your example of physics predicting with great precision even with the simplest models misses the point completely. Physics can easily describe simple isolated systems, but it fails pretty miserably when dealing with highly complex ones: try your hand at predicting earthquakes with all the physics you want.

    The larger point is that complex systems, especially those that offer only one historical path, are extremely hard to model and systematically investigate; but it’s not because we lack good theories, but because we have too many. It’s identical to the problems you find in financial modelling (which I do for a living). The solution is to use models that are as simple and robust as possible. Lots of structure and lots of assumptions lead very quickly to wild overfits. You can tell yourself any stories you want; I am pretty sure you will find patterns to flatter your prejudices if you look hard enough: it’s a popular pastime in finance too.

    That’s why the basic methodology involves simple assumptions that you can use to predict a range of behaviors and apply as widely as possible. That’s the meaning of microfoundations. Robustness and parsimony are the name of the game. If you have a couple of simple principles that can robustly describe at least some markets/interactions/aggregated regularities, you are free to present them. Then and only then are you gonna have a weapon with which you can fight the neo-classical synthesis. Otherwise you are gonna keep cutting yourself on Ockham’s razor and keep wondering why you bleed so profusely.
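
    A minimal sketch of the overfitting point (the data, parameters and seed are all made up): fit a parsimonious straight line and a heavily parameterised polynomial to the same noisy series, then compare them out of sample.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # A made-up 'market regularity': a simple linear signal plus noise.
    x_train = np.linspace(0.0, 1.0, 20)
    y_train = 2.0 * x_train + rng.normal(0.0, 0.3, x_train.size)
    x_test = rng.uniform(0.0, 1.0, 20)
    y_test = 2.0 * x_test + rng.normal(0.0, 0.3, x_test.size)

    for degree in (1, 9):  # robust and parsimonious vs 'lots of structure'
        coeffs = np.polyfit(x_train, y_train, degree)
        mse_in = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
        mse_out = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
        print(f"degree {degree}: in-sample MSE {mse_in:.3f}, out-of-sample MSE {mse_out:.3f}")
    ```

    The flexible model flatters itself in sample and typically falls apart out of sample; the straight line does not.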

    • #9 by Unlearningecon on February 9, 2013 - 1:07 pm

      I don’t think you actually understand how empirical science works.

      Not always great to start on the assumption your opponent is ignorant/stupid!

      That’s why the basic methodology involves simple assumptions that you can use to predict a range of behaviors and apply as widely as possible.

      Yet, like Friedman, you still have not offered any falsifiable predictions.

      If you have a couple of simple principles that can robustly describe at least some markets/interactions/aggregated regularities, you are free to present them. Then and only then are you gonna have a weapon with which you can fight the neo-classical synthesis.

      There are plenty around this blog but that was not the purpose of this post. If a theory is flawed, it doesn’t matter whether or not the critic presents an alternative. The theory stands or falls on its own merits.

      Physics can easily describe simple isolated systems, but it fails pretty miserably when dealing with highly complex ones: try your hand at predicting earthquakes with all the physics you want.

      There are at least four problems with this analogy. The first is that we have no control over earthquakes, but we do have some control over the economy. The second is that future patterns in the economy are clearly heavily dependent on past patterns, whereas earthquakes follow less of a clear pattern. The third is that earthquake scientists remain modest and tentative about their conclusions, whereas economists are quite happy to parade their theories in the op-ed pages. The fourth is that models do exist which can capture financial crises (Keen’s), as well as other observed phenomena in the macroeconomy (as opposed to DSGE, which can only model one thing at a time, and whose alternative models are often contradictory).

      it’s not because we lack good theories, but because we have too many.

      I agree with the latter point but not the former. Surely, if we have too many – as, in my opinion, with DSGE models – we should opt for a simpler one? This is, after all, why epicycles were abandoned: it was their complexity, not their predictive power (which was actually good), that did for them. If you really believed in Occam’s Razor, then you would opt for simpler models like Keen’s.

  4. #10 by M on February 9, 2013 - 3:36 pm

    How can a financial modeller expect to be taken seriously when he ends up appealing to microfoundations? Is there anything more ad hoc than stating that financial modelling might rest upon microfoundations? As financial institutions rely above all on central banking institutions, how can anyone seriously talk of the necessity of microfoundations while claiming to make his point within a scientific approach? I’m not necessarily referring to anyone in particular, but to the expected stance of the financial INDUSTRY as a whole. Unless the relations between microfoundations and central banking institutions are first made clear, such a stance can never be taken as a sound one.

  5. #11 by Aziz on February 9, 2013 - 5:13 pm

    Excellent post.

    I think Milton Friedman and the rest of the neoclassical canon fundamentally misunderstand the notion of “predictions”. A theory’s internal mechanics are very often testable predictions in themselves. If the internal mechanics are a simplification of the real world, then that is not necessarily a problem, but if the internal mechanics are contradicted by empirical results in the real world, then clearly that is a failed prediction.

    This actually segues into a discussion of microfoundations. The whole idea of microfounding macro theory on pure micro theory is stupid and wrong. Macro theory and micro theory should both be founded on their respective empirical observations. If my theory posits a general or partial equilibrium state, then that is in itself a testable prediction. If the real world doesn’t exhibit that characteristic, then I need a different theory.

    Economists need more philosophy of science lessons.

    • #12 by Unlearningecon on February 9, 2013 - 9:02 pm

      I’ll give Friedman credit – he explicitly said he favoured predictive power over microfoundations, so he didn’t contradict himself.

      Yeah, reductionism is dinosaur science. I can’t believe somebody ever seriously proposed that we ‘fix’ economics (in the face of evil Keynesianism) by modelling the economy as a single, utility-maximising rational agent who lives for two time periods and…eh, I can’t even be bothered to list all of the assumptions.

  6. #13 by Ramanan on February 9, 2013 - 6:35 pm

    Good post.

    Have you read Nicholas Kaldor’s “The Scourge of Monetarism”? It is the most devastating critique of Milton Friedman’s economics. Unfortunately, according to one of his biographers, John E. King, he won the battle but lost the war.

    • #14 by Unlearningecon on February 9, 2013 - 6:56 pm

      No, I really need to read that. Kaldor predicted monetarism’s failure well before it was actually tried.

      In fairness to Friedman, he did later abandon his position on the money supply. Though he never endorsed endogenous money, for obvious ideological reasons.

      • #15 by Ramanan on February 9, 2013 - 8:54 pm

        Oh, I didn’t know. Interesting. Any link or reference which shows the change in his position?

      • #17 by W on February 9, 2013 - 9:04 pm

        Leijonhufvud demonstrated that the endogenous vs exogenous (or inside vs outside) nature of money in a given economy rests heavily upon the regime (whether central banking, fiscal, etc.) within which the economy works.

        Keynes’s General Theory itself, which might be understood in terms of endogenous money (Liquidity Preference Theory!), was built to fit the fixed international monetary standard of the late twenties; but Keynes’s Tract on Monetary Reform was written to fit the postwar international floating-exchange-rate monetary standard (possibly not an explicit one, after all).

        In a given period, however, both kinds of money might be at work; but that would imply, analytically speaking, separating out the working elements that add up to each regime (one of them being, almost by definition, not explicit or ex ante).

        Economics debates (whether dealing with epistemological or empirical-practical matters) are unfortunately conducted, pretty often, in terms of absolute truths, which worsens the general state of the discussion.

        This applies not only to macro itself (inside vs outside money) but also to microeconomics: taking account of monetary (macro) regimes OVER TIME would allow us to recognise the macro nature of microeconomics (say, microeconomic dynamics), instead of continuing to search for a (probably misleading) microfoundation.

        AL’s book Macroeconomic Instability and Coordination is quite a good starting point for leaving behind, once and for all, the confusing (and seemingly never-ending) state of debate in economics in general.

      • #18 by Unlearningecon on February 10, 2013 - 9:50 pm

        Yeah, I’ve been posting a bit about institutionalism recently and I agree with you: exogenous and endogenous money is not a discussion of universal laws; we are merely trying to describe what the current monetary system looks like. Economists are only really concerned with whether arguments are valid, but not whether they are sound – this much can be seen in the endogenous versus exogenous debate. Both are logically possible, but only one is true.

      • #19 by W on February 9, 2013 - 9:12 pm

        Keynes’s Tract on Monetary Reform dealt with outside money (indeed, with the Quantity Theory and the like…).

      • #20 by W on February 9, 2013 - 9:30 pm

        In the light of that, therefore, one realises there is no such war (for Kaldor either), except that of leaving behind absolute, everlasting truths and facing Keynes’s dictum about changing our point of view as circumstances themselves change…

      • #21 by Ramanan on February 9, 2013 - 10:48 pm

        Oh thanks.

        Couldn’t find the original on the FT’s website, but here is a link which has the 2003 article “Lunch with the FT: Milton Friedman”:

        http://www.freerepublic.com/focus/f-news/937366/posts

      • #22 by W on February 10, 2013 - 11:43 pm

        Broadly speaking, wherever fixed exchange rates are concerned one may think of endogenous money at work (which, by the same token, enables monetary policy, i.e. interest-rate policy, to be effective to some degree), while floating-exchange-rate arrangements imply outside (exogenous) money (in which the quantities of money, say M1, M2 and so on, are the relevant issues and therefore the main policy tools).

        Leijonhufvud (say, in some of the essays in Macroeconomic Instability and Coordination) put it in the following terms: price-fixing, quantity-taking regimes (i.e. fixed exchange rates) vs quantity-fixing, price-taking regimes (floating regimes, in which the quantity of money is announced periodically, say fixed, and the market determines the price level, the level of interest rates, etc.).

        Another broad and quite foundational distinction made by AL is that between real vs nominal shocks, on the one hand, and the real vs nominal propagation of them, on the other.

        For example, Keynes’s GT deals with an endogenous-money (world) economy, a real shock (the shift in the Marginal Efficiency of Capital caused by the stock-exchange slump) and a real (relative-price) propagation mechanism (that of long-term bond yields rising as a consequence of rising risk). In this framework Keynes uses two price levels, the capital-goods price level and the consumption-goods price level, within which a sound IS-LM model ought to be constructed (see, if you will, Ingo Barens, “From Keynes to Hicks: an aberration?” in Money, Markets and Method: Essays in Honour of Robert Clower; in another paper, Barens also explained the impossibility of deriving an Aggregate Demand curve from a true IS-LM, which, among other things, according to Keynes’s specifications ought to be built in terms of two sub-price levels, thereby ruling out, so to speak, the use of a single price level), and so on.

        I could go on like this on many other subjects, but the point is clear: if one disregards the particular contexts of given institutions, one won’t be able to make any sense of what is going on.

        AL’s work spans over forty years (since 1968). Much of his research concerns the relation between institutions and (macro)theories; it is not easy to overemphasise what he has achieved so far.

        His work has the potential to establish order, whether in macroeconomics, macro-policy, the history of macroeconomic schools, or contemporary macro research. For example, regarding the subprime crisis, a main analytical object may be said to be the lack of an explicit world monetary standard, probably replaced ‘de facto’, to some degree, by Asian balance-of-payments policies, and so on. Ergo: what macrotheory is to deal with, say, the current Eurozone crisis, insofar as historically taught macrotheories responded to institutional arrangements that no longer hold? Such is the challenge for macroeconomic analysis… but, as everybody knows, you may also take the easy path of reading the newspapers and, by the way, a PK anchoring the scenario pretty arbitrarily around the level of fiscal expenditure.

  7. #24 by Matt Nolan on February 9, 2013 - 6:56 pm

    My impression was that Friedman’s essay was always used more as window dressing – not as a description of what economists do, what they should do, or what they intend to do. Ask most economists and they’ll say prediction is not a central part of the discipline; the goal is to explain, which I think is akin to what you are suggesting here.

    Fundamentally, the economics research programme is Lakatosian, right? So your issue is with regard to how willing economists are to evaluate the “hard core” of the discipline.

    Generally, I’ve found academics relatively willing to discuss the hard core – and the growing amount of resources in neuroeconomics and behavioural economics illustrate that the discipline is trying to test parts of it that had previously been justified by introspection.

    I think one thing that makes economists wary is that describing a system without properly accounting for the Lucas Critique is OK – unless you want to adjust policy. Models are only policy-relevant when the Lucas Critique is accounted for in some way (not doing so involves an implicit assumption), and even within economics many people sometimes forget this.

    • #25 by Unlearningecon on February 9, 2013 - 8:58 pm

      Ask most economists and they’ll say prediction is not a central part of the discipline; the goal is to explain, which I think is akin to what you are suggesting here.

      I think it would be fair to say that this is in no way true of undergraduate economics, which really is quite insane. However, it is true of DSGE.

      Generally, I’ve found academics relatively willing to discuss the hard core – and the growing amount of resources in neuroeconomics and behavioural economics illustrate that the discipline is trying to test parts of it that had previously been justified by introspection.

      Linking to the above, the problem with this approach is that you can find a DSGE paper incorporating more or less any complaint you might make: financial institutions, cognitive biases, monopoly, yada yada. However, for the purposes of evaluating each ‘anomaly,’ all other assumptions are maintained. That is: you will likely find that a discussion of, say, endogenous technological change assumes perfect competition. There seems to be no attempt at unification.

      I think one thing that makes economists wary is that describing a system without properly accounting for the Lucas Critique is OK – unless you want to adjust policy. Models are only policy-relevant when the Lucas Critique is accounted for in some way (not doing so involves an implicit assumption), and even within economics many people sometimes forget this.

      The Lucas Critique is ‘right,’ in the sense that there is always an evolving relationship between policy and the economy. However, there is no way a model can be made ‘immune’ to the critique. Microfoundations such as technology, preferences, etc. are just as vulnerable to it as anything else. Lucas suggests modelling on the ‘deep parameters’ of human behaviour – well, he might want to speak with an anthropologist, as these do not exist.

  8. #26 by zolltan on February 11, 2013 - 5:22 am

    Judging by the comments, people who understand economics take this post to say something about which economic theories are correct and react accordingly. Since this isn’t made explicit in the post itself and I don’t understand economics, permit me to ignore this.

    As I see it, you are highlighting two problems: that economic theories don’t offer enough testable predictions to discriminate between them, and that their assumptions are poorly stated and don’t conform to the real world. But I don’t see how these two claims are connected. If macro theories are just “explanations” rather than things that offer testable hypotheses, then that is terrible, but asking that they be explanations which sound more reasonable is not a solution to that problem. The only way to solve that problem is to offer theories that do have testable hypotheses.

    • #27 by Unlearningecon on February 11, 2013 - 1:11 pm

      It’s not a choice between the two. What I am saying is nicely summarised by Aziz’s comment: the internal mechanics of models are themselves subject to falsification. If we know how a system works, and can model it satisfactorily, then why would we maintain a counterfactual assumption about its nature? The only possible answer is for use as a pedagogical tool to be dropped later, or because it makes highly accurate predictions that are better than those made by the alternative theory.

      The former does not apply because, the overwhelming majority of the time, ‘simplifying’ assumptions in economics are unnecessarily convoluted mathematical navel-gazing. The latter is occasionally the case in science, but that degree of accuracy is not possible in economics, so it is simply unscientific to maintain the assumption.

  9. #28 by Will on February 11, 2013 - 6:37 am

    Unlearning Econ:

    I’m replying to your latest tweet (Guardian article advocating a land value tax). I don’t have a twitter and can’t say anything in fewer than 140 characters.

    I’ve been persuaded of the soundness of the LVT within a context of industrial capitalism ever since I read Henry George. It is an obscure fact that Marx, in his Notes on James Mill, describes the LVT as the revenue source most in harmony with capitalism – this is a far cry from the article’s description of the tax as “radical”, and also shows us why Marx never pushed for it. My concern is that, as per Marx, the LVT is a panacea that promises too much. It cannot solve deeper problems, such as that of competitors doing redundant work and then finding no buyer, nor the counterproductive system of punishments and rewards meted out by the wage system. There is also J.K. Galbraith’s argument about the political impracticality of getting elites to agree to an LVT, when they will agree to union rights, regulations, and pro-labor interpretations of civil law. On the other hand, there is also the argument – originally in John Locke – that all taxes are ultimately absorbed by land, whatever their more immediate object. How do you come down on this stuff?

  10. #29 by Magpie on February 11, 2013 - 9:21 am

    “The first is the DSGE models used by central banks and economists during the ‘Great Moderation‘ predicted stable behaviour, and the economy exhibited this. However, Steve Keen’s Minsky Model also exhibits relative stability for a period, before being followed by a crash”.

    This ties in very well with your point that the function of science is not only prediction, but also understanding.

    What Friedman (or at least economists following him) seems to justify is a methodology of being predictively right, even if it’s for the wrong reasons.

    But if one happens to be right for the wrong reasons, how can one know whether the circumstances are about to change? How does one know it’s not a fluke?

    Further, if one is right, but one doesn’t really know the reason, how can one propose policy?

    • #30 by Unlearningecon on February 16, 2013 - 6:25 pm

      Yeah, exactly. It’s almost like a correlation-causation problem: we have a model that appears to be corroborated by the evidence, but we don’t know what’s going on inside it.

      Obviously, economists are willing to make prescriptions based on their theories without evidence. For example, Walrasian equilibrium is used to suggest completely randomised, one-off lump-sum taxation to keep the economy ‘efficient.’ The idea that a theory as unrealistic as that can be used to suggest anything in the real world is ridiculous.
