Misinterpretations in Mainstream Economics

It is my opinion that major areas of neoclassical economics rest on misinterpretations of original texts. Though new ideas are regularly recognised as important and incorporated into the mainstream framework, this framework is fairly rigid: models must be micro-founded, agents must be optimising, and – particularly in the case of undergraduate economics – the model must be representable as two intersecting curves. The result is that the concepts certain thinkers were trying to elucidate get taken out of context, contorted, and misunderstood. There are many instances of this, but I will illustrate the problem with three major examples: John Maynard Keynes, John Von Neumann and William Phillips.

Keynes, in two lines

It is a common trope to suggest that John Hicks’ IS/LM interpretation of Keynes’ General Theory was wrong. It is also true, and this was acknowledged by Hicks himself over 40 years after his original article.

IS/LM, or something like it, was being developed independently of Keynes by Dennis Robertson, Hicks and others during the 1920s and 30s, as they sought to understand interest rates and investment in terms of neoclassical equilibrium. Hence, Hicks tried to annex Keynes into this framework (both of them, confusingly, called neoclassicals ‘classicals’). Keynes’ theory was reduced to two intersecting lines that looked a lot like demand-supply. The two schedules were derived from the equilibrium points of the demand and supply for money (LM), and the equilibrium points of the demand and supply for goods and services (IS). In order to reach ‘full employment’ equilibrium, the central bank could increase the money supply, or the government could expand fiscal policy. Unfortunately, such a glib interpretation of Keynes is flawed for a number of reasons:

First, Keynes did not believe that the central bank had control over the money supply:

…an investment decision (Prof. Ohlin’s investment ex-ante) may sometimes involve a temporary demand for money before it is carried out, quite distinct from the demand for active balances which will arise as a result of the investment activity whilst it is going on. This demand may arise in the following way.

Planned investment—i.e. investment ex-ante—may have to secure its “financial provision” before the investment takes place…There has, therefore, to be a technique to bridge this gap between the time when the decision to invest is taken and the time when the correlative investment and saving actually occur. This service may be provided either by the new issue market or by the banks;—which it is, makes no difference.

Since Hicks’ model relies on a ‘loanable funds’ theory of money, where the interest rate equates savings with investment and the central bank controls the money supply, it clearly doesn’t apply in Keynes’ world. An attempt to apply endogenous money to IS/LM will result in absurdities: an increase in loan-financed investment, part of the IS curve, will create an expansion in M, part of the LM curve. Likewise, M will adjust downwards as economic activity winds down. So the two curves cannot move independently, which violates a key assumption of this type of analysis.
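To make the interdependence concrete, here is a minimal numerical sketch. The coefficients and functional forms are entirely hypothetical (my own illustration, not drawn from Keynes or Hicks); the point is only that when investment is financed by new bank loans, output (the IS side) and the money stock (the LM side) move together:

```python
# Hypothetical toy model: under endogenous money, loan-financed
# investment raises both output and the money stock at once, so the
# IS and LM schedules cannot shift independently.

def equilibrium_output(investment, mpc=0.8):
    """Simple Keynesian multiplier: Y = I / (1 - c)."""
    return investment / (1 - mpc)

def money_stock(investment, output, loan_share=0.5, k=0.25):
    """Money created by bank loans financing investment, plus
    transactions balances proportional to activity."""
    return loan_share * investment + k * output

for I in (100, 120):  # investment rises via new bank loans
    Y = equilibrium_output(I)
    M = money_stock(I, Y)
    print(f"I={I}: Y={Y:.0f}, M={M:.0f}")
```

Both Y and M rise with I: a disturbance to the ‘IS’ side moves the ‘LM’ side, which is precisely the absurdity described above.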

Second, Keynes did not believe the interest rate had simple, linear effects on investment:

I see no reason to be in the slightest degree doubtful about the initiating causes of the slump….The leading characteristic was an extraordinary willingness to borrow money for the purposes of new real investment at very high rates of interest.


But over and above this it is an essential characteristic of the boom that investments which will in fact yield, say, 2 per cent. in conditions of full employment are made in the expectation of a yield of, say, 6 per cent., and are valued accordingly. When the disillusion comes, this expectation is replaced by a contrary “error of pessimism”, with the result that the investments, which would in fact yield 2 per cent. in conditions of full employment, are expected to yield less than nothing…

…A boom is a situation in which over-optimism triumphs over a rate of interest which, in a cooler light, would be seen to be excessive.

So, again, the simple, mechanistic adjustments in IS/LM are inaccurate. The magnitude of the interest rate will change not just the level, but also the type, of investment taking place. Higher rates increase speculation and destabilise the economy, whereas low rates encourage real capital formation. This key link between bubbles, the financial sector and the real economy was lost in IS/LM, and in neoclassical economics as a whole.

Third – and this is something I have spoken about before – Hicks glossed over Keynes’ use of the concept of irreducible uncertainty, which was key to his theory. The result was a contradiction, something Hicks noted in the aforementioned ‘explanation’ of IS/LM. The demand for money was, for Keynes, a direct result of uncertainty, and over a time period long enough to produce uncertainty (such as Keynes’ suggested 1 year), expectations would be constantly shifting. Since the demand for money, savings and investment all depended on expectations, the curves would move interdependently, undermining the analysis. On the other hand, over a time period short enough to hold expectations ‘constant’ and hence avoid this (Hicks suggested a week), there would be no uncertainty, no liquidity preference and therefore no LM curve.

Hicks’ attempt to shoehorn Keynes’ book into his pre-constructed framework led to oversimplifications and a contradiction, and obscured one of Keynes’ key insights: that permanently low long term interest rates are required to achieve full employment. The result is that Keynes has been reduced to ‘stimulus,’ whether fiscal or monetary, in downturns, and the reasons for the success of his policies post-WW2 are forgotten.

Phillips and his curve

Another key aspect of the post-WW2 ‘Keynesian’ synthesis, along with IS/LM, was the ‘Phillips Curve,’ an inverse relationship between inflation and unemployment observed by Phillips in 1958. Neoclassical economists reduced this to the suggestion that there was a simple trade-off between inflation and unemployment, and that policymakers could choose a point on the Phillips Curve depending on circumstances.

Predictably, this is not really what Phillips had in mind. What he observed was not a relationship between inflation and unemployment, but between unemployment and the rate of change of money wages. Furthermore, it was not a static trade-off, but a dynamic process that occurred over the course of the business cycle. During the slump, society would observe high unemployment and low inflation; in the boom, low unemployment would accompany high inflation. This is why, if you look at the diagrams in his original paper, Phillips numbered his points and joined them all together – he was interested in the time path of the economy, not just a simple mechanistic relationship. The basic correlation between wages and unemployment was just a starting point.

Contrary to what those who misinterpreted him believed, Phillips was not unaware of the influence of expectations and the trajectory of the economy on the variables he was discussing; in fact, it was an important pillar of his analysis:

There is also a clear tendency for the rate of change of money wage rates at any given level of unemployment to be above the average for that level of unemployment when unemployment is decreasing during the upswing of a trade cycle and to be below the average for that level of unemployment when unemployment is increasing during the downswing of a trade cycle…

…the rate of change of money wage rates can be explained by the level of unemployment and the rate of change of unemployment.
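Phillips’ two-variable story can be sketched with a hypothetical wage equation. The functional form and coefficients below are my own illustration, not Phillips’ fitted curve; the feature to notice is that at the same level of unemployment, wage inflation is higher when unemployment is falling than when it is rising, which is exactly what produces the loops in his diagrams:

```python
import math

# Illustrative wage equation in the spirit of Phillips' remark:
# wage inflation depends on the level of unemployment AND on its
# rate of change. All coefficients are made up for illustration.
def wage_change(u, du_dt, a=9.0, b=0.9, c=0.8):
    return a / u - b - c * du_dt

# A stylised business cycle: unemployment oscillates between 3% and 7%.
for t in (0.0, 1.0, 2.0, 3.0, 4.0, 5.0):
    u = 5.0 + 2.0 * math.sin(t)
    du_dt = 2.0 * math.cos(t)
    print(f"t={t:.0f}  U={u:4.1f}%  dW/dt={wage_change(u, du_dt):6.2f}%")
```

At U = 5% on the upswing (unemployment falling, du_dt < 0) wage growth sits above the static curve; on the downswing it sits below – a loop, not a single schedule.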

Finally, whatever Phillips’ theoretical conclusions, it is clear he did not intend even a correctly interpreted version of his work to be the foundation of macroeconomics:

These conclusions are of course tentative. There is need for much more detailed research into the relations between unemployment, wage rates, prices and productivity.

Had neoclassical economists interpreted Phillips correctly, they would have seen that he thought dynamics and expectations were important (he was, after all, an engineer), and we wouldn’t have been driven back to the stone age with the supposed ‘revolution’ of the 1970s.

An irrational approach to Von Neumann

In microeconomics, the approach to ‘uncertainty’ (a misnomer) emphasises the trade-off between potential risks and their respective payoffs. Typically, you will see a table that looks something like the following (if you aren’t a mathematician, don’t be put off – it’s just arithmetic):

Candidate   Probability   Home     Abroad
A           0.6           £300k    £200k
B           0.4           £100k    £200k

The question is whether a company will invest at home or abroad. There is an election coming up, and one candidate (B) is an evil socialist who will raise taxes, while the other one (A) is a capitalist hero who will lower them. Hence, the payoffs for the investment will differ drastically based on which candidate wins. Abroad, however, there is no election, and the payoff is certain in either case; the outcome of the domestic election is irrelevant.

The neoclassical ‘expected utility’ approach is to multiply the relative payoffs by the respective probability of them happening, to get the ‘expected’ or ‘average’ payoff of each action. So you get:

For investing abroad: £200k, regardless

For investing at home: (0.6 x £300k) + (0.4 x £100k) = £220k

Note: I am assuming the utility is simply equal to the payoff for simplicity. Changing the function can change the decision rule but the same problem – that what is rational for repeated decisions can seem irrational for one – will still apply.
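The arithmetic above can be written out as a short calculation (with utility set equal to the monetary payoff, as per the note):

```python
# Expected payoff of each option: multiply each payoff by the
# probability of the corresponding election outcome and sum.
payoffs = {
    "home":   {"A": 300_000, "B": 100_000},
    "abroad": {"A": 200_000, "B": 200_000},
}
prob = {"A": 0.6, "B": 0.4}

expected = {
    option: sum(prob[c] * payoffs[option][c] for c in prob)
    for option in payoffs
}
print(expected)  # home comes out at £220k, abroad at £200k
```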

So investing at home is preferred. Supposedly, this is the ‘rational’ way of calculating such payoffs. But a quick glance will reveal this approach to be questionable at best. Would a company make a one off investment with such uncertain returns? How would they secure funding? Surely they’d put off the investment until the election, or go with the abroad option, which is far more reliable?

So what caused neoclassical economists to rely on this incorrect definition of ‘rationality’? A misinterpretation, of course! One need look no further than Von Neumann’s original writings to see that he thought his analysis would apply only to repeated experiments:

Probability has often been visualized as a subjective concept more or less in the nature of estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as frequency in long runs.

Such an approach makes sense – if the payoffs have time to average out, then an agent will choose one which is, on average, the best. But in the short term it is not a rational strategy: agents will look for certainty; minimise losses; discount probabilities that are too low, no matter how high the potential payoff. This is indeed the behaviour people demonstrate in experiments, the results of which neoclassical economists regard as ‘paradoxes.’ A correct understanding of probability reveals that they are anything but.
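The frequency point can be illustrated with a quick simulation, using the payoffs from the table above (the set-up is hypothetical, as before): a single play of the ‘home’ gamble never yields the ‘expected’ £220k – it yields either £300k or £100k – but the average over many repetitions converges towards £220k, which is the sense in which the frequency interpretation applies.

```python
import random

random.seed(0)

def play_home():
    # One draw of the 'home' gamble: candidate A wins with p = 0.6.
    return 300_000 if random.random() < 0.6 else 100_000

# One-shot: the realised payoff is never the 'expected' 220k.
print(play_home())

# Long run: the average converges towards the expected value.
n = 100_000
avg = sum(play_home() for _ in range(n)) / n
print(avg)
```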

Getting it right

There are surely many more examples of misinterpretations leading to problems: Paul Krugman’s hatchet job on Hyman Minsky, which completely missed out endogenous money and hence the point, was a great example. The development economist Evsey Domar reportedly regretted creating his model, which was not supposed to be an explanation for long run growth but was used for it nonetheless. Similarly, Arthur Lewis lamented the misguided criticisms thrown at his model based on misreadings of misreadings, and naive attempts to emphasise the neoclassical section of his paper, which he deemed unimportant.

This is not to say we should blindly follow whatever a particularly great thinker had to say. However, indifference toward the ‘true message’ of someone’s work is bound to cause problems. By plucking various thinkers’ concepts out of context and fitting them together inside your own framework, you are bound to miss the point, or worse, contradict yourself. Often a particular thinker’s framework must be seen as a whole if one is truly to understand their perspective and its implications. Perhaps, had neoclassical economists been more careful about this, they wouldn’t have dropped key insights from the past.



  1. #1 by Anon @ 12 on February 20, 2013 - 12:07 am

    That Von Neumann section is horribly mangled to the point of laughability.

    “The neoclassical ‘expected utility’ approach is to multiply the relative payoffs by the respective probability of them happening, to get the ‘expected’ or ‘average’ payoff of each action.”

    No, it’s to do that, but with the utility of payoffs (if the payoffs are monetary) rather than the payoff itself. This means that the rational choice for the company can be either option! If the company is sufficiently risk averse (that is, their utility has sufficient curvature), then the rational choice is investing abroad, even though the expected payoff is lower. This is ‘agents looking for certainty’, and it certainly isn’t a violation of rationality.

    “Would a company make a one off investment with such uncertain returns? How would they secure funding? Surely they’d put off the investment until the election, or go with the abroad option, which is far more reliable?”

    You’ve set up the situation so that there’s only two options. Don’t complain that including metacognition, or turning it into a repeated game, would make more sense when _the game is just set up that way_. It’s not like you couldn’t allow for these things in a game–you just didn’t.

    The Allais paradox is a ‘paradox’, by the way, because the choice of 1A and 2B is inconsistent no matter what the chooser’s utility function is, and it’s only feasible as a paradox because there are two separate gambles.

    I get that you’re dedicated to tearing down that big, bad, evil economics no matter how well you actually understand it, but are you really so uninformed that this section wasn’t disingenuous?

    • #2 by Unlearningecon on February 20, 2013 - 12:39 am

      I thought it would be obvious that I was setting utility equal to the payoff. You can change the utility function if you want; it simply moves the problem.

      You manage to make a half decent point here:

      You’ve set up the situation so that there’s only two options. Don’t complain that including metacognition, or turning it into a repeated game, would make more sense when _the game is just set up that way_. It’s not like you couldn’t allow for these things in a game–you just didn’t.

      Yeah, maybe introducing new elements was bogus. However the basic point still stands: the expected utility approach only really makes sense with repeated experiments. The Allais paradox shows that, for many, the ‘rational’ approach involves calculations completely different to EU theory if the game is played only once.

      It’s quite telling that you pounce on this as if it’s a massive ‘gotcha,’ when in reality it’s nothing of the sort. What’s even more telling about your attitude is this nice little straw man you’ve concocted:

      I get that you’re dedicated to tearing down that big, bad, evil economics

      which is entirely your creation. I’ve never used words to that effect.

  2. #3 by W on February 20, 2013 - 4:25 am

    There are several things which require a rigorous math knowledge upon which a sound use of the Rationality concept could be justified within the context of a probability and game-theoretic framework like this one.

    Is the 2×2 pay-off matrix anything but a couple of discrete binomial chance variables, say X and Y? Therefore, a bidimensional discrete chance variable?

    In the case of a nxn pay-off matrix, the previous would indeed still hold but for a multinomial discrete chance variable for each player, and a n-dimensional discrete chance variable regarding the game?

    Take now the repeated-game theoretic framework: what about the issue of the convergence of a sequence of discrete chance variables into a continuous one?

    What about the issue of the joint-probability distribution regarding a multi-dimensional chance variable?

    What about the issue of the Moments (among others, the conditional Expectations) of such chance variables, on one hand, and the issue of the covariance-correlation of (the limit of) the multi-dimensional chance variable embedding one-dimensional chance variables?

    Is there really a rational approach within the method of defining Utility functions AS Conditional Probability Distributions in a context of multi-dimensional chance variables (and Joint-Probability Distributions) regardless of issues such as those related to Norm Convergence, Measure-Convergence, etc.?

    Assume the existence of such Conditional Moments related to Conditional Probability Distributions within a Joint-Probability Distribution of a n-dimensional chance variable: then you might speak of each player´s Expected Utility in the limit (asymptotically, so to speak). Is such an EU anything but realized (average) utility? How could anyone rationally speak of Rationality in a context (the limiting process) having a realization as it´s outcome?

    In the light of all that, one might ask: is mainstream economics mad?

    • #4 by Unlearningecon on February 20, 2013 - 9:47 pm

      This mathematical talk is over my head, but I read you as – among other things – suggesting that what is required for the EU approach to hold exactly is for the number of games to tend to infinity. This is indeed correct. This is the reason that people have evolved to use heuristics and tools completely different to the EU approach.

      • #5 by W on February 20, 2013 - 11:59 pm

        It’s over my head as well! But it is no more than a rather strict use of the definitions involved in probability theory (whether you were acquainted with it or not…): since a first order moment is involved (the Expected Utility), that implies a chance variable; as long as there are 2 players instead of only 1, that implies a joint-probability distribution…and so on…each of these mathematical objects consisting in some kind of limit in a limiting process…topology, etc….turns the definition of Rationality in this context to be somewhat vain…
        I personally see (it would be great to have the word of a mathematician, naturally…!) the EU approach as based upon almost the same math involved in least-squares regression analysis: if the game is going to be repeated infinitely, then each outcome could be taken as a size-1 sample; then it could be forecast to some degree whether the choices to be made by each player might be correlated or not to each other…and so on…
        Yet what sense it makes to speak of Rationality in such a context remains as obscure to me as it would be to speak of Rationality in the context of least-squares regression analysis…
        I can’t help but ponder the degree to which economics has evolved into a set of tenets, as I get the point that economists are generally blind about the math they are supposed to deal with…

      • #6 by Unlearningecon on February 23, 2013 - 1:36 pm

        turns the definition of Rationality in this context to be somewhat vain…

        It is indeed vague. ‘Rational’ often seems to mean ‘the behaviour of an agent in neoclassical theory.’ Just like a ‘rent’ is basically ‘a payoff economists don’t like.’

        Anyway, thanks for your comments.

  3. #7 by Roman P. on February 20, 2013 - 6:02 am

    Followers often reduce brilliant insights to the level of idiocy. I remember a paper where some Chinese economist rehashed Arrow-Debreu and concluded that there could be no unemployment in the world because the markets clear. A watertight result, you understand.

    Still, I’d like to discuss the idea of Keynes (and Adam Smith, too) that high interest rates lead to the reckless gambling schemes. Their reasoning seems right on some level, but you could imagine reasons why low interest rates may drive people to gamble as well: with cheaper credit, it becomes easier to finance the financial speculation. I don’t think that the prime interest rate in USA was that high in the noughties either. http://www.fedprimerate.com/wall_street_journal_prime_rate_history.htm
    I think that there is no clear relationship between the interest rates and the level of the financial instability: speculators and serious investors just endogenously supersede each other.

    • #8 by Unlearningecon on February 20, 2013 - 12:01 pm

      I’m not sure what level you have in mind, but Keynes was aiming for about 2.5% while Adam Smith thought the cap should be 5%. From the data you give, the long term rates were mostly above both of these. Keynes even thought 2.5% was high enough to produce sub par performance:

      There is, surely, overwhelming evidence that even the present reduced rate of 3½ per cent on long-term gilt-edged stocks is far above the equilibrium level.

      It is also worth stressing the importance of expectations: a monetary authority who is committed to keeping rates low will have a different impact on investment than if the rate happens to fall to a certain level and they are unsure of where it will go next.

      Their reasoning seems right on some level, but you could imagine reasons why low interest rates may drive people to gamble as well: with cheaper credit, it becomes easier to finance the financial speculation.

      This is a valid argument, particularly with variable rate mortgages and the base rate (which, incidentally, Keynes also had kept at 2% for the 30s and most of the 40s). However, I think this is a regulatory problem that could exist at every rate of interest: the money should not go into asset speculation whatsoever.

      • #9 by Roman P. on February 20, 2013 - 12:35 pm

        I don’t think there is much correlation from the data on the prime interest rates between the rate and stock asset bubbles (the Dotcom boom, for example, occurred at much lower rates than in the 80’s). So I’d say there is no mechanical relationship between the interest rate and financial speculation.

        I don’t believe there are ways to prevent asset speculation in the modern economy. As long as there are people willing to speculate, even legal arrangements aren’t omnipotent.

      • #10 by Unlearningecon on February 20, 2013 - 8:35 pm

        I think you’re wrong, and I think I’ll do a post on it!

  4. #11 by Blue Aurora on February 21, 2013 - 10:49 am

    John von Neumann aside…

    So far Unlearningecon, and to the best of my knowledge, there has been no Post-Keynesian critique of the axioms of Subjective Expected Utility, a brainchild of Bruno de Finetti, Frank P. Ramsey, and Leonard J. Savage. At best, there are only citations of articles from the decision theory literature, particularly Allais (1953) and Ellsberg (1961). I have yet to see a formidable critique by a Post-Keynesian economist that doesn’t just fall back on either of these two journal articles, but instead surpasses them.

    As for John Maynard Keynes’s attitude toward Sir John R. Hicks’s IS/LM diagram…I believe that Robert Skidelsky suggests in The Economist as Saviour (Volume II of the Skidelsky trilogy of Keynes’s life) that Keynes just went along with IS/LM because he thought it would be a useful compromise.

    • #12 by Unlearningecon on February 23, 2013 - 1:38 pm

      As for John Maynard Keynes’s attitude toward Sir John R. Hicks’s IS/LM diagram…I believe that Robert Skidelsky suggests in The Economist as Saviour (Volume II of the Skidelsky trilogy of Keynes’s life) that Keynes just went along with IS/LM because he thought it would be a useful compromise.

      As I have remarked before, Keynes was politically savvy and preferred to fudge things if it meant his policies would get applied. It worked for the short term but in the long term it just means his lessons weren’t communicated well enough (insert snarky reference to ‘in the long run’ quote).

      • #13 by Blue Aurora on February 27, 2013 - 7:44 am

        Fair enough. I will concede that he didn’t communicate his opinion on the use of IS/LM clearly enough. If he had explicitly stated that it’s an “okay first approximation”, then perhaps we wouldn’t be debating over IS/LM as of 2013.

        However, you still have not responded to my first point.

      • #14 by Unlearningecon on February 28, 2013 - 8:40 pm

        In truth I’m not familiar enough with said literature to give you a worthwhile opinion.

      • #15 by Blue Aurora on March 1, 2013 - 8:22 am

        Fair enough.

        So when will you get around to fixing that, by reading Bruno de Finetti’s work on subjective probability, Frank P. Ramsey’s writings on subjective probability, and Leonard J. Savage’s The Foundations of Statistics (1954)?

        Dr. Michael Emmett Brady’s main problem with the state of economics as a whole is the failure to replace Subjective Expected Utility decision theory with a more general, more comprehensive one.

        With the axioms of S.E.U. consistently being violated in many replicated experiments by many researchers looking into the Allais paradox and the Ellsberg paradox, it’s becoming increasingly obvious that the prescriptive, predictive, and explanatory power(s) of S.E.U. in the real world is at best, highly limited to special cases.

        While Itzhak Gilboa and David Schmeidler have implemented something more comprehensive in the form of “Choquet integration”, so far, a more general theory has yet to replace the implicit S.E.U. foundation upon which much of economics is based upon.

        (IIRC, I believe Dr. Brady has stated that although Paul Krugman, Brad DeLong, Joseph Stiglitz, and Christina Romer are Keynesians, they all suffer the same problem James Tobin had: adherence and acceptance of S.E.U. as basic. In his view, that’s part of the reason why the Keynesians still haven’t gotten the upper-hand over the academic consensus in economics.)

  5. #16 by Boatwright on February 21, 2013 - 12:07 pm

    You mention Krugman’s “hatchet job” of Minsky.

    As a rank amateur with a bit of math in my background, I have been following the Minsky/Keen vs. the dragon battle with interest. It is clear that Keen’s work adding the techniques that engineering mathematics uses to model the behavior of multi-variable systems is very productive. The ability of Keen’s models to produce curves that closely match the real world data is remarkable.

    Most impressive is the appearance of time functions that yield chaotic cyclic results, attractors, fractals, etc.. On reflection, it is a slap to the forehead to see economic models that are beginning to see the economic world as it is: a chaotic, multi-variable, cyclical can of worms.

    Keen is the dragon slayer when it comes to the belief of the neo-classicals that the simple algebraic, and first order derivative functions used to project basic micro-economic concepts onto analyzing the macro-economy are sufficient to the task. In spite of the howling, it is clear that his methods are pointing the way to a new paradigm.

    Keen’s methods often seem heuristic. However, so what? Considering the teleological, ad-hoc, and ad-hominem quality of the neo-classical response to his evolving models, Keen and his followers’ insights and experimental methods seem likely to continue producing defensible results.

    • #17 by Unlearningecon on February 23, 2013 - 1:34 pm

      Yeah, economists reject Keen’s models for reasons that, if they were applied to their own models, would surely invalidate them. The close fit to the data alone should be enough for a science supposedly committed to prediction and falsification to take note.

  6. #18 by Magpie on February 22, 2013 - 8:17 am

    Your Neumann illustrative example was indeed very unfortunate, but not just for the reasons already discussed and which, to your credit, you acknowledged.

    It was unfortunate, too, because it provided a distraction which your critics were quick to exploit, as was the case of comment #1.

    My reading of your post, regarding Neumann, is that he himself considered a relative-frequency interpretation of probability (as opposed to a subjective probability interpretation) as necessary to his theory.

    (Incidentally, if memory serves, John Nash also included some similar comments in his thesis).

    By opportunistically making a big fuss about a marginal aspect (your example), your critic managed to take the focus away from what is the substance of your post (Neumann’s quote), and to cast doubt on your understanding of the matter.

    It may be unfair, your critic may have been intellectually dishonest, but this is how things are.

    A suggestion: cite the source of Neumann’s quote. Good luck.

    • #19 by Unlearningecon on February 23, 2013 - 1:32 pm

      Yeah, a throwaway mistake in a blog post is apparently enough evidence that I don’t understand economics, while the myriad of errors in neoclassicism can apparently be ignored because they are fixed later (they aren’t) or ‘it’s just an abstraction.’

      Anyway, I’ve added the source and a note.

  7. #20 by commenter on February 23, 2013 - 5:43 am

    I think you’ve misinterpreted von Neumann’s quote there. The quote appears to be a reference to the Bayesian vs. frequentist interpretations of probability, and has nothing to do with repeated games (although it is hard to be certain given the lack of a citation).

    • #21 by Unlearningecon on February 23, 2013 - 1:26 pm

      It’s from his book, in context I don’t think it is referencing the Bayesian/Frequentist distinctions.

      • #22 by commenter on February 23, 2013 - 6:36 pm

        2 paragraphs above your quote he explicitly states that he is considering choices made at a single point in time. He explicitly states that he is not considering repeated decisions.

        I think, rather than referring to the Bayesian/frequentist issue, he is referring to the objective vs. subjective notions of probability. At around the same time Savage was writing in favour of a subjective notion of probability (that allows us to deal with ambiguity as well as uncertainty) and it seems the von Neumann was simply pointing out that he views his utility framework as being based on objective probabilities.

      • #23 by Unlearningecon on February 23, 2013 - 7:56 pm

        Hmm, that’s not how I read it. What I read is that he is considering “events” (he uses the plural), but for simplification is condensing them down to one.

        (In any case, note that I think the E(u) approach is only valid for repeated experiments no matter what Neumann said.)

      • #24 by W on February 23, 2013 - 8:13 pm

        (This comment is, obviously, an interpolation in a third parties’ exchange…) Let’s suppose Von Neumann were talking of a choice made at a single point in time: then the Expectation is meant to be a chance variable in itself? Say, if the Expectation is not based upon averaging of previous outcomes…what might it yet be meant to be? I mean, if Expectations aren’t based upon previously realized outcomes (or to be more precise: upon a probabilistic framework allowing us to identify E with a function of previously realized outcomes), does that make the probabilistic framework specification of the game unknown? Thanks…!

      • #25 by commenter on February 23, 2013 - 8:53 pm

        The “events” are the A, B and C.
        Alternatively, if you work through the axiomatisation it is clear that repetition is not necessary.

        More telling, I think, is your assertion that if the ‘classics’ disagree with current theory then the current theory is wrong. But if the ‘classics’ disagree with you then the classics are wrong.

      • #26 by Unlearningecon on February 23, 2013 - 9:18 pm

        I am willing to rethink whether or not Von Neumann meant it for individual or repeated experiments. I’d like to see some quotes from him in another context.

        “More telling, I think, is your assertion that if the ‘classics’ disagree with current theory then the current theory is wrong. But if the ‘classics’ disagree with you then the classics are wrong.”

        This is an oversimplification and a straw man. I read von Neumann's position and thought that what I interpreted it as saying was reasonable. I still think that position is reasonable regardless of what von Neumann meant.

      • #27 by W on February 23, 2013 - 10:23 pm

        Commenter: so shall we say that the empirical relevance of a theoretical framework is undermined from the very beginning by means of axiomatisation, which in its turn allows us to derive the very outcome of a “game”?
        By the way, what do you mean by axiomatisation? Are you talking of axiomatisation in the sense of set theory, for instance? Of a specific axiomatisation, or just of axiomatisation in a broad, unspecified sense? Thanks!

      • #28 by commenter on February 24, 2013 - 1:56 am


        The main contribution of von Neumann is von Neumann and Morgenstern's representation theorem. What this theorem says is that an individual's preferences may be represented by an expected utility function if, and only if, those preferences satisfy the following four axioms:

        1. Completeness
        2. Transitivity
        3. Independence
        4. Continuity
        (see http://en.wikipedia.org/wiki/Expected_utility_hypothesis#The_von_Neumann-Morgenstern_axioms for more details)

        This is a very powerful theorem. It means that if you want to argue against expected utility then you need to identify which axiom, and under which conditions, fails.

        In experiments, subjects often fail Independence and sometimes fail transitivity (there are many variations of vN-M axioms that try to reconcile observed behaviour, with various levels of success).

        However, this doesn’t mean that vN-M is a bad theory – it is still very useful as a normative theory of choice (and as a positive theory of choice if we are in certain environments).

        The distinction between normative and positive economics is very important if you wish to understand decision theory – some theories describe how we *should* make choices, some theories describe how we *do* make choices and some try to do both. [This is a point that unlearning doesn’t seem to acknowledge.]
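        As a concrete (and entirely hypothetical) illustration of the theorem discussed above: once preferences satisfy the four axioms, choosing between lotteries reduces to comparing expected utilities, with no repetition of the gamble assumed. A minimal sketch, where the concave utility function and the two lotteries are invented for illustration:

```python
import math

def expected_utility(lottery, u):
    """Expected utility of a lottery given as (probability, outcome) pairs."""
    return sum(p * u(x) for p, x in lottery)

# A hypothetical risk-averse (concave) utility function.
u = lambda x: math.log(1 + x)

A = [(1.0, 50)]             # 50 for certain
B = [(0.5, 0), (0.5, 120)]  # a single coin flip between 0 and 120

# A vN-M agent simply picks the lottery with the higher expected utility.
print(expected_utility(A, u) > expected_utility(B, u))  # True: A is preferred
```

        With this concave u, the certain 50 beats the coin flip even though the flip has the higher expected monetary value (60 vs. 50); the concavity of u is what encodes risk aversion in the vN-M framework.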

      • #29 by W on February 24, 2013 - 6:00 am

        “if you want to argue against expected utility then you need to identify which axiom, and under which conditions, fails.”

        If the conditions under which the von Neumann theory of expected utility fails are to be discovered, and (therefore) the conditions under which it works are to be discovered, then the question to be asked is: how can anyone make sure (within an epistemological framework, if possible) that the vN theory is a (scientific, let's say) theory at all, in the sense of epistemology?

        You might say: “vN's is NOT a THEORY, BUT a THEOREM: a theorem on the REPRESENTATION of individual utility by means of expected utility”.

        Then we may think a little and arrive at the following question: what has economics (or economic theory, if you like) to do with THEOREMS?

        And even more: what has economics to do with THEOREMS ON REPRESENTATION?

        REPRESENTATION THEOREMS are theoretical tools of functional analysis (a branch of mathematics related to many others, such as real analysis, general topology, operator theory, and so on).

        So to speak, and for instance, vN's Independence axiom very much resembles the convolution property for distributions (generalised functions).

        There is only one possible application of this stuff, and you have already given it: NORMATIVE ECONOMICS; that is, a branch of theory that has no relation to macroeconomics, positive microeconomics, etc.

        vN's is not an economic theory; it is just a REPRESENTATION THEOREM of functional analysis, quite abstracted from positive phenomena. By some device, however, it seems to fit the research object of psychology: conscience, thought patterns, and the like.

      • #30 by Unlearningecon on February 26, 2013 - 1:26 pm

        I find the idea that expected utility is a normative framework curious. Personally, I would suggest that it is the wrong way to analyse expected payoffs, for reasons such as minimizing loss, eliminating highly improbable events no matter how high the payoff, etc.

        Insofar as the axioms are concerned, von Neumann's theorem remains a static one. Even if the axioms are fulfilled, an observed phenomenon like time inconsistency is enough to render the concept irrelevant for all practical purposes.
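        To make the objection concrete, here is a hypothetical pair of gambles (the numbers are invented): ranking by expected value is driven entirely by a very improbable jackpot, while a loss-minimising criterion such as maximin ranks the gambles the other way round.

```python
def expected_value(lottery):
    """Expected payoff of a lottery given as (probability, payoff) pairs."""
    return sum(p * x for p, x in lottery)

def maximin(lottery):
    """Worst-case payoff: a simple loss-minimising criterion."""
    return min(x for _, x in lottery)

safe    = [(1.0, 100)]                      # 100 for certain
jackpot = [(0.999, 0), (0.001, 1_000_000)]  # almost surely nothing

print(expected_value(jackpot) > expected_value(safe))  # True (~1000 vs 100)
print(maximin(safe) > maximin(jackpot))                # True (100 vs 0)
```

        Which criterion is "right" is exactly the normative question at issue: expected value (or expected utility with a mild utility function) is happy to chase the jackpot, while a loss-minimising decision-maker never would.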

  8. #31 by phil on February 23, 2013 - 10:06 pm

    Hi Unlearning,

    “First, Keynes did not believe that the central bank had control over the money supply:”

    I don’t see how those quotes demonstrate this. Isn’t Keynes just saying (in those quotes) that an investor can get money by borrowing it from banks or by issuing shares – before he makes his investment?

    What am I missing?

    • #32 by Unlearningecon on February 23, 2013 - 10:22 pm

      I think it could be interpreted in a number of ways but if you read that paper it seems Keynes is thinking in terms of endogenous money, as he talks of banks “controlling the money supply.”

      • #33 by phil on February 23, 2013 - 10:47 pm

        Banks “controlling the money supply” could just mean that their lending decisions (etc) determine the broad money supply, but it doesn’t necessarily mean that he was talking about banks ‘creating money’, does it?

      • #34 by Unlearningecon on February 23, 2013 - 11:53 pm

        Perhaps I am projecting a bit. It is my opinion that Keynes was aware of, and flirted with, endogenous money, but that he made concessions to the mainstream in order to communicate his point. See here:

        It is, therefore, on the effect of a falling wage- and price-level on the demand for money that those who believe in the self-adjusting quality of the economic system must rest the weight of their argument; though I am not aware that they have done so. If the quantity of money is itself a function of the wage- and price-level, there is, indeed, nothing to hope in this direction.

        I think Keynes’ arguments were all strengthened by endogenous money; he seems to be aware of it throughout his texts; he also makes comments that could be interpreted as in favour of endogenous money. However I am willing to concede that we cannot be certain what he meant.

        PS: it is worth noting that Keynes' remarks on their own are sufficient to throw doubt on the workings of IS/LM, in the same way endogenous money would.

      • #35 by W on February 24, 2013 - 4:17 am

        I think that an endogenous nature of money (a regime-dependent endogenous money, let's say) would lead to the sort of dilemma concerning the distinction between the IS and LM curves that you already pointed at (take my comment of 11:35, which points to the “money side” of fiscal policy: say, within a liquidity-preference situation, it is only fiscal expenditure that has some control over the money supply. By providing a sort of relative-price/risk anchor, it gives the market a device to bound the behaviour of the risk premium, thereby enabling the money supply to expand, and so on).

        The interdependence of both curves is quite clear in that context: still, there remains the possibility of a truly dynamic formulation of the IS/LM model that takes account of that interdependence, I would say.

  9. #36 by W on February 23, 2013 - 11:35 pm

    Half of all Keynes's work (namely from the Treatise, where he started talking of the “Treasury View”, onwards: the half related to the GT, the Bretton Woods architecture, etc.) may be summarised with the sentence: “Keynes did not believe that the central bank had control over the money supply”.

    In short: as long as the central bank has no control over the (endogenous) money supply, there remains fiscal policy to control it. This is an alternative way of stating the very purpose of fiscal policy: instead of linking it to employment, etc., it may be straightforwardly linked to the behaviour of the money supply!

    Add to that an international dimension: for instance, stating that the (in)ability of a particular central bank to control the money supply depends, through exchange-rate arrangements, on the (in)abilities of other central banks, and you will also easily grasp that fiscal policy has, in Keynes's theory, an international dimension as well: there might be a need for more than one country to set fiscal policies in order to have some leverage, and so on.


    • #37 by W on February 24, 2013 - 12:11 am

      Keynes's position regarding endogenous vs. exogenous money (or inside vs. outside money) was mainly related to the sort of world monetary (or exchange-rate) regime at work. Such a stance is perhaps obscured by the fact that from, say, the thirties onwards (from the Treatise, written around the Great Crash, onwards), the world's monetary regimes were much like endogenous-money regimes (though arguably only until the end of the Bretton Woods era, around the oil shocks, turning again into an inside-money world around the 90s, probably, according to, among others, AL: this late shift being related to deregulation plus the lack of an explicit world monetary standard, previously represented by the dollar).

      But take the Tract on Monetary Reform, on the economics of the period after World War I, where floating standards coexisted all along, and there you will find a rather outside-money theory (the quantity of money being very much related to the level of the exchange rate, and so on).

      • #38 by Unlearningecon on February 26, 2013 - 1:27 pm

        Are you suggesting that there was an exogenous money regime in the 70s and 80s? If so, why did Friedman's monetarism fail?

      • #39 by W on February 26, 2013 - 6:28 pm

        The main point in what I've said is that in Keynes (as compared with, broadly speaking, mainstream macroeconomics, in which historical institutions have almost no role) the endogenous vs. exogenous money issue is a regime-dependent one. Keynes's first book was Indian Currency and Finance, primarily on the workings of the gold (or gold-exchange) standard. Later on, he wrote the Tract (dealing with the quantity theory in its Cambridge version, among other things). Even later, in the Treatise (in which the “Treasury view” appears, probably for the first time), he deals with the (economic consequences of the) restoration of sterling's prewar value: here he takes up the Wicksellian restatement of the quantity theory (leaving room for endogenous money). In the GT, he deals explicitly with the issue of fiscal policy as a means to control a (quite endogenous) money supply.

        After Bretton Woods, the United States ran a sort of monetary-policy regime depicted as a Random-Walk Monetary Standard by AL (building upon Benjamin Klein): one of its central features, as far as I can rely on my own memory, was the unpredictability of the exogenous money supply.

        Anyway, what do you mean by the failure of Friedman's monetarism? By the time of the oil shocks and the stagflation period, for instance, as far as I can recall, Monetarism debunked Keynesianism (which beared little relation to Keynes's theory: this is the subject of AL's 1968 book). Rigorously speaking, both Monetarism and Keynesianism lost the battle in that period (against common sense).

      • #40 by Unlearningecon on February 27, 2013 - 8:49 am

        What I mean is that they attempted to control the supply of money exogenously in the 1980s in the UK, the US and Chile, and failed on all three counts.

      • #41 by W on February 27, 2013 - 5:59 am

        I should have written “…Monetarism debunked Keynesianism (which BORE little relation to Keynes's theory)”. My apologies!

      • #42 by W on February 27, 2013 - 4:47 pm

        The words “…but arguably, only until the end of the BW era, around the Oil Shocks…” might have suggested (and they were indeed, at least partly, meant to suggest) an exogenous-money period after BW.

        To be honest, by now I can only relate the early post-BW economy to the aforementioned Random Walk “monetary standard” (Ben Klein's expression). For example, I can recall AL speaking of the necessity for the US Fed to end the unpredictability of the money supply (particularly in an inflationary sense) and to establish a sound expectations regime, so that the market would not behave as one expecting ever-higher future inflation rates and thereby bring about higher levels of long-term bond interest rates.

        The Random Walk monetary standard was related to the financing of the US Treasury's deficit policies (say, expansionary policies far beyond the growth rate of tax revenues, and so on). I am not quite sure (I am speaking only in a very broad sense), but it was probably related to previous events such as the Vietnam policy, etc. As long as the Treasury did not abandon such policies, the US monetary regime would not have been able to improve, and so on.

        In the same period, however, the Asian foreign surplus had an important role, in several ways (a huge inflow of foreign capital may be seen as a prime factor within an overall growing money supply, obviously).

        A world economy with a dollar supply driven by the Treasury's deficit needs would easily produce pretty rich dynamics regarding peripheral currencies (including their inside vs. outside supply causation).

        Besides all that, have I understood you correctly as speaking of the inability to exogenously control a DOWNWARD money-supply tendency? Am I right?

  10. #43 by Will on February 24, 2013 - 10:10 pm

    “(they both, confusingly, called neoclassicals ‘classicals’)”

    That has stuck in my craw for years! The term “neoclassical” is already so ambiguous as to be worthless. Did you know that it was coined by Veblen, to describe late-nineteenth-century English economists like Cairnes, Marshall, and Neville Keynes? He actually meant to differentiate them from the continental and American economists, since they still held on to elements of Ricardo. In present usage, it often means the exact opposite!

    • #44 by Unlearningecon on February 25, 2013 - 9:47 am

      Whatever its etymology, it can actually be well defined at this point, assuming by ‘neoclassical’ we mean ‘mainstream.’ See this essay:

      The defining characteristic of neoclassical economics is its methodological core, which has three main components:

      (1) Methodological individualism – the economy is modeled on the basis of the behaviour of individual agents.

      (2) Methodological instrumentalism – individuals act in accordance with certain preference rankings, to attain some end goal that they deem desirable.

      (3) Methodological equilibration – given the above two, economics asks what will happen if we assume equilibrium. Note that this doesn’t necessarily posit that the system will end up in equilibrium (although that is often the case), but rather seeks to find out what will happen if we use equilibrium as an epistemological starting point.

      • #45 by Boatwright on February 26, 2013 - 4:56 am

        Question: What IS our epistemological starting point?

        After following the discussion here for the last week, I am compelled to ask: what exactly is the point of economic study? Is it rational and scientific; does it yield repeatable real-world results? Or is it merely normative, with a veneer of self-referential logic and arcane operations? When faced with the difficulties of modelling actual economic behaviour and cycles, we find economists assuming, for example, equilibrium where none seems to actually exist; ignoring the true functions of banks; or failing even to acknowledge that fundamental physical laws, such as those that describe thermodynamics, exist.

        Simple reflection indicates that surely the flows of energy through our economy must follow these laws. However, we find that Frederick Soddy, who a century ago criticized the still-prevailing view of the economy as a perpetual-motion machine ever creating riches, is virtually unknown. His modern intellectual descendants, such as Herman Daly, are either ignored or subjected to shallow criticism and progressivist dogma.

        Far too much of what passes for “scientific” economic theory is nothing more than scholasticism, finding truth in the definitions of words and the way things ought to be, without being honest enough to study things as they are, and to admit that much of what we claim to understand may in fact be chaotic and as far as the fine grain is concerned, unknowable.

        We would do well to start with a serious and fearless look at our chosen blind spots and unexamined assumptions. We will likely find that much of what we now accept as economic truth is empty.

      • #46 by Unlearningecon on February 26, 2013 - 9:53 am

        I’m not sure about conservation laws and thermodynamics in economics. The economy is something of an organic system so I’d be more inclined to make analogies with biology than physics. It is often asserted by creationists that evolution contradicts the 2nd law of thermodynamics, but this is superficial. I think it is equally superficial to apply it to the economy in the way you suggest.

        Anyway, thanks for your comments. I agree with you about economic theory, obviously.

      • #47 by Boatwright on February 26, 2013 - 2:00 pm

        It is a common misconception to look at the laws of physics as existing in a different category from the other sciences: mechanistic, non-“organic”, as you note. However, all science, including biology, views these laws as fundamental. Everything in the universe, all life, and all economic activity is fundamentally subject to these laws. The science of ecology views entropy as fundamental and also uses many terms identical to those used by economists to describe the flow of energy and resources through natural systems. Are we to suppose that the human economy, with all of its abstractions, is somehow not an ecology also subject to these same laws? This is not a trivial point.

        For example: it is an unexamined assumption of mainstream economics that resource depletion is not a problem, because the market, prompted by the pricing mechanism, will always find new sources or produce substitutes. In mathematical terms, this ASSUMES both nulls and infinities where no evidence is offered that they actually exist.

        This sort of thinking is simply shallow — a matter of faith — and is certainly not scientific, which brings me back to my original question: In economic theory, what is the epistemological starting point?

        Many years ago, my high-school physics teacher described the Second Law of Thermodynamics, the entropy law, as: “There is no such thing as a free lunch” — an economic statement for sure. I suspect that one reason mainstream economics chooses to ignore fundamental science is the huge problem it creates for theory. When one removes the nulls and infinities, or tries to insert new terms, equilibrium disappears and the foundations crumble.

        There is a solipsistic quality to much of what passes for economic proof. Economics is crawling with reductionists and cranks, including the famous and powerful. Ideology creeps in, and it is easy to be seduced by the beauty of difficult math. We need to remember what Einstein said: “If you are out to describe the truth, leave elegance to the tailor.” Economics, if it wants to grow up and become a real science, must include fundamental scientific ideas in its theories and models.

      • #48 by Unlearningecon on February 28, 2013 - 8:51 pm

        Great comment.

        “In economic theory, what is the epistemological starting point?”

        An optimising individual or entity such as a firm, maximising some goal subject to a constraint, in equilibrium.

  11. #49 by Boatwright on March 1, 2013 - 3:41 am

    A modest proposal: I think it important that economists consider the constraints on what is knowable in the economic world and therefore what forms useful theory may take.

    I will give an example. A firm can calculate with acceptable accuracy the point at which the quantity of goods manufactured and sold at a given price will meet its fixed costs and the firm can begin to expect profits. This microeconomic model is then generalized as a component of macroeconomic theory, with a variety of assumptions as to equivalence, etc. Assuming that this model of productivity and profitability can yield consistent results has a big problem, however: the number of variables, or even unknowns, in any real market is large. Prices of commodity raw materials can change, shifts in consumer preferences can affect sales, the weather can be unusual, and so on. Factors large and small can proliferate, making accurate predictions impossible.
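    The firm's calculation can be sketched as a simple break-even computation (a hypothetical example; the figures are invented):

```python
def break_even_quantity(fixed_costs, price, unit_cost):
    """Quantity at which revenue covers fixed plus variable costs."""
    margin = price - unit_cost  # contribution of each unit sold
    if margin <= 0:
        raise ValueError("price must exceed unit cost to break even")
    return fixed_costs / margin

# e.g. fixed costs of 50,000; each unit sells for 25 and costs 15 to make
print(break_even_quantity(50_000, 25, 15))  # 5000.0; profits begin beyond this
```

    Every input here (price, unit cost, even the fixed costs) is uncertain in exactly the ways described above, which is why the clean calculation rarely survives contact with a real market.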

    In the real world, businesses adapt to the endlessly variable and chaotic real world heuristically. Economists need to accept that, as elegant as their generalized theories may be, they likely yield only an idealized view of the economy. Whether it is the central bank, politicians, or quants on Wall Street, believing that their models can reliably picture the macroeconomy or events in financial markets is immature. Much of existing theory is in fact tautological, and black swans appear with alarming frequency. For economics to grow out of its present confusion, it needs both to narrow its expectations and to broaden its outlook to include ideas from the sciences, including the engineering concepts used to model multi-variable systems, and especially the insights of ecologists and physicists studying the mathematics of chaotic events.
