Archive for February, 2013

Interest Rates: Too High, Not Too Low

I have previously referenced my support for the idea, advocated by Keynes and Adam Smith, that keeping long term interest rates low is a desirable stance for monetary policy. The claim about the effect of low rates is twofold:

(1) Low rates reduce the cost of investment and so encourage it.

(2) Low rates reduce the yields an investment must generate to pay back the debt incurred, and hence encourage more sustainable, less speculative investments. To phrase it conversely: high rates push people into speculation as they attempt to recoup the money they owe.

Commenter Roman P. is not convinced by this argument. I am willing to admit I have, thus far, provided insufficient evidence for this, mostly due to lack of data. However, I have assembled what data I can below, and believe it offers broad – though not definitive – support for this hypothesis.

A few caveats. First, let me establish clear criteria for what I consider to be ‘low rates.’ John Maynard Keynes wanted the long term interest rate to be as low as 2.5%; he even remarked that 3.5% would be too high for full employment:

There is, surely, overwhelming evidence that even the present reduced rate of 3½ per cent on long-term gilt-edged stocks is far above the equilibrium level – meaning by ‘equilibrium’ the rate which is compatible with the full employment of our resources of men and equipment.

For most of the data, the rate is above even the 5% that Adam Smith thought should be the cap, lest the capital of a country be “wasted.” Obviously we shouldn’t believe something simply because Keynes and Smith did, but hopefully the evidence I present below will lend some credibility to their arguments.

Second, what matters will not be just the interest rate; expectations – and the realised trajectory – of the interest rate will also be important. If the rate is rising then it will have a similar impact on investment decisions as an already high rate. If the Central Bank (CB) is committed to a policy of low rates, then it will be far more stabilizing than if rates happen to hit a low point and subsequently bounce back. We do have a test for an explicit low rate policy: the post-WW2 arrangements. It is common knowledge that the stability in that period was unprecedented.*

Third, let me make the obligatory ‘correlation does not equal causation’ remarks. Nevertheless, correlation at least gives us a clue about causation. A further clue is if what we think is the causal variable (the interest rate) moves first, and the dependent variable (growth) moves second. We also have a plausible theoretical link for the causation. Lastly, it is empirically verified that businesses consider long term rates the most important interest rate in their borrowing decisions.
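To make the lead-lag point concrete, here is a minimal sketch of the sort of check I have in mind, assuming you have annual series for a long term interest rate and for growth; the figures below are placeholders rather than real data:

```python
import numpy as np

# Placeholder numbers only: substitute, e.g., an annual long term corporate bond
# yield and annual real GDP growth over the same years.
rates = np.array([4.1, 4.5, 5.2, 6.0, 5.8, 5.1, 4.7, 4.9, 5.6, 6.3, 5.9, 5.2])
growth = np.array([3.0, 2.8, 2.1, 0.5, -0.8, 1.5, 2.9, 3.1, 2.4, 1.0, -0.5, 1.8])

def lagged_corr(x, y, lag):
    """Correlation between x at time t and y at time t + lag."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# If rate rises lead growth slowdowns, the correlation should be most negative
# at a positive lag (rates move first, growth responds a year or two later).
for lag in range(4):
    print(f"corr(rate_t, growth_t+{lag}) = {lagged_corr(rates, growth, lag):+.2f}")
```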

So what does the evidence look like? Let’s start by taking a look at the ‘Prime Loan Rate’ in the US for the second half of the 20th century. This is the interest rate banks charge their most creditworthy customers, mostly big businesses:

Every single recession is preceded by an increase in rates. Not every rise in interest rates creates a recession – there is one peak without a recession, around 1983-4. However, this may well be explained by movements in the base rate, which dropped from 11% to 8% in that period. By the next recession it had settled at about 6%; that recession seems to have ended when it was reduced to 3%.

The data for the prime rate only go as far back as 1955, so I’ll use two of Moody’s corporate bond measures for the first half of the 20th century:

Again we observe a similar pattern with rate increases and recessions. Furthermore, the high rate, high volatility period between WW1 and WW2 sits in stark contrast with the low rate, low volatility period post-WW2. It’s interesting to note that rates – though high, relative to our benchmark of 2.5% – were not that high during the stock market boom of the 1920s. Certainly the spike in rates after the first crash is what seemed to bury the economy.

Update: commenter Magpie helpfully pointed out that the Moody’s data could be lagged, which is why it falls inside recessions instead of before them. Indeed, this is what we see when we compare it to the prime rate post-WW2: the spikes are late.

Interest rates in the UK show a similar pattern, as do UK recessions. Unfortunately I do not have access to – or the competence to create – graphs like the above for the UK.

Overall, it seems that high or rising rates accompany either periods of substantial economic turmoil or periods where speculation is rampant and bubbles are building up. It is possible that speculation fuels further rises in the interest rate as the perpetrators become overconfident about their potential gains – a positive feedback loop.

Clearly the central bank does not control corporate borrowing rates directly. However, it does control government bond rates, and I would argue that this rate, as a benchmark, has a significant impact on other interest rates in the economy. Indeed this is borne out by the data:

(For a more comprehensive, but uglier, graph of the correlation between government and corporate bond yields, see here).

A central bank committed to low rates could help quell this, as we observe in the post-WW2 data. Naturally, such a policy requires a degree of monetary autonomy that central banks have not had since the Bretton Woods system was in place, lest rates be disrupted by international flows.

I think the evidence presented here is a blow to the ‘too low for too long’ meme that pervades discussion of the crisis. There seems to be a belief that low rates are somehow ‘artificial’ (relative to what, exactly?) and that we need to ‘get back to reality.’ In fact, it seems that raising rates to ‘check’ a bubble may both fuel speculation and needlessly invalidate potential investments, hence creating the very situation the central bank purportedly wanted to prevent.

*Unless you lived in Guatemala or Iran, of course.


Misinterpretations in Mainstream Economics

It is my opinion that major areas of neoclassical economics rest on misinterpretations of original texts. Though new ideas are regularly recognised as important and incorporated into the mainstream framework, this framework is fairly rigid: models must be micro founded, agents must be optimising, and – particularly in the case of undergraduate economics – the model can be represented as two intersecting curves. The result is that the concepts that certain thinkers were trying to elucidate get taken out of context, contorted, and misunderstood. There are many instances of this, but I will illustrate the problem with three major examples: John Maynard Keynes, John Von Neumann and William Phillips.

Keynes, in two lines

It is a common trope to suggest that John Hicks’ IS/LM interpretation of Keynes’ General Theory was wrong. It also happens to be true, as Hicks himself acknowledged over 40 years after his original article.

IS/LM, or something like it, was being developed independently of Keynes during the 1920s and 30s by Dennis Robertson, Hicks and others, who sought to understand interest rates and investment in terms of neoclassical equilibrium. Hence, Hicks tried to annex Keynes into this framework (they both, confusingly, called neoclassicals ‘classicals’). Keynes’ theory was reduced to two intersecting lines that looked a lot like demand-supply. The two schedules were derived from the equilibrium points of the demand and supply for money (LM), and the equilibrium points of the demand and supply for goods and services (IS). In order to reach ‘full employment’ equilibrium, the central bank could increase the money supply, or the government could expand fiscal policy. Unfortunately, such a glib interpretation of Keynes is flawed for a number of reasons:

First, Keynes did not believe that the central bank had control over the money supply:

…an investment decision (Prof. Ohlin’s investment ex-ante) may sometimes involve a temporary demand for money before it is carried out, quite distinct from the demand for active balances which will arise as a result of the investment activity whilst it is going on. This demand may arise in the following way.

Planned investment—i.e. investment ex-ante—may have to secure its “financial provision” before the investment takes place…There has, therefore, to be a technique to bridge this gap between the time when the decision to invest is taken and the time when the correlative investment and saving actually occur. This service may be provided either by the new issue market or by the banks;—which it is, makes no difference.

Since Hicks’ model relies on a ‘loanable funds’ theory of money, where the interest rate equates savings with investment and the central bank controls the money supply, it clearly doesn’t apply in Keynes’ world. An attempt to apply endogenous money to IS/LM will result in absurdities: an increase in loan-financed investment, part of the IS curve, will create an expansion in M, part of the LM curve. Likewise, M will adjust downwards as economic activity winds down. So the two curves cannot move independently, which violates a key assumption of this type of analysis.

Second, Keynes did not believe the interest rate had simple, linear effects on investment:

I see no reason to be in the slightest degree doubtful about the initiating causes of the slump….The leading characteristic was an extraordinary willingness to borrow money for the purposes of new real investment at very high rates of interest.

and:

But over and above this it is an essential characteristic of the boom that investments which will in fact yield, say, 2 per cent. in conditions of full employment are made in the expectation of a yield of, say, 6 per cent., and are valued accordingly. When the disillusion comes, this expectation is replaced by a contrary “error of pessimism”, with the result that the investments, which would in fact yield 2 per cent. in conditions of full employment, are expected to yield less than nothing…

…A boom is a situation in which over-optimism triumphs over a rate of interest which, in a cooler light, would be seen to be excessive.

So, again, the simple, mechanistic adjustments in IS/LM are inaccurate. The magnitude of the interest rate will change not just the level, but also the type of investment taking place. Higher rates increase speculation and destabilise the economy, whereas low rates encourage real capital formation. This key link between bubbles, the financial sector and the real economy was lost in IS/LM, and also in neoclassical economics as a whole.

Third – and this is something I have spoken about before – Hicks glossed over Keynes’ use of the concept of irreducible uncertainty, which was key to his theory. The result was a contradiction, something Hicks noted in the aforementioned ‘explanation’ for IS/LM. The demand for money was, for Keynes, a direct result of uncertainty, and in a time period sufficient to produce uncertainty (such as Keynes’ suggested 1 year), expectations would be constantly shifting. Since the demand for money, savings and investment all depended on expectations, the curves would move interdependently, undermining the analysis. On the other hand, in a time period short enough to hold expectations ‘constant’ and hence avoid this (Hicks suggested a week), there would be no uncertainty, no liquidity preference and therefore no LM curve.

Hicks’ attempt to shoehorn Keynes’ book into his pre-constructed framework led to oversimplifications and a contradiction, and obscured one of Keynes’ key insights: that permanently low long term interest rates are required to achieve full employment. The result is that Keynes has been reduced to ‘stimulus,’ whether fiscal or monetary, in downturns, and the reasons for the success of his policies post-WW2 are forgotten.

Phillips and his curve

Another key aspect – along with IS/LM – of the post-WW2 ‘Keynesian’ synthesis was the ‘Phillips Curve,’ an inverse relationship between inflation and unemployment observed by Phillips in 1958. Neoclassical economists reduced this to the suggestion that there was a simple trade-off between inflation and unemployment, and that policymakers could choose where to sit on the Phillips Curve, depending on circumstances.

Predictably, this is not really what Phillips had in mind. What he observed was not a relationship between price inflation and unemployment, but one between unemployment and the rate of change of money wages. Furthermore, it was not a static trade-off, but a dynamic process that occurred over the course of the business cycle. During the slump, society would observe high unemployment and low wage inflation; in the boom, low unemployment would accompany high wage inflation. This is why, if you look at the diagrams in his original paper, Phillips numbered his points and joined them all together – he was interested in the time path of the economy, not just a simple mechanistic relationship. The basic correlation between wages and unemployment was just a starting point.
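To see what joining the points in time order looks like, here is a minimal sketch with entirely made-up numbers; the point is only that the same unemployment rate sits higher on the wage-inflation axis during the upswing than during the downswing:

```python
import matplotlib.pyplot as plt

# Entirely made-up figures for one stylised trade cycle: (unemployment %, wage inflation %),
# listed in time order, tracing the sort of loop Phillips drew in his 1958 paper.
cycle = [(7.0, 1.0), (6.0, 2.0), (4.5, 4.0), (3.5, 6.5),   # upswing
         (4.0, 5.0), (5.5, 3.0), (7.0, 1.5), (7.5, 0.5)]   # downswing

u, w = zip(*cycle)
plt.plot(u, w, marker='o')                    # join the points in time order
for i, point in enumerate(cycle, start=1):
    plt.annotate(str(i), point)               # number each observation, as Phillips did
plt.xlabel('Unemployment (%)')
plt.ylabel('Rate of change of money wages (%)')
plt.title('Stylised Phillips loop over one trade cycle')
plt.show()
```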

Contrary to what those who misinterpreted him believed, Phillips was not unaware of the influence of expectations and the trajectory of the economy on the variables he was discussing; in fact, it was an important pillar of his analysis:

There is also a clear tendency for the rate of change of money wage rates at any given level of unemployment to be above the average for that level of unemployment when unemployment is decreasing during the upswing of a trade cycle and to be below the average for that level of unemployment when unemployment is increasing during the downswing of a trade cycle…

…the rate of change of money wage rates can be explained by the level of unemployment and the rate of change of unemployment.

Finally, whatever Phillips’ theoretical conclusions, it is clear he did not intend even a correctly interpreted version of his work to be the foundation of macroeconomics:

These conclusions are of course tentative. There is need for much more detailed research into the relations between unemployment, wage rates, prices and productivity.

Had neoclassical economists interpreted Phillips correctly, they would have seen that he thought dynamics and expectations were important (he was, after all, an engineer), and we wouldn’t have been driven back to the stone age with the supposed ‘revolution’ of the 1970s.

An irrational approach to Von Neumann

In microeconomics, the approach to ‘uncertainty’ (a misnomer) emphasises the trade-off between potential risks and their respective payoffs. Typically, you will see an example that looks something like the following (if you aren’t a mathematician, don’t be put off – it’s just arithmetic):

Candidate   Probability   Payoff (Home)   Payoff (Abroad)
A           0.6           £300k           £200k
B           0.4           £100k           £200k

The question is whether a company will invest at home or abroad. There is an election coming up, and one candidate (B) is an evil socialist who will raise taxes, while the other one (A) is a capitalist hero who will lower them. Hence, the payoffs for the investment will differ drastically based on which candidate wins. Abroad, however, there is no election, and the payoff is certain in either case; the outcome of the domestic election is irrelevant.

The neoclassical ‘expected utility’ approach is to multiply the relative payoffs by the respective probability of them happening, to get the ‘expected’ or ‘average’ payoff of each action. So you get:

For investing abroad: £200k, regardless

For investing at home: (0.6 x £300k) + (0.4 x £100k) = £220k

Note: for simplicity, I am assuming utility is simply equal to the payoff. Changing the utility function can change the decision rule, but the same problem – that what is rational for repeated decisions can seem irrational for a one-off decision – will still apply.
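For anyone who wants the arithmetic spelled out, here is a minimal sketch of the expected value calculation, under the same simplifying assumption that utility equals the payoff:

```python
# A minimal sketch of the expected payoff arithmetic above (utility = payoff, figures in £k).
probabilities = {'A': 0.6, 'B': 0.4}           # chance of each election outcome
payoffs = {
    'home':   {'A': 300, 'B': 100},            # payoff depends on who wins
    'abroad': {'A': 200, 'B': 200},            # payoff is the same either way
}

for option, payoff in payoffs.items():
    expected = sum(probabilities[c] * payoff[c] for c in probabilities)
    print(f"Expected payoff of investing {option}: £{expected:.0f}k")

# Expected payoff of investing home: £220k
# Expected payoff of investing abroad: £200k
```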

So investing at home is preferred. Supposedly, this is the ‘rational’ way of calculating such payoffs. But a quick glance will reveal this approach to be questionable at best. Would a company make a one-off investment with such uncertain returns? How would they secure funding? Surely they’d put off the investment until after the election, or go with the abroad option, which is far more reliable?

So what caused neoclassical economists to rely on this incorrect definition of ‘rationality’? A misinterpretation, of course! One need look no further than Von Neumann’s original writings to see that he only thought his analysis would apply to repeated experiments:

Probability has often been visualized as a subjective concept more or less in the nature of estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as frequency in long runs.

Such an approach makes sense – if the payoffs have time to average out, then an agent will choose one which is, on average, the best. But in the short term it is not a rational strategy: agents will look for certainty; minimise losses; discount probabilities that are too low, no matter how high the potential payoff. This is indeed the behaviour people demonstrate in experiments, the results of which neoclassical economists regard as ‘paradoxes.’ A correct understanding of probability reveals that they are anything but.
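A quick simulation, using the hypothetical figures from the table above, makes the distinction between long-run averages and one-off choices concrete:

```python
import random

random.seed(1)

def home_investment():
    """One domestic investment: £300k if A wins (p = 0.6), £100k if B wins (p = 0.4)."""
    return 300 if random.random() < 0.6 else 100

# Repeated many times, the average payoff converges on the 'expected' £220k...
draws = [home_investment() for _ in range(10_000)]
print(sum(draws) / len(draws))                          # roughly 220

# ...but any single, unrepeatable decision still faces a 40% chance of ending up
# with £100k, half the certain £200k on offer abroad. Frequency-in-long-runs
# reasoning says little about that one-off choice.
print(sum(1 for d in draws if d == 100) / len(draws))   # roughly 0.4
```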

Getting it right

There are surely many more examples of misinterpretations leading to problems: Paul Krugman’s hatchet job on Hyman Minsky, which completely missed out endogenous money and hence the point, was a great example. The development economist Evsey Domar reportedly regretted creating his model, which was not supposed to be an explanation for long run growth but was used for it nonetheless. Similarly, Arthur Lewis lamented the misguided criticisms thrown at his model based on misreadings of misreadings, and naive attempts to emphasise the neoclassical section of his paper, which he deemed unimportant.

This is not to say we should blindly follow whatever a particularly great thinker had to say. However, indifference toward the ‘true message’ of someone’s work is bound to cause problems. By plucking various thinkers’ concepts out of context and fitting them together inside your own framework, you are bound to miss the point, or worse, contradict yourself. Often a particular thinker’s framework must be seen as a whole if one is truly to understand their perspective and its implications. Perhaps, had neoclassical economists been more careful about this, they wouldn’t have dropped key insights from the past.


Sorry, Economists: The Crisis is a Huge Problem for Your Discipline

I recently stumbled upon a reddit post called ‘A collection of links every critic of economics should read.‘ One of the weaker links is a defence of economists post-crisis by Gilles Saint-Paul. It doesn’t argue that economists actually did a good job foreseeing the crisis; nor does it argue they have made substantial changes since the crisis. It argues that the crisis is irrelevant. It is, frankly, an exercise in confirmation bias and special pleading, and must be fisked in the name of all that is good and holy.

Saint-Paul starts by exploring the purpose of economists:

If they are academics, they are supposed to move the frontier of research by providing new theories, methodologies, and empirical findings.

Yes, all in the name of explaining what is happening in the real world! If economists claim their discipline is anything more than collective mathematical navel gazing, then their models must have real world corroboration. If this is not yet the case, then progress should be in that direction. Saint-Paul is apparently happy with a situation where economists devise new theories and all nod and stroke their beards, in complete isolation from the real world.

He continues:

If [economists] work for a public administration, they will quite often evaluate policies.

Hopefully ones that prevent or cushion financial crises, surely? Wait – apparently this is not a major consideration:

One might think that since economists did not forecast the crisis, they are useless. It would be equally ridiculous to say that doctors were useless since they did not forecast AIDS or mad cow disease.

Yet again, an economist insists on analogies to hard sciences that make no sense.

AIDS and mad cow disease were random mutations of existing diseases and so could not have been foreseen. Financial crises are repeated and have occurred throughout history. They demonstrate clear, repeated patterns: debt build-ups; asset inflation; slow recoveries. Yet despite this, doctors have made more progress on AIDS and MCD in a few decades than economists have on financial crises in a few centuries. It was worrying enough that DSGE models were unable to model the Great Depression, but given that ‘it’ has now happened again, under very similar circumstances, you’d think that alarm bells might be going off inside the discipline.

Saint-Paul now starts to defend economics at its most absurd:

One example of a consistent theory is the Black-Scholes option pricing model. Upon its introduction, the theory was adopted by market participants to price options, and thus became a correct model of pricing precisely because people knew it.

This is the same model whose use has consistently been associated with financial collapse, right? Anyway…

Similarly, any macroeconomic theory that, in the midst of the housing bubble, would have predicted a financial crisis two years ahead with certainty would have triggered, by virtue of speculation, an immediate stock market crash and a spiral of de-leveraging and de-intermediation which would have depressed investment and consumption. In other words, the crisis would have happened immediately, not in two years, thus invalidating the theory.

‘A crisis will happen if these steps are not taken to prevent it’ is not the same as ‘Lehman Brothers will collapse for certain on September 15th, 2008.’ Saint-Paul conflates different levels and types of prediction. Nobody is suggesting economists should give us a precise date. What people are suggesting is that, by now, economists should know the key causal factors of financial crises and give advice on how to prevent them.

Saint-Paul charges critics with:

…[ignoring] that economics is a science that interacts with the object it is studying.

Why he thinks this is beyond me, seeing as the whole criticism is that policies designed by economists had a hand in causing the crash. Predictably, he goes on to state a ‘hard’ version of the Lucas Critique, the go-to argument for economists defending their microfoundations:

Economic knowledge is diffused throughout society and eventually affects the behaviour of economic agents. This in turn alters the working of the economy. Therefore, a model can only be correct if it is consistent with its own feedback effect on how the economy works. An economic theory that does not pass this test may work for a while, but it will turn out to be incorrect as soon as it is widely believed and implemented in the actual plans of firms and consumers. Paradoxically, the only chance for such a theory to be correct is for most people to ignore it.

It is reasonable to suggest policy will have some impact on the behaviour of economic agents. It is absurd to suggest this will always have the effect of rendering the policy (model) useless (irrelevant). It is even more absurd to suggest that we can ever design a model that sidesteps this problem completely. What we have is a continually changing relationship between policy and economic behaviour, and this must be taken into account when designing policy. It doesn’t follow that we should fall back on economists’ preferred methods despite their clear empirical failure.

Saint-Paul moves on – now, apparently, the problem is not that economists’ theories don’t behave like reality, but that reality doesn’t behave like economists’ theories:

In other words, if market participants had been more literate in, or more trustful of economics, the asset bubbles and the crisis might have been avoided.

If only everyone believed, then everything would be fine! Obviously, the simple counterpart to this is that many investors and banks did believe in the EMH or some variant of it, yet, as always, reality had the final say, as happened with the aforementioned Black-Scholes equation.

Saint-Paul now attempts to play the ‘get out of reality completely’ card:

While it is valuable to understand how the economy actually works, it is also valuable to understand how it would behave in an equilibrium situation where the agents’ knowledge of the right model of the economy is consistent with that model, which is what we call a “rational expectations equilibrium”. Just because such equilibria do not describe past data well does not mean they are useless abstraction. Their descriptive failure tells us something about the economy being in an unstable regime, and their predictions tell is something about what a stable regime looks like.

Basically, Saint-Paul is arguing that economic models should be unfalsifiable. Since we can hazard a guess that he isn’t too bothered about unrealistic assumptions, given the models he is defending, and since he clearly doesn’t care about predictions either, he has successfully jumped the shark. Economists want to be left alone to build their models which posit conditions which are never fulfilled in the real world, and that’s final!

As if this wasn’t enough, he proceeds to castigate the idea that economists should even attempt to expand their horizons:

The problem with the “broad picture” approach, regardless of the intellectual quality of those contributions, is that it mostly rests on unproven claims and mechanisms. And in many cases, one is merely speculating that this or that could happen, without even offering a detailed causal chain of events that would rigorously convince the reader that this is an actual possibility.

Note what Saint-Paul means by “detailed causal chain of events.” He means microfoundations. But he is not concerned with whether these microfoundations actually resemble real world mechanics, only whether they are a “possibility.” To him, the mere validity of an economic argument means that it has been ‘proven,’ regardless of its soundness. In other words: economists shouldn’t be approximately right, but precisely wrong.

Saint-Paul concludes by rejecting the idea that financial crises can be modeled and foreseen:

This presumption may be proven wrong, but to my knowledge proponents of alternative approaches have not yet succeeded in offering us an operational framework with a stronger predictive power.

It has indeed been proven wrong, as alternative models do exist.

I hope – and actually believe – that most economists don’t think the crisis is irrelevant for their discipline. I’m sure few would endorse the caricature of a view presented here by Saint-Paul. Nevertheless, it is common for economists to suggest that the crisis was unforeseeable: a rare event that cannot be modeled because the economy is too ‘complex.’ This must be combated. Financial crises are actually (unfortunately) relatively frequent occurrences with clear, discernible patterns drawing them together. To paraphrase Hyman Minsky: a macroeconomic model must necessarily be able to find itself in financial crisis, otherwise it is not a model of the real world.


Against Friedman: Why Assumptions Matter

I have previously discussed Milton Friedman’s infamous 1953 essay, ‘The Methodology of Positive Economics.’ The basic argument of Friedman’s essay is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.

The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ‘it’s just an abstraction, all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.

The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be inaccurate enough. Is this really possible in economics? I recall that, during an engineering class, my lecturer introduced us to the ‘perfect gas.’ He said it was unrealistic, but showed us that it gave results accurate to 3 or 4 decimal places. Is anyone aware of econometrics papers which offer this degree of certainty and accuracy? In my opinion, the fundamental lack of accuracy inherent in social science shows that economists should be more concerned about what is actually going on inside their theories, since they are less liable to spot mistakes through pure prediction. Even if we are willing to tolerate a higher margin of error in economics, results are always contested and you can find papers claiming each issue either way.

The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions, and Friedman’s methodology has no way of dealing with this.

There are two obvious examples of this in economics. The first is the DSGE models used by central banks and economists during the ‘Great Moderation,’ which predicted the stable behaviour exhibited by the economy. However, Steve Keen’s Minsky Model also exhibits relative stability for a period, before being followed by a crash. Before the crash took place, there would have been no way of knowing which model was correct, except by looking at internal mechanics.

Another example is the Efficient Market Hypothesis. This predicts that it is hard to ‘beat the market’ – a prediction that, due to its obvious truth, partially explains the theory’s staying power. However, other theories also predict that the market will be hard to beat, either for different reasons or a combination of reasons, including some similar to those in the EMH. Again, we must do something that is anathema to Friedman: look at what is going on under the bonnet to understand which theory is correct.

The third problem is the one I initially homed in on: the vagueness of Friedman’s definition of ‘assumptions,’ and how this compares to those used in science. This found its best elucidation with the philosopher Alan Musgrave. Musgrave argued that assumptions have clear – if unspoken – definitions within science. There are negligibility assumptions, which eliminate one or more known variables (a closed economy is a good example, because it eliminates imports/exports and capital flows). There are domain assumptions, for which the theory is only true as long as the assumption holds (oligopoly theory is only true for oligopolies).

There are then heuristic assumptions, which can be something of a ‘fudge;’ a counterfactual model of the system (firms equating MC to MR is a good example of this). However, these are often used for pedagogical purposes and dropped before too long. Insofar as they remain, they require rigorous empirical testing, which I have not seen for the MC=MR explanation of firms. Furthermore, heuristic assumptions are only used if internal mechanics cannot be identified or modeled. In the case of firms, we do know how most firms price, and it is easy to model.

The fourth problem is related to the above: Friedman misunderstands the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other. The system is always under ongoing investigation, because we always want to know what is going on under the bonnet. Whatever the efficacy of their predictions, theories are only as good as their assumptions, and relaxing an assumption is always a positive step.

Hence, the ‘it still behaves as if it matches our theories’ mentality of economists can easily be shown to be quite absurd, for example:

Consider the following theory’s superb record for prediction about when water will freeze or boil. The theory postulates that water behaves as if there were a water devil who gets angry at 32 degrees and 212 degrees Fahrenheit and alters the chemical state accordingly to ice or to steam. In a superficial sense, the water-devil theory is successful for the immediate problem at hand. But the molecular insight that water is comprised of two molecules of hydrogen and one molecule of oxygen not only led to predictive success, but also led to “better problems” (i.e., the growth of modern chemistry).

If economists want to offer lucid explanations of the economy, they are heading down the wrong path (in fact this is something employers have complained about with economics graduates: lost in theory, little to no practical knowledge).

The fifth problem is one that is specific to social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know if the model applies or not. Just because a model has proven useful in one situation or location, it doesn’t guarantee that it will be useful elsewhere, as institutional differences might render it obsolete.

The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge – data entry, lawyers and financial advisors, for example – is nullified. There is actually good reason to believe a frictionless world such as the one at the core of neoclassicism renders the role of many firms and entrepreneurs obsolete. Hence, we must be careful about the possibility of certain assumptions invalidating the area we are studying.

In my opinion, Friedman’s essay is incoherent even on its own terms. He does not define the word ‘assumption,’ nor does he define the word ‘prediction.’ The incoherence of the essay can be seen in Friedman’s own examples of marginalist theories of the firm. Friedman uses his newfound, supposedly evidence-driven methodology as grounds for rejecting early evidence against these theories. He is able to do this because he has not defined ‘prediction,’ and so can use it in whatever way suits his preordained conclusions. But Friedman does not even offer any testable predictions for marginalist theories of the firm. In fact, he doesn’t offer any testable predictions at all.

Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive to both internal critiques of their theories, and their testable predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark. Obviously, I believe this is the case with large areas of economics, but discussing that is beyond the scope of this post. Ultimately, economists have to change their stance on assumptions if heterodox schools have any chance of persuading them.


Economists Versus Physics

I’m not sure what it is about economics that makes both its adherents and its detractors feel the need to make constant analogies to other sciences, particularly physics, to try to justify their preferred approach. Unfortunately, this problem isn’t just a blogosphere phenomenon; it appears in every area of the field, from blogs to articles to widely read economics textbooks.

For example, not too infrequently I will see a comment on heterodox work along the lines of “Newton’s theories were debunked by Einstein but they are still taught!!!!” Being untrained in physics (past high school) myself, I am grateful to have commenters who know their stuff, and can sweep aside such silly statements. In the case of this particular argument, the fact is that when studying everyday objects, the difference between Newton’s laws, quantum mechanics and general relativity is so demonstrably, empirically tiny that they effectively give the same results.

So even though quantum mechanics teaches us that in order to measure the position of a particle you must change its momentum, and that in order to measure its momentum you must change its position, the size of these ‘changes’ on everyday objects is practically immeasurable. Similarly, even though relativity teaches us that the relative speed of objects is ‘constrained’ by the universal constant, the effect on everyday velocities is too small to matter. Economists are simply unable to claim anything close to this level of precision or empirical corroboration, and perhaps they never will be, due to the fact that they cannot engage in controlled experiments.
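To give a sense of just how tiny these corrections are at everyday scales, here is a back-of-the-envelope sketch of the relativistic factor for a moving car; it is a standard textbook calculation, not anything specific to the texts discussed here:

```python
from math import sqrt

c = 299_792_458                      # speed of light, m/s
v = 30.0                             # a car doing roughly 108 km/h, in m/s

gamma = 1 / sqrt(1 - (v / c) ** 2)   # relativistic time dilation factor
print(gamma - 1)                     # about 5e-15: far too small to ever notice
```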

Another, more worrying, example is Greg Mankiw’s widely read Macroeconomics textbook (7th ed, p. 395), in its discussion of estimates of the NAIRU:

If you ask an astronomer how far a particular star is from our sun, he’ll give you a number, but it won’t be accurate. Man’s ability to measure astronomical distances is still limited. An astronomer might well take better measurements and conclude that a star is really twice or half as far away as he previously thought.

Mankiw’s suggestion that astronomers have so little idea of what they are doing is misleading. We are talking about people who can infer the existence of a planet orbiting a distant star from the (relatively) tiny ‘wobble’ of said star. Astronomers have many different methods for calculating stellar distances – parallax, redshift, luminosity – and these methods can be used and cross-checked against one another. As you will see from the parallax link, there are also in-built, estimable errors in their calculations, which help stop them straying too far off the mark.
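To illustrate how direct the parallax method is, here is a one-line sketch of the standard conversion; the figure for Proxima Centauri is approximate:

```python
# Distance from parallax: d (parsecs) = 1 / p (arcseconds), the first rung of the
# distance ladder. Proxima Centauri's parallax is roughly 0.77 arcseconds.
parallax_arcsec = 0.77
distance_pc = 1 / parallax_arcsec
print(f"{distance_pc:.2f} parsecs")  # about 1.3 pc, i.e. roughly 4.2 light years
```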

While it is true that at large distances luminosity can be hard to interpret (a star may be close and dim, or bright and far away), Mankiw is mostly wrong. Astronomers still make many, largely accurate predictions, while economists’ predictions are at best contested and uncertain, or at worst simply incorrect. The very worst models are unfalsifiable, such as the NAIRU Mankiw is defending, which seems to move around so much that it is meaningless.

Another example is a classic case of economists misunderstanding the use of assumptions. This is from Jehle and Reny’s textbook, Advanced Microeconomics (3rd ed, preface XVI):

In the physical world, there is ‘no such thing’ as a frictionless plane or a perfect vacuum.

Perhaps not, but all these assumptions do is eliminate a known mathematical variable. This is not the same as positing an imaginary substance (utility) just so that mathematics can be used; or assuming that decision makers obey axioms which have been shown to be false time and time again; or basing everything on the impossible fantasy of perfect competition, which the authors go on to do all at once. These assumptions cannot be said to eliminate a variable or collection of variables; neither can it be said that, despite their unrealism, they display a remarkable consistency with the available evidence.

Even if we accept the premise that these assumptions are merely ‘simplifying,’ the fact remains that engineers or physicists would not be sent into the real world without friction in their models, because such models would be useless – in fact, in my own experience, friction is introduced in the first semester. Jehle and Reny do go on to suggest that one should always adopt a critical eye toward their theories, but this is simply not enough for a textbook that calls itself ‘advanced.’ At this level such blatant unrealism should be a thing of the past, or just never have been used at all.

Economics is a young science, so it is natural that, in search of sure footing, people draw from the well respected, well grounded discipline of physics. However, not only do such analogies typically demonstrate a largely superficial understanding of physics, but since the subjects are so different, the analogies are often stretched so far that they fail. Analogies to other sciences can be useful to check one’s logic, or as illuminating parables. However, misguided appeals to and applications of other models are not sufficient to justify economists’ own approach, which, like that of other sciences (!), should stand or fall on its own merits.

