Posts Tagged Methodology
Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.
To illustrate this, consider Milton Friedman’s famous exposition of the ‘as if’ argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game with such equations, even though this wouldn’t strictly describe the mechanics of the game.
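To see what ‘such equations’ might look like, here is a minimal sketch in Python of the standard ‘ghost ball’ aiming rule for potting a ball – all positions and the ball radius are hypothetical numbers of my own choosing, purely for illustration:

```python
import math

# The 'ghost ball' method: to pot the object ball, the cue ball must strike
# the point two ball-radii behind the object ball, along the line running
# from the pocket through the object ball.
# Positions are (x, y) in metres; the radius r is roughly snooker-ball sized.

def ghost_ball(object_ball, pocket, r=0.026):
    ox, oy = object_ball
    px, py = pocket
    d = math.hypot(px - ox, py - oy)
    # unit vector from the pocket toward the object ball, extended 2r past it
    return (ox + 2 * r * (ox - px) / d, oy + 2 * r * (oy - py) / d)

gx, gy = ghost_ball(object_ball=(1.0, 0.5), pocket=(1.78, 0.89))
print(f"aim the cue ball at ({gx:.3f}, {gy:.3f})")
```

A player who pots balls reliably behaves ‘as if’ they solved this geometry, which is Friedman’s point; whether that licenses ignoring how they actually decide is the question at issue below.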
There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what make the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for the social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness, making perfectly fungible trade-offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what makes the study of economics worthwhile.
I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to maximise their expenditure after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from an economic theory. The aforementioned example completely invalidates Ricardian Equivalence. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.
If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Ones such as how the banking system works; how firms price; behavioural quirks; the fact that marginal products cannot be well-defined; the fact that capital can move across borders, and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)
I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is done only one at a time, preserving the other assumptions. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.
I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to emergent properties of individually optimal behaviour. In any case, economists should be wary of the ‘as if’ argument even without accepting my crazy heterodox position.
The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Insofar as we might rest on a counterfactual, it is only appropriate when we don’t know how the system actually works. Once we do know how the system works – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.
I have previously discussed Milton Friedman’s infamous 1953 essay, ‘The Methodology of Positive Economics.’ The basic argument of Friedman’s essay is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross-check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.
The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ‘it’s just an abstraction; all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.
The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be inaccurate enough. Is this really possible in economics? I recall that, during an engineering class, my lecturer introduced us to the ‘perfect gas.’ He said it was unrealistic but showed us that it gave results accurate to 3 or 4 decimal places. Is anyone aware of econometrics papers which offer this degree of certainty and accuracy? In my opinion, the fundamental lack of accuracy inherent in social science shows that economists should be more concerned about what is actually going on inside their theories, since they are less able to spot mistakes through pure prediction. Even if we are willing to tolerate a higher margin of error in economics, results are always contested, and you can find papers arguing each side of almost any issue.
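To give a feel for the kind of accuracy my lecturer meant, here is a short Python sketch comparing the ‘perfect gas’ law to the more realistic van der Waals equation for nitrogen at standard conditions. The van der Waals constants for nitrogen are assumed textbook values, not figures from the original discussion:

```python
# How far does the 'perfect gas' idealisation stray from a more realistic
# model (van der Waals) for nitrogen at standard conditions?

R = 8.314          # J/(mol*K), universal gas constant
n = 1.0            # mol
T = 273.15         # K
V = 0.0224         # m^3, roughly the volume of one mole of gas at STP

# Ideal gas: P V = n R T
p_ideal = n * R * T / V

# Van der Waals: (P + a n^2 / V^2)(V - n b) = n R T
a, b = 0.1370, 3.87e-5   # assumed N2 constants (Pa*m^6/mol^2, m^3/mol)
p_vdw = n * R * T / (V - n * b) - a * n ** 2 / V ** 2

rel_error = abs(p_ideal - p_vdw) / p_vdw
print(f"ideal: {p_ideal:.0f} Pa, van der Waals: {p_vdw:.0f} Pa, "
      f"relative error: {rel_error:.2%}")
```

The discrepancy comes out at roughly a tenth of a percent: the ‘unrealistic’ assumption is demonstrably negligible for the problem at hand, which is exactly the standard econometrics cannot meet.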
The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions, and Friedman’s methodology has no way of dealing with this.
There are two obvious examples of this in economics. The first is the DSGE models used by central banks and economists during the ‘Great Moderation,’ which predicted the stable behaviour exhibited by the economy. However, Steve Keen’s Minsky Model also exhibits relative stability for a period, before being followed by a crash. Before the crash took place, there would have been no way of knowing which model was correct, except by looking at internal mechanics.
Another example is the Efficient Market Hypothesis. This predicts that it is hard to ‘beat the market’ – a prediction that, due to its obvious truth, partially explains the theory’s staying power. However, other theories also predict that the market will be hard to beat, either for different reasons or a combination of reasons, including some similar to those in the EMH. Again, we must do something that is anathema to Friedman: look at what is going on under the bonnet to understand which theory is correct.
The third problem is the one I initially homed in on: the vagueness of Friedman’s definition of ‘assumptions,’ and how this compares to those used in science. This found its best elucidation with the philosopher Alan Musgrave. Musgrave argued that assumptions have clear – if unspoken – definitions within science. There are negligibility assumptions, which eliminate one or more known variables (a closed economy is a good example, because it eliminates imports/exports and capital flows). There are domain assumptions, for which the theory is only true as long as the assumption holds (oligopoly theory is only true for oligopolies).
There are then heuristic assumptions, which can be something of a ‘fudge’: a counterfactual model of the system (firms equating MC to MR is a good example of this). However, these are often used for pedagogical purposes and dropped before too long. Insofar as they remain, they require rigorous empirical testing, which I have not seen for the MC=MR explanation of firms. Furthermore, heuristic assumptions are only used if internal mechanics cannot be identified or modelled. In the case of firms, we do know how most firms price, and it is easy to model.
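For readers unfamiliar with the MC=MR heuristic, here is a toy Python illustration using a hypothetical linear demand curve and quadratic cost function (none of these numbers come from any real firm – they are invented so the marginalist condition can be checked against brute force):

```python
# Toy firm: demand p = 20 - q, costs C(q) = 10 + 2q + 0.5 q^2.
# The marginalist claim is that profit peaks where MC = MR.

def profit(q):
    revenue = (20 - q) * q              # p(q) * q
    cost = 10 + 2 * q + 0.5 * q ** 2
    return revenue - cost

# Marginalist solution: MR = 20 - 2q and MC = 2 + q, so MC = MR at q* = 6
q_marginalist = 6.0

# Brute-force check: search a fine grid of output levels
q_search = max((q / 100 for q in range(0, 2001)), key=profit)

print(q_marginalist, q_search)   # both ~6.0
```

The point of contention in the text is not whether this arithmetic works – it does, by construction – but whether real firms’ pricing actually resembles it, which is an empirical question.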
The fourth problem is related to the above: Friedman misunderstands the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other. The system is always under ongoing investigation, because we always want to know what is going on under the bonnet. Whatever the efficacy of their predictions, theories are only as good as their assumptions, and relaxing an assumption is always a positive step.
Consider the following theory’s superb record for predicting when water will freeze or boil. The theory postulates that water behaves as if there were a water devil who gets angry at 32 degrees and 212 degrees Fahrenheit and alters the chemical state accordingly to ice or to steam. In a superficial sense, the water-devil theory is successful for the immediate problem at hand. But the molecular insight that water consists of two atoms of hydrogen and one atom of oxygen not only led to predictive success, but also led to “better problems” (i.e., the growth of modern chemistry).
Insofar as economists want to offer lucid explanations of the economy, the ‘black box’ approach is heading down the wrong path (in fact, this is something employers have complained about with economics graduates: lost in theory, with little to no practical knowledge).
The fifth problem is one that is specific to the social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know if the model applies or not. Just because a model has proven useful in one situation or location, it doesn’t guarantee that it will be useful elsewhere, as institutional differences might render it obsolete.
The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge – data entry, lawyers and financial advisors, for example – is nullified. There is actually good reason to believe a frictionless world such as the one at the core of neoclassicism renders the role of many firms and entrepreneurs obsolete. Hence, we must be careful about the possibility of certain assumptions invalidating the area we are studying.
In my opinion, Friedman’s essay is incoherent even on its own terms. He does not define the word ‘assumption,’ and nor does he define the word ‘prediction.’ The incoherence of the essay can be seen in Friedman’s own examples of marginalist theories of the firm. Friedman uses his newfound, supposedly evidence-driven methodology as grounds for rejecting early evidence against these theories. He is able to do this because he has not defined ‘prediction,’ and so can use it in whatever way suits his preordained conclusions. But Friedman does not even offer any testable predictions for marginalist theories of the firm. In fact, he doesn’t offer any testable predictions at all.
Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive to both internal critiques of their theories, and their testable predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark. Obviously, I believe this is the case with large areas of economics, but discussing that is beyond the scope of this post. Ultimately, economists have to change their stance on assumptions if heterodox schools have any chance of persuading them.
I’m not sure what it is about economics that makes both its adherents and its detractors feel the need to make constant analogies to other sciences, particularly physics, to try to justify their preferred approach. Unfortunately, this isn’t just a blogosphere phenomenon; the type of throwaway suggestion you get in internet debates. This problem appears in every area of the field, from blogs to articles to widely read economics textbooks.
Not too infrequently I will see a comment on heterodox work along the lines of “Newton’s theories were debunked by Einstein but they are still taught!!!!” Being untrained in physics (past high school) myself, I am grateful to have commenters who know their stuff, and can sweep aside such silly statements. As far as this particular argument – which is actually quite common – goes, the fact is that when studying everyday objects, the difference between Newton’s laws, quantum mechanics and general relativity is so demonstrably, empirically tiny that they effectively give the same results.
Even though quantum mechanics teaches us that in order to measure the position of a particle you must change its momentum, and that in order to measure its momentum you must change its position, the size of these ‘changes’ on everyday objects is practically immeasurable. Similarly, even though relativity teaches us that the relative speed of objects is ‘constrained’ by the universal constant, the effect on everyday velocities is too small to matter. Economists are simply unable to claim anything close to this level of precision or empirical corroboration, and perhaps they never will be, due to the fact that they cannot engage in controlled experiments.
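The claim that relativistic effects are negligible at everyday velocities is easy to check directly. A short Python sketch, using motorway speed as an (arbitrarily chosen) ‘everyday velocity’:

```python
import math

# How large are relativistic corrections at everyday speeds?
# Compute the Lorentz factor gamma for a car at ~30 m/s.

c = 299_792_458.0    # speed of light in a vacuum, m/s
v = 30.0             # an everyday velocity, m/s

gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)
print(f"gamma - 1 = {gamma - 1:.2e}")   # around 5e-15: utterly negligible
```

A correction fourteen orders of magnitude below unity is the sense in which Newton and Einstein ‘give the same results’ for everyday objects, and it is this kind of demonstrable, empirical tininess that economic ‘deviations’ cannot claim.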
If you ask an astronomer how far a particular star is from our sun, he’ll give you a number, but it won’t be accurate. Man’s ability to measure astronomical distances is still limited. An astronomer might well take better measurements and conclude that a star is really twice or half as far away as he previously thought.
Mankiw’s suggestion that astronomers have this little clue what they are doing is misleading. We are talking about people who can calculate the existence of a planet close to a distant star, based on the (relatively) tiny ‘wobble’ of said star. Astronomers have many different methods for calculating stellar distances – parallax, redshift, luminosity – and these methods can be used and cross-checked against one another. As you will see from the parallax link, there are also in-built, estimable errors in their calculations, which help prevent them from straying too far off the mark.
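The parallax method, for instance, is simple enough to sketch in a couple of lines: a star’s distance in parsecs is the reciprocal of its parallax angle in arcseconds. The parallax value used below for Proxima Centauri (~0.768 arcseconds) is an assumed textbook figure, included only to illustrate the calculation:

```python
# Parallax in one line: distance (parsecs) = 1 / parallax (arcseconds).
# A parsec is defined as the distance at which the parallax is one arcsecond.

def parallax_distance_pc(parallax_arcsec):
    return 1.0 / parallax_arcsec

d = parallax_distance_pc(0.768)   # assumed parallax of Proxima Centauri
print(f"Proxima Centauri: ~{d:.2f} parsecs")   # ~1.30 pc
```

The estimable error Mankiw ignores falls out of the same relationship: a known uncertainty in the measured angle translates directly into a known uncertainty in the distance.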
While it is true that at large distances luminosity can be hard to interpret (a star may be close and dim, or bright and far away), Mankiw is mostly wrong. Astronomers still make many, largely accurate predictions, while economists’ predictions are at best contested and uncertain, or at worst incorrect. The very worst models are unfalsifiable, such as the NAIRU Mankiw is defending, which seems to move around so much that it is meaningless.
In the physical world, there is ‘no such thing’ as a frictionless plane or a perfect vacuum.
Perhaps not, but all these assumptions do is eliminate a known mathematical variable. This is not the same as positing an imaginary substance (utility) just so that mathematics can be used; or assuming that decision makers obey axioms which have been shown to be false time and time again; or basing everything on the impossible fantasy of perfect competition – all of which the authors go on to do at once. These assumptions cannot be said to eliminate a variable or collection of variables; nor can it be said that, despite their unrealism, they display a remarkable consistency with the available evidence.
Even if we accept the premise that these assumptions are merely ‘simplifying,’ the fact remains that engineers or physicists would not be sent into the real world without friction in their models, because such models would be useless – in fact, in my own experience, friction is introduced in the first semester. Jehle and Reny do go on to suggest that one should always adopt a critical eye toward one’s theories, but this is simply not enough for a textbook that calls itself ‘advanced.’ At this level such blatant unrealism should be a thing of the past, or should never have been used at all.
Economics is a young science, so it is natural that, in search of sure footing, people draw from the well-respected, well-grounded discipline of physics. However, not only do such analogies typically demonstrate a largely superficial understanding of physics, but since the subjects are different, analogies are often stretched so far that they fail. Analogies to other sciences can be useful to check one’s logic, or as illuminating parables. However, misguided appeals to and applications of other models are not sufficient to justify economists’ own approach, which, like other sciences (!), should stand or fall on its own merits.
Recently, I’ve been reading a lot from the school of institutional economics. Consequently, I have noticed another problem with the way economists approach theory and evidence: the lack of institutional considerations. This can blind economists to the fact that they may be studying entirely different phenomena due to differences between countries, periods of history, companies, genders, cultures and much more.
The standard procedure of economists is to derive a model rigorously, based on a set of assumptions or axioms. Economists, unlike physicists, cannot perform controlled experiments in order to verify these models. Instead, empirical corroboration entails the use of econometrics to verify predictions. Economists must rely on collections of data, sometimes from disparate sources, and try to ‘correct’ these collections of data for said disparities. Economists then perform regressions in an attempt to isolate the relationship between two variables, and cautiously interpret the results. As explained more fully in the paragraphs below, the problem with this approach is that institutional differences could mean that some of the data collections are simply irrelevant, regardless of whether they agree with the predictions of the theory in question.
Problems with this Methodology
It appears that underlying this methodology used by economists to evaluate and analyse collections of data is a search for unifying principles that can be applied to all economies across space and time. The economic models of both neoclassical and heterodox schools reflect a discipline aiming to isolate the true mechanics of the economy and build a model around them. The mentality often seems to be that, if only we could isolate the true mechanics of the economy, we’d be able to understand the economy and make informed policy decisions based on our ideal framework.
I expect many economists would probably agree that the institutional, legal, and cultural contexts are not the same for all economies. However, many economic models and the economist’s rhetoric reflect a discipline looking to uncover an equivalent of physical laws. Indeed, Larry Summers went so far as to claim that “the laws of economics are like the laws of engineering. One set of laws works everywhere.”
Even though most rational minds would disagree with Larry Summers, I find a tendency among economists to view the institutional, legal, and cultural contexts as ‘constraints’ against which the ‘underlying mechanics’ of the economy are continually pushing. However, there is good reason to believe that the ‘real’ mechanics of the economy are determined by the context in which the economy operates, rather than said context merely influencing the economy exogenously. Here are some historic and contemporary examples to illustrate my point.
Industrialisation: the US versus England
English firms were fairly small during the industrial revolution. For reasons beyond the scope of this blog post, firms typically took it upon themselves to educate and train new employees on the job. Such a system diminishes the need for state education, at least from a labour market standpoint, and it wasn’t until the late 19th century that public education was finally established, by which time England was industrialised and the old system was becoming obsolete. In contrast, the USA followed a different path. During the growth period of the US, firms generally emphasised large production lines, and had a more ‘flexible’ approach to employment. Such an approach required that firms could rely on the competence of the average worker, and over the course of the US industrial revolution state education increased substantially, reaching something approximating a fully public system at around the same time as England, even though England was much later in its development phase. Both strategies successfully industrialised their countries; both presented different needs from a policy perspective. But using a single model to inform policy in these two countries would clearly be a mistake.
A similar contrast can be seen with Denmark and Japan. Historically, Japan has had a policy of lifelong employment, which means a majority of workers are, well, employed for life (the model may be waning due to the effects of the lost decade, but it was robust during Japan’s impressive industrialisation period). What would be the effect of restrictions on hiring and firing with such a model? It’s highly unlikely there would be much effect; in fact, the model itself is partly based on such regulations. But what if similar restrictions were applied to Denmark’s dynamic ‘flexicurity’ model, in which hiring and firing is incredibly easy but there are strong social safety nets? I expect it would cause a lot of problems for employers and employees alike, as Danish firms’ strategies are built around being able to gain and shed workers quickly. On top of that, the safety net makes workers more willing to accept such treatment, as well as having obvious humanitarian attractions.
Again, though these two models are different – almost diametrically opposed, in fact – both have coped with recessions relatively well (in terms of unemployment). The countries simply have different institutions that operate under different mechanics, and no model could capture both (feel free to read that as a challenge). Despite this, Japan has recently enacted some ‘neoliberal’ reforms, perhaps based on the mistaken belief that they need to ‘free up’ the ‘underlying’ mechanics of the economy. Time will tell whether or not this was a smart move.
The Scandinavian Ideal
Apart from labour markets, there is another good example of interdependent institutions, laws and culture: the oft-cited Sweden. Both free marketeers and leftists like to hold Sweden up as an example of their ideas in action. “Look at the vast redistribution, unions and public goods!” is the cry of the leftists. Meanwhile, the rightists will assert that beneath such institutions lies a relatively light-touch, ‘neoliberal’ regulatory structure. In many ways both are right; but in many more ways they are both wrong. Both approaches take the economy of Sweden and suggest that, due to X, Y or Z policy, it is the way to go. But neither appreciates how the institutions they each identify fit together.
Sweden is historically a high-trust society, and as such regulation is relatively simple. Even contract law is far less complex than what you will find in the UK or the States. Many businesses do something akin to ‘self-regulation,’ reporting their own data to government agencies. Similarly, while it is questionable whether the generous welfare state is a cause of the trust, it is not unreasonable to suggest that the two are complementary. Furthermore, as in the case of Denmark, generous safety nets go well with light regulation in terms of dynamism. The approach has serious attractions, but only if the two institutions are combined; furthermore, it may well be the case that trust is a necessary condition for both of these institutions in the first place. Once more it is clear that certain historical circumstances have given rise to a specific set of ‘optimal’ policies that could not be applied elsewhere.
So if we take data points from such disparate countries, is it really meaningful to try and ‘adjust’ them for this type of difference? What we are studying are economies with very different underlying mechanics. To aggregate over them and take the average result is to reduce the data to meaninglessness. What is needed is a historical, institutional perspective that understands how different aspects of the economy fit together, and how the economy fits into the background of politics, history, culture (not to mention the environment – for example, on an island country, even a corner shop can be a monopoly).
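The point about aggregation can be made concrete with a toy simulation. In the Python sketch below, two hypothetical ‘countries’ relate the same two variables with opposite signs, standing in for opposed institutional regimes; all the numbers are invented purely for illustration. Pooling the data and running a single regression hides both relationships:

```python
import random
random.seed(0)

# Ordinary least squares slope from first principles: cov(x, y) / var(x)
def ols_slope(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Country A: the variables move together (true slope +1)
xa = [random.uniform(0, 10) for _ in range(200)]
ya = [x + random.gauss(0, 1) for x in xa]

# Country B: different institutions reverse the relationship (true slope -1)
xb = [random.uniform(0, 10) for _ in range(200)]
yb = [10 - x + random.gauss(0, 1) for x in xb]

slope_a = ols_slope(xa, ya)
slope_b = ols_slope(xb, yb)
slope_pooled = ols_slope(xa + xb, ya + yb)
print(slope_a, slope_b, slope_pooled)   # roughly +1, -1 and ~0 respectively
```

The pooled estimate of roughly zero is not an ‘average truth’ about the two economies; it is an artefact of treating two different mechanisms as one.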
What is best for an economy will depend on initial conditions and current institutions. These institutions are not ‘artificial’ impositions on the underlying economy; they are inevitable political decisions which have been born out of specific historical context, and hopefully fit the culture of the nation in question. It would be at best costly and destructive, and at worst basically impossible, to uproot these institutions in search of some ideal. As such, any discussion of economic policy must proceed based on acknowledgment of the mechanics created by different institutions.
Much of what I’m saying isn’t new at all. In fairness, most empirical economic papers are careful about announcing they have found surefire causal links. And there might be new techniques in econometrics that attempt to deal with the problems in the methodology I outlined above. Furthermore, I am not suggesting economists are not at all concerned with institutions or history: development economists and Industrial Organisation economists speak of them frequently. Nevertheless, I believe the institutional considerations I described above create a clear methodological problem for a large amount of economic theory, particularly macro.
This is because institutional considerations are a good reason that social scientists should be even more concerned about assumptions and real-world mechanics than physical scientists, and therefore that economists should be highly concerned with the historical, institutional and legal context of the economies they are studying. Such considerations are another nail in the coffin of Milton Friedman’s methodology, which posits that abstract models based on “unrealistic” assumptions are the appropriate approach to economic theory. Such an approach cannot even begin to comprehend institutional differences, and as such, applying any one theory – or group of theories – to every economy is bound to cause problems.
A while back I wrote a short post on why I reject Austrian theories of the business cycle (ABCT). Austrians were not impressed. I still retain similar objections, though over time I have realised there are more reasonable adherents of the Austrian school (though being reasonable basically forces them to conclude demand-side recessions are a possibility). This post will hopefully be more comprehensive than my previous one, but again is only based on a few major observations/objections, and will echo some of my previous comments.
I have said a few times that I see Austrian economics as part of the marginalist tradition (as did Mises). Since I am critical of this tradition, a part of my objection is the application of the same criticisms to Austrians: the idea that ‘factors of production’ are rewarded according to their productivities is subject to all sorts of critiques; similarly, the Austrian treatment of capital is sometimes vulnerable to the problems highlighted in the Capital Controversies. However, since I have already posted on this, and will likely do so again in the future, I will avoid this issue and instead criticise the Austrian school directly.
This post will be two-pronged: first, I will explore the Austrian methodology in general – specifically, praxeology. Second, I will ask whether the theoretical implications of Austrian economics – regardless of praxeology – can be sustained.
Praxeology is the notion that economic theory can be built up a priori from the action axiom, or, as Mises stated it:
Human action is purposeful behaviour. Or we may say: Action is will put into operation and transformed into an agency, is aiming at ends and goals, is the ego’s meaningful response to stimuli and to the conditions of its environment, is a person’s conscious adjustment to the state of the universe that determines his life. Such paraphrases may clarify the definition given and prevent possible misinterpretations. But the definition itself is adequate and does not need complement or commentary.
It is worth stating that Hayek, and other Austrians, probably rejected this, at least as a rigid rule. So the critique applies mostly to Miseans. I have two points to make about it:
First, I think the axiom itself is flawed. While it is fair to say human action can be a purposeful response to stimuli in order to obtain certain ends, that is not the same as saying that this is always the case. Action can be purposeful; it can also be knee-jerk, confused, accidental, arbitrary or even meaningless. Sometimes the action itself is the end. This poses a problem for the ‘try to disprove the action axiom’ test, which asserts that by trying to disprove the axiom you validate it through your purposeful behaviour (yes, it is an intellectual ‘I know you are but what am I?’). But all this does is show that action can be purposeful. By trying to disprove it, I am acting purposefully, but this doesn’t mean all of my actions are purposeful.
Second, even if we accept the action axiom, we run into problems. It’s simply not at all clear how to get from a tautological statement to elaborate theories of the central bank. Blogger ‘Lord Keynes’ has discussed this – it’s clear that Mises had to introduce other assumptions and propositions to build his theory. Mises even admitted this directly with the disutility of labour:
The disutility of labor is not of a categorical and aprioristic character. We can without contradiction think of a world in which labor does not cause uneasiness, and we can depict the state of affairs prevailing in such a world.
Ultimately, I see no need to invoke praxeology when talking about theory. We can discuss the logic of whether low interest rates cause bubbles, or look at the evidence. We can examine other propositions of Austrian theory. But why do we need the human action axiom? The substantive theory is where we must turn to determine whether or not Austrians are correct.
The Natural Rate of Interest
Hayek’s original theory of the business cycle, first fully expounded in Prices and Production, rested on an equilibrium between saving and borrowing different goods.* The market would set the equilibrium rate at which different goods were borrowed, meaning that savings were matched to investment and there was no excess credit expansion. However, Piero Sraffa – in what is widely regarded as a devastating review of the book – observed that in a monetary economy, the money rate of interest would be an aggregate of all the ‘natural rates’ between different goods. Hence there was no reason to believe it would correspond to an equilibrium between every, or perhaps even any, particular good.
This issue comes up again and again. The overwhelming majority of Austrians appear to have conceded Sraffa’s criticism that there is no natural rate of interest; however, many seem to think it doesn’t matter – and this is not unique to Austrians. For me, the natural rate of interest does matter: if there is no ‘natural’ or ‘correct’ rate of interest, how do we measure a deviation from the ideal?
It is true that fluctuations in the base rate do affect house prices – being directly linked to mortgages as they are – but nevertheless, Austrian theory doesn’t seem to deal well with housing bubbles. This is because such bubbles generally involve people continually buying and selling the same houses to each other, and hence involve little capital misallocation; in fact, there is a shortage of housing in many developed countries, while existing houses remain highly priced. This doesn’t make sense under an Austrian framework, which would require overinvestment in houses and hence liquidation of existing surplus stocks.
For me, interest rates are nothing special. They represent a cost for businesses, to be factored into their decision-making along with other costs. As Joseph Stiglitz asks: is it a problem when businesses’ supply costs are too low? Does it lead them to expand too much? It seems to me that when banks are lending money for the wrong things, it’s a regulatory rather than monetary problem (insofar as it is a monetary problem, I would say it’s caused by high interest rates, but that’s for another time).
Furthermore, this ‘naturalistic’ problem with Austrianism isn’t limited to the rate of interest. There always seems to be some supposedly neutral laissez-faire, baseline state, which is never defined. Surely limited liability laws affect the decisions of businesses? What about the practical problems with property rights and contract law: the limited resources of the legal system (and hence the dismissal of small cases); implicit contracts; rental laws; car crash liabilities; insurance claims; and much more? All of these will contain somewhat arbitrary decisions, and all will impact the workings of a capitalist economy, possibly leading to capital misallocation. Overall, it is difficult to find a solid foundation for the supposedly ‘natural’ baseline on which Austrian theory seems to be built.
Overall, I remain unconvinced. I expect Ludwig Lachmann and similar economists are well worth reading, particularly for their stances on expectations and entrepreneurial strategies. But nothing I’ve seen from ‘mainstream’ Austrians has yet convinced me that it is worth delving into either 1,000-page tomes by Mises or Rothbard, or the practically unreadable (economic) works of Hayek, in order to further my understanding of their theories. There are just too many issues – conceptual, logical or evidential – with what I know so far.
But then, the internet is surely the place for Austrians to prove me wrong.
*It is worth noting that Austrians appear to rely on an exogenous money model with their talk of equilibrating savings and investment, and their idea that credit expansion results from central bank expansion. As I have documented, this is not how banking works. However, some Austrians have incorporated this insight, while others are against fractional reserve banking (FRB) altogether, so it’s not a problem for all of them.
Nick Rowe offers a summary of the Cambridge Capital Controversies that, though it is tongue in cheek and should not be taken too seriously, substantively leaves a lot to be desired. He states that the debate started because “some economists in Cambridge UK wanted to explain prices without talking about preferences.” This is false – the debate started because Joan Robinson and Piero Sraffa took issue with a production function that used an aggregate capital stock k, measured in £, with a well-defined marginal productivity. However, despite the faulty summary of the controversies, and to Rowe’s credit, some good discussion followed in the comments.
Sraffa built up an entire model just to critique neoclassical theory. It followed neoclassical logic, but replaced the popular measure of capital with a more consistent one: summing up the labour required to produce it, and the profit made from it. His model of capitalism started with simplistic assumptions, but increased in complexity. Within the confines of his own model, he showed several things: the distribution between wages and profits must be known before prices can be calculated; demand and supply are not an adequate explanation of prices; and the rate of interest can have non-linear effects on the nature of production. I cover this in more detail here.
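Sraffa’s first result – that distribution must be known before prices can be calculated – can be made concrete with a minimal numerical sketch. The input coefficients and labour values below are invented for illustration, not taken from Sraffa. In a two-industry system, the price equations p_i = (1 + r)·Σ_j a_ij·p_j + w·l_i (with good 2 as numeraire) cannot be solved for prices until the profit rate r is fixed:

```python
# Hedged sketch of a two-industry Sraffian price system; all numbers invented.
# a[i][j] = amount of good j used to produce one unit of good i;
# l[i] = direct labour per unit of output of industry i.
import numpy as np

A = np.array([[0.2, 0.3],
              [0.4, 0.1]])   # input coefficients (assumed)
l = np.array([1.0, 0.5])     # labour inputs (assumed)

def prices(r):
    """Solve for (p1, w) given the profit rate r, taking good 2 as numeraire (p2 = 1).

    The price equations p_i = (1 + r) * sum_j a_ij * p_j + w * l_i rearrange
    into the 2x2 linear system M @ [p1, w] = b below.
    """
    M = np.array([[1 - (1 + r) * A[0, 0], -l[0]],
                  [-(1 + r) * A[1, 0],    -l[1]]])
    b = np.array([(1 + r) * A[0, 1],
                  (1 + r) * A[1, 1] - 1])
    return np.linalg.solve(M, b)

p1_low, w_low = prices(0.0)    # p1 = 1.3125, w = 0.75
p1_high, w_high = prices(0.2)  # p1 ≈ 1.2326, w ≈ 0.5767
```

Same technology, different relative prices, purely because the distribution between wages and profits changed – which is exactly why demand and supply alone cannot pin prices down in this framework.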
Rowe’s primary criticism of Sraffa is that his model did not use preferences, a criticism also made by others. But eliminating preferences is a negligibility assumption: we ignore some element of the system under study, either in the hope of adding it later or because it is empirically negligible. As Matias Vernengo notes in the comments, Sraffa was deliberately trying to escape the subjective utility base of neoclassical economics in favour of the classical tradition of social and institutional norms, so he assumed preferences were given. This is just a ceteris paribus assumption, which economists usually love! In any case it turns out that preferences can be added to a Sraffian model, with many of the key insights still remaining. Indeed, Vienneau’s model (and, apparently, the work of Ian Steedman, with whom I am unfamiliar) invokes utility maximisation and comes to many of the same Sraffian conclusions about demand and supply being unjustified.
Rowe also criticises Sraffa’s approach because it puts production first, over the consumer sovereignty upon which neoclassical economics is built. But must preferences provide the explanation of economic decisions? It appears Rowe does not take seriously the ‘chicken and egg’ problem with neoclassical models – surely production must occur first, yet models such as Arrow-Debreu take prices as given for firms, before anything is made.
In a modern capitalist economy, it seems illogical to say that the demand for a particular good comes first, then the supply follows as firms passively try to accommodate it. If it were true, advertising wouldn’t exist, or would be incredibly limited. It is fair to say that, independently, people have a ‘preference’ (though I’d say instinct) for food, shelter, clothing, security and other creature comforts. However, demand for most goods and services beyond this is certainly generated by advertising, marketing and other exogenous factors – indeed, advertising and marketing constitute one of the two primary constraints on expansion experienced by real-world firms (the other is financing, which, incidentally, neoclassical models often assume away too, but I digress).
An alternative way to model human behaviour would be an institutional/social norm perspective: while people instinctively want to subsist, what exactly they choose to subsist on is in large part dependent on their surroundings. There is the example of tea consumption in Britain, which started as a luxury and took decades to filter down to the lower classes. Similarly, if I had been born in India, I would probably have more of a taste for spicy foods. It’s hard to deny these things are largely dependent on social surroundings, rather than individualistic consumer preferences. Similarly, Rowe’s focus on the time-preference explanation of the interest rate seems to ignore that this will be largely dependent on institutional factors such as the state of the economy.
From an individual perspective, perhaps Maslow’s hierarchy of needs is a useful way of understanding purchasing decisions: after people have obtained basic needs such as food and security, the things they buy are to do with identity and emotion. Don’t believe me? These concepts are exactly what firms use to try to expand their market base (for a longer treatment, see Adam Curtis’ documentary). If people don’t buy products because firms associate them with ‘self-actualisation’, then firms are systematically irrational to spend so much money doing exactly that.
Overall, I don’t think there are any cases in which we can evaluate individuals’ preferences outside a social and institutional context. Sraffa considers the economy as a whole, and leaves subsequent questions about consumers to be answered later – which they have been. Conversely, putting preferences first and having firms passively accommodate demand runs into several logical problems, and is not corroborated by what we know about both firms and people in the real world.
Chapter 8 of Steve Keen’s Debunking Economics channels a paper (it’s short, and worth reading) by the philosopher Alan Musgrave, which distinguishes between three types of assumption: negligibility, domain and heuristic.
According to Friedman’s 1953 essay, theories are significant when they “explain much by little,” and to this end “will be found to have assumptions that are wildly unrealistic…in general, the more significant the theory, the more unrealistic the assumptions.” By distinguishing between the different types of assumption, Musgrave shows how Friedman misunderstands the scientific method, and that his argument is only partially true, and only for one type: negligibility assumptions, which we will look at first.
Negligibility assumptions simply eliminate a specific aspect of a system – friction, for example – when it is not significant enough to have a discernible impact. Friedman is correct to argue that these assumptions should be judged by their empirical corroboration, but he is wrong to say that they are necessarily ‘unrealistic’ – if air resistance is negligible then it is in fact realistic to assume a vacuum. I don’t regard many economic assumptions as fitting into this category, though many of the examples Friedman says a ‘truly’ realistic theory would need to include, such as eye colour, fit the bill.
If a theory is not corroborated by the evidence, it may be because the phenomenon under investigation does require that air resistance be taken into account. The previous theory then becomes a ‘domain’ theory, whose conclusions only apply as long as the assumption of a vacuum holds. Contrary to Friedman, the aim of ‘domain’ assumptions is to be realistic and wide-ranging, so that the theory may be useful in as many situations as possible. Many of the assumptions in economics are incredibly restrictive in this sense, such as assuming equilibrium, the neutrality of money or ergodicity.
A heuristic assumption is a counterfactual proposition about the nature of a system, used to investigate it in the hope of moving on to something better. These can also be retained to guide students through the process of learning about the system. If a domain assumption is never true, then it may transform into a heuristic assumption, as long as there is an eye to making the theory more realistic at a later stage. The way Piero Sraffa builds up his theory of production is a good demonstration of this approach: starting with a few firms, no profit, no labour, and ending up with multiple firms with different types of capital and labour. In this sense many economic models are half-baked, in that they retain assumptions that are unrealistic for phenomena that are not ‘negligible,’ even at a high level.
Musgrave colourfully describes the evolution of scientific assumptions:
what in youth was a bold and adventurous negligibility assumption, may be reduced in middle-age to a sedate domain assumption, and decline in old-age into a mere heuristic assumption.
Musgrave is partially wrong in this formulation, in my opinion – assumptions can start out as heuristics and become domain assumptions later on, such as the perfect gas or optimising bacteria. But there are always strict criteria for when the theory built on the assumption simply becomes useless, and there is always a view to discarding the heuristic when something better comes along. Economic theory tends to weave between the different types of assumption without acknowledging or drawing attention to the shift.
Keen ironically notes that assumptions obviously matter to economists – they just have to be Lucas Approved™. The reaction of many neoclassical journals to papers such as his, which do not toe the party line on assumptions, demonstrates his point effectively. He also points out that, in fairness to neoclassical economists, the hard sciences are not necessarily the humble havens they are made out to be, and to this day physicists can be resistant to questioning accepted theories. However, economists do seem to be more vehement in the face of contradictory evidence than practitioners of other fields.
I see this as case closed on Friedman’s methodology. Economists need to draw attention to exactly which type of assumption they are making in order for the science to progress, or else risk having no clear parameters for where a theory should be headed, and no clear conditions under which it can be considered valid.
I have alluded to the fact that I see neoclassical and Austrian economics as broadly part of the same intellectual movement. At first, I was unable to pinpoint exactly why this was, other than the fact that they both shared a ‘governments versus markets’ mentality, and the only major policy difference between neoclassical libertarians and (minarchist) Austrian libertarians was the latter’s disdain for central banking (I am also informed that Milton Friedman repudiated his support for central banking later in life. But didn’t he argue…eh, forget it).
Regardless, I have realised that the more substantive reason for this equivalence is that the two share the same methodology. Whilst Austrians might reject this at first glance, allow me to go through each methodological tool, as expressed by Arnsperger & Varoufakis, used by neoclassical economics and compare it to Austrian analysis:
(1) Methodological individualism. This one is not particularly controversial – both neoclassicals and Austrians build up their economic models from the behaviour of individual agents. Austrians are generally more reductionist, whilst neoclassicals are prepared to abandon it for AD/AS analysis, but the majority of neoclassical theories retain this approach.
(2) Methodological instrumentalism. This means behaviour is generally preference-driven, and action is aimed at attaining some end state. For neoclassicals this is utility maximisation:
Economists use the term utility to describe the satisfaction or enjoyment derived from the consumption of a good or service. If we assume that consumers act rationally, this means they will choose between different goods and services so as to maximize total satisfaction or total utility.
For Austrians it does not necessarily revolve around maximising anything, but still shares the same ‘actions are aimed to achieve some end’ characteristic:
Human action is purposeful behavior. Or we may say: Action is will put into operation and transformed into an agency, is aiming at ends and goals, is the ego’s meaningful response to stimuli and to the conditions of its environment, is a person’s conscious adjustment to the state of the universe that determines his life.
In both cases the theories revolve around revealed preference – what people actually do is meaningful, and we will build our theories around that assumption.
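As a deliberately toy illustration of the instrumentalist setup described above, here is a sketch of a consumer choosing the affordable bundle that maximises total utility. The Cobb-Douglas utility function, prices and income are all assumed for illustration, not taken from either school’s literature:

```python
# Minimal sketch of utility maximisation: grid-search the budget set for
# the bundle with the highest total utility. All numbers are illustrative.
from itertools import product

def best_bundle(prices, income, utility, step=1):
    """Return the affordable (integer) bundle that maximises the given utility."""
    best, best_u = None, float("-inf")
    max_q = [int(income // p) for p in prices]  # most of each good affordable alone
    for bundle in product(*(range(0, q + 1, step) for q in max_q)):
        cost = sum(p * q for p, q in zip(prices, bundle))
        if cost <= income:
            u = utility(bundle)
            if u > best_u:
                best, best_u = bundle, u
    return best

# Cobb-Douglas-style utility over two goods (an assumed functional form):
u = lambda b: (b[0] ** 0.5) * (b[1] ** 0.5)
print(best_bundle(prices=[1, 2], income=20, utility=u))  # → (10, 5)
```

With equal exponents the agent splits spending evenly across the two goods – the kind of clean, determinate prediction that makes this methodological core so attractive to its users.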
(3) Methodological equilibration. This means that analysis asks what behaviour we should expect, given the economy is in equilibrium. This is the one most likely to be resisted by Austrians, who generally insist that they study the economy as if it is permanently evolving and in disequilibrium. However, this paper on the subject disagrees:
Mises understood the market process as a series of shifting imperfect equilibria, or plain states of rest. Hayek had views similar to Mises on equilibrium, but he added the concept of a personal state of rest to Austrian theory. Lachmann accepted the basic elements of the Mises-Hayek theory of shifting equilibrium.
Mises and Hayek’s approach of starting in equilibrium and then asking whether that equilibrium is unique and stable echoes the approach of neoclassical economics, which generally assumes equilibrium to begin with, then looks at whether the system has a tendency away from that equilibrium, towards others or to stay in the same place.
Blogger ‘Lord Keynes’ has also commented on the reliance of many Austrians on some form of equilibrium analysis, noting that Mises and Rothbard thought the economy had a long term tendency towards equilibrium, whilst Hayek used equilibrium as an epistemological starting point. LK appears to think that Lachmann did not fall into these traps, in opposition to the paper above, but I am not sufficiently well versed in Lachmann’s work to comment.
It’s reasonably uncontroversial to note that elements of the neoclassical and Austrian schools have the same origins in Menger and Walras, and that the Austrians originally split from the neoclassicals to pursue a different path. However, it seems they took many of the important concepts with them when they left, and to me it’s clear that many of these remain today.
As it happens, an essay by Christian Arnsperger & Yanis Varoufakis may provide us with the answer. In this essay, Arnsperger and Varoufakis attempt to define neoclassical methodology, hoping to nullify its lizard-like ability to dispose of certain parts in order to evade criticism. Personally, I think they hit the nail on the head.
They provide three axioms which define neoclassical methodology:
(1) Methodological individualism – the economy is modeled on the basis of the behaviour of individual agents.
(2) Methodological instrumentalism – individuals act in accordance with certain preferences rankings, to attain some end goal that they deem desirable.
(3) Methodological equilibration – given the above two, macroeconomics asks what will happen if we assume equilibrium. Note that this doesn’t necessarily posit that the system will end up in equilibrium (although that is often the case), but rather seeks to find out what will happen if we use equilibrium as an epistemological starting point.
I will not criticise the axioms here, but suffice to say that this gets to the crux of what the arguments have been about. This methodological core underlies everything from demand-supply to game theory to DSGE.
Much like the assumption of circular orbits, the methodological core of neoclassicism is at all times protected as it develops. Most neoclassical economists don’t think twice about the axioms, and this helps them deny that they are, in fact, ‘neoclassical’, seeing the term only as a buzzword used by their enemies.
In fact, neoclassical economics has a habit of preserving not only these three axioms, but also many of the other assumptions it introduces. For example, take the case of Krugman and Eggertsson versus Keen. Keen models the banks as explicit agents and creators of purchasing power, whilst Krugman and Eggertsson preserve the ‘banks as intermediaries between savers and borrowers’ line, abstracting them out of the economy and giving private debt an ad hoc role.
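The difference between the two modelling choices can be sketched with toy balance sheets. The names and figures below are invented; this is an illustration of the two views of banking, not of either paper’s actual model:

```python
# Toy deposit ledgers contrasting two views of bank lending; all figures invented.

def intermediation_loan(deposits, amount, saver, borrower):
    """Loanable-funds view: the bank merely transfers an existing deposit."""
    deposits[saver] -= amount
    deposits[borrower] = deposits.get(borrower, 0) + amount
    return deposits

def endogenous_loan(deposits, amount, borrower):
    """Endogenous-money view: the loan creates a brand-new deposit."""
    deposits[borrower] = deposits.get(borrower, 0) + amount
    return deposits

# Intermediation leaves total purchasing power unchanged...
d1 = intermediation_loan({"saver": 100}, 40, "saver", "firm")
assert sum(d1.values()) == 100

# ...whereas a credit-creating bank, as in Keen's approach, expands it.
d2 = endogenous_loan({"saver": 100}, 40, "firm")
assert sum(d2.values()) == 140
```

Whether lending adds to aggregate purchasing power or merely redistributes it is precisely what is at stake in treating banks as explicit agents rather than abstracting them away.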
You can also see these axioms in criticisms of Keen’s models. Krugman says that there is ‘a lot of implicit theorising’ going on in Keen’s paper. Perhaps this is true and maybe Keen needs to clarify his epistemology, but what Krugman really means – unknowingly, perhaps – is that Keen doesn’t start from the three axioms: he isn’t looking at individual behaviour, but rather at the flow of money between agents; nobody is acting to attain certain preferences; equilibrium is not used as a starting point. From my experience, I strongly suspect that most mainstream economists feel a similar skepticism when reading Keen’s paper.
I believe that in order for the debate to move forward, these three axioms – and the others that are protected by the ad hoc style of DSGE – must be focused on and criticised. Otherwise critics will never land a convincing blow, and will be forever accused of straw-manning.
* As a note, Austrians, this is why I link you with neoclassicism. The first two certainly define all of Austrian economics, and, at least in the case of Hayek, you also use equilibrium as an epistemological starting point.
My previous post on assumptions was not quite rigorous enough in its definition of assumptions, and attracted some skeptical feedback from the commenter named isomorphisms. Allow me to reiterate my point more clearly.
The distinction between hypotheses and assumptions was intuitively appealing, but of course all assumptions could be said to be hypotheses in a sense. However, I think most scientists would agree that a useful assumption has definitive characteristics, even if it’s difficult to pin down exactly what those are. I think they’d also agree that counterfactual propositions about the mechanics of a system are not useful assumptions. So what are?
At their heart, assumptions are intended to simplify analysis – this is an oft-used defence by economists. But the crucial way in which assumptions are able to do this is by eliminating a specific complication. Of course, this alone is not a sufficient condition. Assumptions also need to have a clear impact on the analysis, so we can be sure what happens when they are relaxed.
How many economic assumptions meet these two criteria?
Firms equating marginal cost to marginal revenue certainly doesn’t, as it is a proposition about the nature of the firm rather than an assumption that simplifies the nature of the problem – in fact, cost-plus pricing is far easier to calculate and also appears to be far more widely used.
Perfect information can’t be said to eliminate a specific complication – it’s simplifying in a sense, but it potentially ‘simplifies’ the analysis to the point of undermining it, hence creating its own complications (you’d eliminate most real-world firms). Analysis is entirely possible without this assumption – ‘Schumpeterian’ economics uses imperfect information to its advantage.
Rational self-maximisation, on the other hand, is a good example of an assumption that is defensible, as it allows us to simplify how people make decisions and has clear implications. Furthermore, it can easily be modified to include behavioural characteristics such as loss aversion (though economists seem unwilling to do this).
I stand by the idea that assumptions are an appropriate target for criticising economics, and feel this is a much more coherent and useful definition of what makes a good or bad assumption.