Posts Tagged Methodology

How Economics Sees Reality

Something has been bothering me about the way evidence is (sometimes) used in economics and econometrics: theories are assumed throughout the interpretation of the data. The result is that it’s hard to end up questioning the model being used.

Let me give some examples. The delightful fellas at econjobrumours once disputed my argument that supply curves are flat or slope downward by noting that, yes, Virginia, in conditions where firms have market power (high demand, drought pricing) prices tend to go up. Apparently this “simple, empirical point” suffices to refute the idea that supply curves do anything but slope upward. But this is not true. After all, “supply curves slope downward/upward/wiggle around all over the place” is not an empirical statement. It is an interpretation of empirical evidence, one which also hinges on the relevance of the theoretical concept of the supply curve itself. In fact, the evidence, taken as a whole, actually suggests that the demand-supply framework is at best incomplete.

This is because we have two major pieces of evidence on this matter: higher demand/more market power increases price, and firms face constant or increasing returns to scale. These are contradictory when interpreted within the demand-supply framework, as they imply that the supply curve slopes in different directions. However, if we used a different model – say, one with a third term for ‘market power’, or a Kaleckian cost-plus model, where the mark-up is a function of the “degree of monopoly” – that would no longer be the case. The rising supply curve rests on the idea that increasing prices reflect increasing costs, and therefore cannot incorporate these possibilities.
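To make the contrast concrete, here is one simple way to write a Kaleckian cost-plus rule (my notation for the standard formulation, offered only as a sketch):

```latex
% A minimal Kaleckian cost-plus pricing rule (illustrative notation):
% p = price, u = unit prime cost, m = 'degree of monopoly', mu = mark-up
\[
p = \bigl(1 + \mu(m)\bigr)\, u, \qquad \mu'(m) > 0
\]
```

Under a rule like this, higher demand or greater market power can raise prices by raising the mark-up, even if unit costs are flat or falling, so the two pieces of evidence no longer conflict.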

Similarly, many empirical econometric papers use the neoclassical production function (recent example here), which states that output is derived from labour and capital, plus a few parameters attached to the variables, as a way to interpret the data. However, this again requires that we assume capital and labour, and the parameters attached to them, are meaningful, and that the data reflect their properties rather than something else. For example, the volume of labour employed moving a certain way only implies something about the ‘elasticity of substitution’ (the responsiveness of the capital-labour ratio to changes in relative factor prices) if you assume that there is an elasticity of substitution. However, the real-world ‘lumpiness‘ of production may mean this is not the case, at least not in the smooth, differentiable way assumed by neoclassical theory.
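For reference, the usual textbook definition of that elasticity (my formulation of the standard concept) is:

```latex
% Elasticity of substitution for a cost-minimising firm facing wage w and rental rate r:
\[
\sigma = \frac{d \ln (K/L)}{d \ln (w/r)}
\]
```

The point stands: sigma only exists if production is smooth enough for K/L to respond continuously to relative factor prices; with lumpy, fixed-coefficient production, the object being ‘estimated’ may simply not be there.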

Assuming such concepts when looking at data means that economics can become a game of ‘label the residual‘, despite the various problems associated with the variables, concepts and parameters used. Indeed, Anwar Shaikh once pointed out that the seeming consistency between the Cobb-Douglas production function and the data was essentially tautological, and so using the function to interpret any data, even the word “humbug” on a graph, would seem to confirm the propositions of the theory, simply because they follow directly from the way it is set up.
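Shaikh’s argument can be sketched in a few lines (a compressed version of the algebra, so treat it as illustrative):

```latex
% Start from the income accounting identity (output = wages + profits):
\[
Y \equiv wL + rK
\]
% With constant factor shares \alpha = wL/Y, taking growth rates (hats) gives
\[
\hat{Y} = \alpha(\hat{w} + \hat{L}) + (1 - \alpha)(\hat{r} + \hat{K})
\]
% which integrates back to a 'Cobb-Douglas' with a shift term B(t):
\[
Y = B(t)\, L^{\alpha} K^{1-\alpha}, \qquad \hat{B} = \alpha \hat{w} + (1 - \alpha)\hat{r}
\]
```

Any data with roughly constant factor shares will therefore ‘fit’ a Cobb-Douglas with a residual, whatever the underlying technology – which is why even points spelling out ‘HUMBUG’ could be fitted.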

Joan Robinson made this basic point, albeit more strongly, concerning utility functions: we assume people are optimising utility, then fit whatever behaviour we observe into said utility function. In other words, we risk making the entire exercise “impregnably circular” (unless we extract some falsifiable propositions from it, that is). Frances Woolley’s admittedly self-indulgent playing around with utility functions and the concept of paternalism seems to demonstrate this point nicely.

Now, this problem is, to a certain extent, observed in all sciences – we must assume ‘mass’ is a meaningful concept to use Newton’s Laws, and so forth. However, in economics, properties are much harder to pin down, and so it seems to me that we must be more careful when making statements about them. Plus, in the murky world of statistics, we can lose sight of the fact that we are merely making tautological statements or running into problems of causality.

The economist might now ask how we would even begin to interpret the medley of data at our disposal without theory. Well, to make another tired science analogy, the advancement of science has often resulted not from superior ‘predictions’, but from identifying a closer representation of how the world works: the go-to example is Ptolemaic astronomy, which made superior predictions to its rival but was still wrong. My answer is therefore the same as it has always been: economists need to make better use of case studies and experiments. If we find out what’s actually going on underneath the data, we can use this to establish causal connections before interpreting the data. This way, we can avoid problems of circularity, tautologies, and of trapping ourselves within a particular model.


The DSGE Dance

Something about the way economists construct their models doesn’t sit right.

Economic models are often acknowledged to be unrealistic, and Friedmanite ‘assumptions don’t matter‘ style arguments are used to justify this approach. The result is that internal mechanics aren’t really closely examined. However, when it suits them, economists are prepared to hold internal mechanics up to empirical scrutiny – usually in order to preserve key properties and mathematical tractability. The upshot is that models are constructed in such a way that, instead of trying to explain how the economy works, they deliberately avoid both difficult empirical and difficult logical questions. This is particularly noticeable with the Dynamic Stochastic General Equilibrium (DSGE) models that are commonly employed in macroeconomics.

Here’s a brief overview of how DSGE models work: the economy is assumed to consist of various optimising agents: firms, households, a central bank and so forth. The behaviour of these agents is specified by a system of equations, which is then solved to give the time path of the economy: inflation, unemployment, growth and so forth. Agents usually have rational expectations, and goods markets tend to clear (supply equals demand), though various ‘frictions’ may get in the way of this. Each DSGE model will usually focus on one or two ‘frictions’ to try and isolate key causal links in the economy.
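For a flavour of what such a system of equations looks like, here is the textbook three-equation New Keynesian skeleton (a generic sketch, not any particular paper’s model):

```latex
% Output gap x_t, inflation \pi_t, nominal interest rate i_t, 'natural rate' shock r^n_t:
\[
x_t = E_t x_{t+1} - \tfrac{1}{\sigma}\bigl(i_t - E_t \pi_{t+1} - r^n_t\bigr) \quad \text{(IS curve)}
\]
\[
\pi_t = \beta E_t \pi_{t+1} + \kappa x_t \quad \text{(Phillips curve)}
\]
\[
i_t = \phi_\pi \pi_t + \phi_x x_t \quad \text{(policy rule)}
\]
```

Everything then hangs on the expectations terms, the parameter values and the assumed ‘shock’ processes (here r^n_t); solving the system forward under rational expectations yields the time path.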

Let me also say that I am approaching this issue tentatively, as I in no way claim to have an in-depth understanding of the mathematics used in DSGE models. But then, this isn’t really the issue: if somebody objects to utility as a concept, they don’t need to be able to solve a consumer optimisation problem; if someone objects to the idea that technology shocks cause recessions, they don’t need to be able to solve an RBC model. To use a tired analogy, I know nothing of the maths of epicycles, but I know they are an inaccurate description of planetary motion. While there is every possibility I’m wrong about the DSGE approach, that possibility doesn’t rest on the mathematics.

Perverse properties?

DSGE has been around for a while, and along the way several ‘conundrums’ or inconsistencies have been discovered that could potentially undermine the approach. There are two main examples of this, both of which have similar implications: the possibility of multiple equilibria and therefore indeterminacy. I’ll go over them briefly, although I won’t get into the details.

The first example is the Sonnenschein-Mantel-Debreu (SMD) theorem. Broadly speaking, this states that although we can derive strictly downward-sloping demand curves from individually optimising agents, once we aggregate up to the whole economy, the interaction between agents and the resultant emergent properties mean that aggregate demand curves could take almost any shape. This creates the possibility of multiple equilibria, so logically the system could end up in any number of places. The SMD theorem is sometimes known as the ‘anything goes’ theorem, as it implies that an economy in general equilibrium could potentially exhibit all sorts of behaviour.

The second example is capital reswitching, the possibility of which was demonstrated by Piero Sraffa in his magnum opus Production of Commodities by Means of Commodities. The basic lesson is that the value of capital changes as the distribution (between profits and wages) changes, which means that one method of production can be profitable at both low and high rates of interest, while another is profitable in between. This is in contrast to the neoclassical approach, which suggests that the capital invested will increase (decrease) as the interest rate decreases (increases). The result is a non-linear relationship, and therefore the possibility of multiple equilibria.
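A minimal numerical illustration of reswitching, along the lines of Samuelson’s well-known 1966 example (the input numbers are quoted from memory, so treat them as illustrative):

```python
# A sketch of a classic reswitching example. Technique A uses 2 units of
# labour 3 years before output and 6 units 1 year before; technique B uses
# 7 units of labour 2 years before. At interest rate r, the cost of each
# technique is its labour input compounded forward to the date of output.

def cost_a(r):
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

def cost_b(r):
    return 7 * (1 + r) ** 2

for r in (0.0, 0.25, 0.75, 1.25):
    cheaper = "A" if cost_a(r) < cost_b(r) else "B"
    print(f"r = {r:.2f}  cost A = {cost_a(r):6.2f}  cost B = {cost_b(r):6.2f}  -> {cheaper}")

# Output: B is cheapest at low rates, A in a middle range, then B again at
# high rates - the same technique 'comes back' as r rises.
```

Technique B is cheapest at low interest rates, A takes over in a middle range (between the two ‘switch points’), and then B returns at high rates – exactly the non-monotonic relationship the neoclassical story rules out.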

That these results could potentially cause problems is well known, but economists generally do not see them as a serious threat. Here is an anonymous quote on the matter:

We’ve known for a long time one can construct GE models with perverse properties, but the logical possibility speaks nothing about empirical relevance. All these criticisms prove is that we cannot guarantee some properties hold a priori – but that’s not what we claim anyway, since we’re real economists, not austrian charlatans. Chanting that sole logical possibility of counterexamples by itself destroys large portions of economic theory is just idiotic.

As it happens, I agree: based on available evidence, neither reswitching nor the SMD theorem is empirically relevant. For everyday goods, it is reasonable to suppose that demand will rise as price falls, and vice versa. Firms also rarely switch their techniques in the real world (though reswitching isn’t the main takeaway of the capital debates). So the perspective expressed above seems reasonable – that is, until we stop and consider the nature of DSGE models as a whole.

For the fact is that DSGE models themselves are not “empirically relevant”. They assume that agents are optimising, that markets tend to clear, that the economy follows an equilibrium time path. They use ‘log linearisation’, a method which doesn’t even pretend to do anything other than make the equations easier to solve by forcibly eliminating the possibility of multiple equilibria. On top of this, they generally display poor empirical corroboration. Overall, the DSGE approach is structured toward preserving the use of microfoundations, while at the same time invoking various – often unrealistic – processes in order to generate something resembling dynamic behaviour.

Economists tacitly acknowledge this, as they will usually say that they use this type of model to highlight one or two key mechanics, rather than to attempt to build a comprehensive model of the economy. Ask an economist if people really maximise utility; if the economy is in equilibrium; if markets clear, and they will likely answer “no, but it’s a simplification, designed to highlight problem x”. Yet when questioned about some of the more surreal logical consequences of all of the ‘simplifications’ made, economists will appeal to the real world. This is not a coherent perspective.

Some methodology

Neoclassical economics uses an ‘axiomatic-deductive’ approach, attempting to logically deduce theories from basic axioms about individual choice under scarcity. Economists have a stock of reasons for doing this: it is ‘rigorous’; it bases models on policy-invariant parameters; it incorporates the fact that the economy ultimately consists of agents consciously making decisions, and so on. If you were to suggest internal mechanics based on simple empirical observations, conventional macroeconomists would likely reject your approach.

Modern DSGE models are constructed using these types of axioms, in such a way that they avoid logical conundrums like the SMD theorem and reswitching. This allows macroeconomists to draw clear mathematical implications from their models, while the assumptions are justified on the grounds of empiricism: crazily shaped demand curves and technique switching are not often observed, so we’ll leave them out. Yet the model as a whole has very little to do with empiricism, and economists rarely claim otherwise. What we end up with is a clearly unrealistic model, constructed not in the name of empirical relevance or logical consistency, but in the name of preserving key conclusions and mathematical tractability. How exactly can we say this type of modelling informs us about how the economy works? This selective methodology has all the marks of one of Imre Lakatos’ degenerative research programmes.

A consequence of this methodological ‘dance’ is that it can be difficult to draw conclusions about which DSGE models are potentially sound. One example of this came from the blogosphere, via Noah Smith. Though Noah has previously criticised DSGE models, he recently noted – approvingly – that there exists a DSGE model that is quite consistent with the behaviour of key economic variables during the financial crisis. This increased my respect for DSGE somewhat, but my immediate conclusion still wasn’t “great! That model is my new mainstay”. After all, so many DSGE models exist that it’s highly probable that some simplistic curve fitting would make one seem plausible. Instead, I was concerned with what’s going on under the bonnet of the model – is it representative of the actual behaviour of the economy?

Sadly, the answer is no. Said DSGE model includes many unrealistic mechanics: most of the key behaviour appears to be determined by exogenous ‘shocks’ to risk, investment, productivity and so forth, without any explanation. This includes the oft-mocked ‘Calvo fairy’, which imitates sticky prices by assigning each firm a fixed probability of changing its price in any given period. Presumably, this behaviour is justified on the grounds that all models are unrealistic in one way or another. But if we have constructed the model to avoid key problems – such as SMD and reswitching, or by log-linearising it – on the grounds that the problems are unrealistic, how can we justify using something as blatantly unrealistic as the Calvo fairy? Either we shed a harsh light on all internal mechanics, or on none.
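For what it’s worth, the Calvo mechanism is easy to state (a bare-bones sketch of the standard setup):

```latex
% Each period a firm may reset its price with fixed probability 1 - \theta,
% regardless of how long its current price has been in place:
\[
\Pr(\text{reset}) = 1 - \theta, \qquad E[\text{price duration}] = \frac{1}{1-\theta}
\quad (\theta = 0.75 \Rightarrow 4 \text{ periods})
\]
```

Price ‘stickiness’ is thus imposed by a lottery rather than derived from anything firms actually do – precisely the kind of mechanic that would be rejected as unrealistic if it threatened tractability rather than preserving it.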

Hence, even though the shoe superficially fits this DSGE model, I know that I’d be incredibly reluctant to use it if I were working at a central bank. This is one of the reasons why I think Steve Keen’s model – which Noah Smith has chastised – is superior: it may not exhibit behaviour that closely mirrors the path of the global economy from 2008-12, but it exhibits similar volatility, and the internal mechanics match up far more closely than in many (every?) neoclassical model. It seems to me that understanding key indicators and causal mechanisms is a far more modest, and credible, claim than being able to predict the quarter-by-quarter movement of GDP. Again, if I were ‘in charge’, I’d take the basic Keensian lesson that private debt is key to understanding crises over DSGE any day.

I am aware that DSGE and macro are only a small part of economics, and many economists agree that DSGE – at least in its current form – is yielding no fruit (although these same economists may still be hostile to outside criticism). Nevertheless, I wonder if this problem extends to other areas of economics, as economists can sometimes seem less concerned with explaining economic phenomena than with utilising their preferred approach. I believe internal mechanics are important, and if economists agree, they should expose every aspect of their theories to empirical verification, rather than merely those areas which will protect their core conclusions.


Economists and the ‘As If’ Argument

Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.

To illustrate this, consider Milton Friedman’s famous exposition of the as if argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game by using such equations, even though this wouldn’t strictly describe the mechanics of the game.

There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what make the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for the social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness, making perfectly fungible trade-offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what make the study of economics worthwhile.

I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to maximise their expenditure after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from an economic theory. The aforementioned example completely invalidates Ricardian Equivalence. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.

If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Ones such as how the banking system works; how firms price; behavioural quirks; the fact that marginal products cannot be well defined; the fact that capital can move across borders, and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)

I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is only done one at a time, preserving the other assumptions. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.

I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to emergent properties of individually optimal behaviour. In any case, economists should be wary of the as if argument even without accepting my crazy heterodox position.

The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Insofar as we might rest on a counterfactual, doing so is only warranted when we don’t know how the system actually works. Once we do know how the system works – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.


Against Friedman: Why Assumptions Matter

I have previously discussed Milton Friedman’s infamous 1953 essay, ‘The Methodology of Positive Economics’. The basic argument of Friedman’s essay is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross-check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.

The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ‘it’s just an abstraction, all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.

The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be inaccurate enough. Is this really possible in economics? I recall that, during an engineering class, my lecturer introduced us to the ‘perfect gas’. He said it was unrealistic, but showed us that it gave results accurate to 3 or 4 decimal places. Is anyone aware of econometrics papers which offer this degree of certainty and accuracy? In my opinion, the fundamental lack of accuracy inherent in social science shows that economists should be more concerned about what is actually going on inside their theories, since they are less able to spot mistakes through prediction alone. Even if we are willing to tolerate a higher margin of error in economics, results are always contested and you can find papers arguing either side of each issue.

The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions, and Friedman’s methodology has no way of dealing with this.

There are two obvious examples of this in economics. The first is the DSGE models used by central banks and economists during the ‘Great Moderation’, which predicted the stable behaviour exhibited by the economy. However, Steve Keen’s Minsky model also exhibits relative stability for a period, followed by a crash. Before the crash took place, there would have been no way of knowing which model was correct, except by looking at internal mechanics.

Another example is the Efficient Market Hypothesis. This predicts that it is hard to ‘beat the market’ – a prediction that, due to its obvious truth, partially explains the theory’s staying power. However, other theories also predict that the market will be hard to beat, either for different reasons or a combination of reasons, including some similar to those in the EMH. Again, we must do something that is anathema to Friedman: look at what is going on under the bonnet to understand which theory is correct.

The third problem is the one I initially homed in on: the vagueness of Friedman’s definition of ‘assumptions’, and how this compares to those used in science. This found its best elucidation with the philosopher Alan Musgrave. Musgrave argued that assumptions have clear – if unspoken – definitions within science. There are negligibility assumptions, which eliminate a known variable or variables (a closed economy is a good example, because it eliminates imports/exports and capital flows). There are domain assumptions, for which the theory is only true as long as the assumption holds (oligopoly theory is only true for oligopolies).

There are then heuristic assumptions, which can be something of a ‘fudge’: a counterfactual model of the system (firms equating MC to MR is a good example of this). However, these are often used for pedagogical purposes and dropped before too long. Insofar as they remain, they require rigorous empirical testing, which I have not seen for the MC=MR explanation of firms. Furthermore, heuristic assumptions are only used if internal mechanics cannot be identified or modelled. In the case of firms, we do know how most firms price, and it is easy to model.

The fourth problem is related to the above: Friedman misunderstands the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other. The system is always under ongoing investigation, because we always want to know what is going on under the bonnet. Whatever the efficacy of their predictions, theories are only as good as their assumptions, and making an assumption more realistic is always a positive step.

Hence, the ‘it still behaves as if it matches our theories’ mentality of economists can easily be shown to be quite absurd, for example:

Consider the following theory’s superb record for prediction about when water will freeze or boil. The theory postulates that water behaves as if there were a water devil who gets angry at 32 degrees and 212 degrees Fahrenheit and alters the chemical state accordingly to ice or to steam. In a superficial sense, the water-devil theory is successful for the immediate problem at hand. But the molecular insight that water is comprised of two molecules of hydrogen and one molecule of oxygen not only led to predictive success, but also led to “better problems” (i.e., the growth of modern chemistry).

If economists want to offer lucid explanations of the economy, this is the wrong path to head down (in fact, this is something employers have complained about with economics graduates: lost in theory, with little to no practical knowledge).

The fifth problem is one that is specific to the social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know if the model applies or not. Just because a model has proven useful in one situation or location, it doesn’t guarantee that it will be useful elsewhere, as institutional differences might render it obsolete.

The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had only one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge – data entry, lawyers and financial advisers, for example – is nullified. There is actually good reason to believe a frictionless world such as the one at the core of neoclassicism renders the role of many firms and entrepreneurs obsolete. Hence, we must be careful about the possibility of certain assumptions invalidating the area we are studying.

In my opinion, Friedman’s essay is incoherent even on its own terms. He does not define the word ‘assumption’, and nor does he define the word ‘prediction’. The incoherence of the essay can be seen in Friedman’s own examples of marginalist theories of the firm. Friedman uses his new-found, supposedly evidence-driven methodology as grounds for rejecting early evidence against these theories. He is able to do this because he has not defined ‘prediction’, and so can use it in whatever way suits his preordained conclusions. But Friedman does not even offer any testable predictions for marginalist theories of the firm. In fact, he doesn’t offer any testable predictions at all.

Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive both to internal critiques of their theories and to critiques of their testable predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark. Obviously, I believe this is the case with large areas of economics, but discussing that is beyond the scope of this post. Ultimately, economists have to change their stance on assumptions if heterodox schools are to have any chance of persuading them.


Economists Versus Physics

I’m not sure what it is about economics that makes both its adherents and its detractors feel the need to make constant analogies to other sciences, particularly physics, to try to justify their preferred approach. Unfortunately, this problem isn’t just a blogosphere phenomenon; it appears in every area of the field, from blogs to articles to widely read economics textbooks.

For example, not too infrequently I will see a comment on heterodox work along the lines of “Newton’s theories were debunked by Einstein but they are still taught!!!!” Being untrained in physics (past high school) myself, I am grateful to have commenters who know their stuff, and can sweep aside such silly statements. In the case of this particular argument, the fact is that when studying everyday objects, the difference between Newton’s laws, quantum mechanics and general relativity is so demonstrably, empirically tiny that they effectively give the same results.

So even though quantum mechanics teaches us that in order to measure the position of a particle you must change its momentum, and that in order to measure its momentum you must change its position, the size of these ‘changes’ for everyday objects is practically immeasurable. Similarly, even though relativity teaches us that the relative speed of objects is ‘constrained’ by the speed of light, the effect on everyday velocities is too small to matter. Economists are simply unable to claim anything close to this level of precision or empirical corroboration, and perhaps they never will be, due to the fact that they cannot engage in controlled experiments.
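To put a rough, back-of-envelope number on the relativistic case:

```latex
% Time-dilation factor and its size at everyday speeds (an airliner, v ~ 300 m/s):
\[
\gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \approx 1 + \tfrac{1}{2}\frac{v^2}{c^2},
\qquad \gamma - 1 \approx \tfrac{1}{2}\bigl(10^{-6}\bigr)^2 = 5 \times 10^{-13}
\]
```

A correction of a few parts in ten trillion is the sense in which Newton is ‘wrong’ for everyday purposes; no economic model can claim its approximation errors are anywhere near that small.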

Another, more worrying example, is Greg Mankiw’s widely read Macroeconomics textbook (7th ed, p. 395), when discussing estimates of the NAIRU:

If you ask an astronomer how far a particular star is from our sun, he’ll give you a number, but it won’t be accurate. Man’s ability to measure astronomical distances is still limited. An astronomer might well take better measurements and conclude that a star is really twice or half as far away as he previously thought.

Mankiw’s suggestion that astronomers have this little clue about what they are doing is misleading. We are talking about people who can infer the existence of a planet orbiting a distant star based on the (relatively) tiny ‘wobble’ of said star. Astronomers have many different methods for calculating stellar distances – parallax, redshift, luminosity – and these methods can be cross-checked against one another. As you will see from the parallax link, there are also in-built, estimable errors in their calculations, which can help stop them straying too far off the mark.
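The parallax method, for instance, can be stated in one line (standard definitions, nothing to do with Mankiw’s example):

```latex
% Distance in parsecs from the parallax angle p (in arcseconds):
\[
d\,[\text{pc}] = \frac{1}{p\,[\text{arcsec}]}, \qquad
p = 0.1'' \;\Rightarrow\; d = 10\ \text{pc} \approx 32.6\ \text{light years}
\]
```

The measurement error in p then translates directly into an estimable error in d – exactly the kind of in-built error bar just mentioned.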

While it is true that, at large distances, luminosity can be hard to interpret (a star may be close and dim, or bright and far away), Mankiw is mostly wrong. Astronomers still make many largely accurate predictions, while economists’ predictions are at best contested and uncertain, and at worst simply incorrect. The very worst models are unfalsifiable, such as the NAIRU Mankiw is defending, which seems to move around so much that it is meaningless.

Another example is a classic case of economists misunderstanding the use of assumptions. This is from Jehle and Reny’s textbook, Advanced Microeconomics (3rd ed, preface XVI):

In the physical world, there is ‘no such thing’ as a frictionless plane or a perfect vacuum.

Perhaps not, but all these assumptions do is eliminate a known mathematical variable. This is not the same as positing an imaginary substance (utility) just so that mathematics can be used; or assuming that decision makers obey axioms which have been shown to be false time and time again; or basing everything on the impossible fantasy of perfect competition, which the authors go on to do all at once. These assumptions cannot be said to eliminate a variable or collection of variables; neither can it be said that, despite their unrealism, they display a remarkable consistency with the available evidence.

Even if we accept the premise that these assumptions are merely ‘simplifying’, the fact remains that engineers or physicists would not be sent into the real world without friction in their models, because such models would be useless – in fact, in my own experience, friction is introduced in the first semester. Jehle and Reny do go on to suggest that one should always adopt a critical eye toward such theories, but this is simply not enough for a textbook that calls itself ‘advanced’. At this level, such blatant unrealism should be a thing of the past, or should never have been used at all.

Economics is a young science, so it is natural that, in search of sure footing, people draw from the well-respected, well-grounded discipline of physics. However, not only do such analogies typically demonstrate a largely superficial understanding of physics, but since the subjects are so different, the analogies are often stretched so far that they fail. Analogies to other sciences can be useful to check one’s logic, or as illuminating parables. However, misguided appeals to and applications of other models are not sufficient to justify economists’ own approach, which, like that of other sciences (!), should stand or fall on its own merits.


Institutions and Economics

Recently, I’ve been reading a lot from the school of institutional economics. Consequently, I have noticed another problem with the way economists approach theory and evidence: the lack of institutional considerations. This can blind economists to the fact that they may be studying entirely different phenomena due to differences between countries, periods of history, companies, genders, cultures and much more.

The standard procedure of economists is to derive a model ‘rigorously’ based on a set of assumptions or axioms. Economists, unlike physicists, cannot perform controlled experiments in order to verify these models; instead, empirical corroboration entails the use of econometrics to verify predictions. Economists must rely on collections of data, sometimes from disparate sources, and try to ‘correct’ these collections of data for said disparities. They then perform regressions in an attempt to isolate the relationship between two variables, and cautiously interpret the results. However, the problem with this approach is that institutional differences could mean that some of the data in question are simply irrelevant, whether or not they disagree with the predictions of the theory in question. 
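As a caricature of that procedure, here is a hypothetical sketch (the file, variables and column names are all made up for illustration):

```python
# A hypothetical sketch of the standard procedure: pool data from several
# countries, 'correct' for differences with country dummies, and regress
# one variable on another. The panel file and its columns are invented.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("panel.csv")  # hypothetical panel: growth, investment, country

# Country fixed effects stand in for all institutional differences at once.
model = smf.ols("growth ~ investment + C(country)", data=df).fit()
print(model.summary())
```

The worry developed below is that a set of country dummies is a very thin way of ‘controlling for’ institutions when those institutions change what the coefficient of interest even means.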

Problems with this Methodology

It appears that underlying the methodology used by economists is a search for unifying principles that can be applied to all economies across space and time. Both the neoclassical and heterodox schools reflect a discipline aiming to isolate the ‘true’ mechanics of the economy and build a model around them. The mentality often seems to be that, if only we could isolate these true mechanics, we’d be able to understand the economy and make informed policy decisions based on our ideal framework. I’m sure many economists would agree that the institutional, legal, and cultural contexts are not the same for all economies. However, many economic models and much of economists’ rhetoric reflect a discipline looking to uncover an equivalent of physical laws. Indeed, Larry Summers went so far as to claim that “the laws of economics are like the laws of engineering. One set of laws works everywhere.”

Even though most rational minds would disagree with Larry Summers, I find there is a tendency among economists to treat the institutional, legal, and cultural context as a set of ‘constraints’ against which the ‘underlying mechanics’ of the economy are continually pushing. However, there is good reason to believe that the ‘real’ mechanics of the economy are determined by the context in which the economy operates, rather than that context merely influencing the economy exogenously. Here are some historic and contemporary examples to illustrate my point.

Industrialisation: the US versus England

English firms were fairly small during the industrial revolution. For reasons beyond the scope of this blog post, firms typically took it upon themselves to educate and train new employees on the job. Such a system diminishes the need for state education, at least from a labour market standpoint, and it wasn’t until the late 19th century that public education was finally established, by which time England was industrialised and the old system was becoming obsolete. The USA followed a different path. During the growth period of the US, firms generally emphasised large production lines, and had a more ‘flexible’ approach to employment. Such an approach required that firms could rely on the competence of the average worker, and over the course of the US industrial revolution state education expanded substantially, reaching something approximating a fully public system at around the same time as England, even though England was at a much later stage of its development. Both strategies successfully industrialised their countries; both presented different needs from a policy perspective. But using a single model to inform policy in these two countries would clearly be a mistake.

A similar contrast can be seen with Denmark and Japan. Historically, Japan has had a policy of lifelong employment, which means a majority of workers are, well, employed for life (the model may be waning due to the effects of the lost decade, but it was robust during Japan’s impressive industrialisation period). What would be the effect of restrictions on hiring and firing under such a model? It’s highly unlikely there would be much effect; in fact, the model itself is partly based on such regulations. But what if similar restrictions were applied to Denmark’s dynamic ‘flexicurity‘ model, in which hiring and firing is incredibly easy but there are strong social safety nets? I expect it would cause a lot of problems for employers and employees alike, as Danish firms’ strategies are built around being able to gain and shed workers quickly. On top of that, the safety net makes workers more willing to accept such treatment, as well as having obvious humanitarian attractions.

Again, though these two models are different – almost diametrically opposed, in fact – both have coped with recessions relatively well (in terms of unemployment). The countries simply have different institutions that operate under different mechanics, and no model could capture both (feel free to read that as a challenge). Despite this, Japan has recently enacted some ‘neoliberal’ reforms, perhaps based on the mistaken belief that they need to ‘free up’ the ‘underlying’ mechanics of the economy. Time will tell whether or not this was a smart move.

The Scandinavian Ideal

Apart from labour markets, there is another good example of interdependent institutions, laws and culture: the oft-cited Sweden. Both free marketeers and leftists like to hold Sweden up as an example of their ideas in action. “Look at the vast redistribution, unions and public goods!” is the cry of the leftists. Meanwhile, the rightists will assert that beneath such institutions lies a relatively light-touch, ‘neoliberal’ regulatory structure. In many ways both are right; but in many more ways they are both wrong. Both approaches take the economy of Sweden and suggest that, due to X, Y or Z policy, it is the way to go. But neither appreciates how the institutions identified by both fit together.

Sweden is historically a high-trust society and as such regulation is relatively simple. Even contract law is far less complex than what you will find in the UK or the States. Many businesses do something akin to ‘self-regulation’, reporting their own data to government agencies. Similarly, while it is questionable whether the generous welfare state is a cause of the trust, it is not unreasonable to suggest that the two are complementary. Furthermore, as in the case of Denmark, generous safety nets go well with light regulation in terms of dynamism. The approach has serious attractions, but only if the two institutions are combined; furthermore, it may well be the case that trust is a necessary condition for both of these institutions in the first place. Once more it is clear that certain historical circumstances have given rise to a specific set of ‘optimal’ policies that could not simply be applied elsewhere.

So if we take data points from such disparate countries, is it really meaningful to try and ‘adjust’ them for this type of difference? What we are studying are economies with very different underlying mechanics. To aggregate over them and take the average result is to reduce the data to meaninglessness. What is needed is a historical, institutional perspective that understands how different aspects of the economy fit together, and how the economy fits into the background of politics, history and culture (not to mention the environment – for example, on an island country, even a corner shop can be a monopoly).

What is best for an economy will depend on initial conditions and current institutions. These institutions are not ‘artificial’ impositions on the underlying economy; they are inevitable political decisions which have been born out of a specific historical context, and hopefully fit the culture of the nation in question. It would be at best costly and destructive, and at worst basically impossible, to uproot these institutions in search of some ideal. As such, any discussion of economic policy must proceed from an acknowledgment of the mechanics created by different institutions.

Much of what I’m saying isn’t new at all. In fairness, most empirical economic papers are careful about claiming to have found surefire causal links. And there may be new techniques in econometrics that attempt to deal with the problems in the methodology I outlined above. Furthermore, I am not suggesting economists are not at all concerned with institutions or history: development economists and industrial organisation economists speak of them frequently. Nevertheless, I believe the institutional considerations described above create a clear methodological problem for a large amount of economic theory, particularly macro.

This is because institutional considerations are a good reason that social scientists should be even more concerned about assumptions and real-world mechanics than those in the physical sciences, and therefore that economists should be highly concerned with the historical, institutional and legal context of the economies they are studying. Such considerations are another nail in the coffin of Milton Friedman’s methodology, which posits that abstract models based on “unrealistic” assumptions are the appropriate approach to economic theory. Such an approach cannot even begin to comprehend institutional differences, and as such, applying any one theory – or group of theories – to every economy is bound to cause problems.


More Thoughts on Austrian Theory

A while back I wrote a short post on why I reject Austrian theories of the business cycle (ABCT). Austrians were not impressed. I still retain similar objections, though over time I have realised there are more reasonable adherents of the Austrian school (even if being reasonable basically forces them to conclude demand-side recessions are a possibility). This post will hopefully be more comprehensive than my previous one, but again it is only based on a few major observations and objections, and will echo some of my previous comments.

I have said a few times that I see Austrian economics as part of the marginalist tradition (as did Mises). Since I am critical of this tradition, part of my objection is the application of the same criticisms to the Austrians: the idea that ‘factors of production’ are rewarded according to their productivities is subject to all sorts of critiques; similarly, the Austrian treatment of capital is sometimes vulnerable to the problems highlighted in the Capital Controversies. However, since I have already posted on this, and will likely do so again in the future, I will avoid this issue and instead criticise the Austrian school directly.

This post will be two-pronged: first, I will explore the Austrian methodology in general, specifically praxeology. Second, I will ask whether the theoretical implications of Austrian economics – regardless of praxeology – can be sustained.

Praxeology

Praxeology is the notion that economic theory can be built up a priori from the action axiom, or, as Mises stated it:

Human action is purposeful behaviour. Or we may say: Action is will put into operation and transformed into an agency, is aiming at ends and goals, is the ego’s meaningful response to stimuli and to the conditions of its environment, is a person’s conscious adjustment to the state of the universe that determines his life. Such paraphrases may clarify the definition given and prevent possible misinterpretations. But the definition itself is adequate and does not need complement of commentary.

It is worth stating that Hayek, and other Austrians, probably rejected this, at least as a rigid rule. So the critique applies mostly to Miseans. I have two points to make about it:

First, I think the axiom itself is flawed. While it is fair to say human action can be a purposeful response to stimuli in order to obtain certain ends, that is not the same as saying that this is always the case. Action can be purposeful; it can also be knee-jerk, confused, accidental, arbitrary or even meaningless. Sometimes the action itself is the end. This poses a problem for the ‘try to disprove the action axiom‘ test, which asserts that by trying to disprove the axiom you validate it through your purposeful behaviour (yes, it is an intellectual ‘I know you are but what am I?’). But all this does is show that action can be purposeful. By trying to disprove it, I am acting purposefully, but this doesn’t mean all of my actions are purposeful.

Second, even if we accept the action axiom, we run into problems. It’s simply not at all clear how to get from a tautological statement to elaborate theories of the central bank. Blogger ‘Lord Keynes’ has discussed this – it’s clear that Mises had to introduce other assumptions and propositions to build his theory. Mises even admitted this directly with the disutility of labour:

The disutility of labor is not of a categorical and aprioristic character. We can without contradiction think of a world in which labor does not cause uneasiness, and we can depict the state of affairs prevailing in such a world.

Ultimately, I see no need to invoke praxeology when talking about theory. We can discuss the logic of whether low interest rates cause bubbles, or look at the evidence. We can examine other propositions of Austrian theory. But why do we need the human action axiom? The substantive theory is where we must turn to determine whether or not the Austrians are correct.

The Natural Rate of Interest

Hayek’s original theory of the business cycle, first fully expounded in Prices and Production, rested on an equilibrium between the saving and borrowing of different goods.* The market would set the equilibrium rate at which different goods were borrowed, meaning that savings were matched to investment and there was no excess credit expansion. However, Piero Sraffa – in what is widely regarded as a devastating review of the book – observed that in a monetary economy, the money rate of interest would be an aggregate of all the ‘natural rates’ of the different goods. Hence there was no reason to believe it would correspond to an equilibrium for every, or perhaps even any, particular good.
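Sraffa’s point about commodity ‘own-rates’ can be put compactly (my paraphrase of the arithmetic):

```latex
% The 'own-rate' of interest on a commodity with spot price p_t today and
% p_{t+1} next period, given the money rate of interest r_m:
\[
1 + r_{\text{own}} = (1 + r_m)\,\frac{p_t}{p_{t+1}}
\]
```

Whenever relative prices are expected to change, different commodities have different own-rates, so there is no single ‘natural rate’ for the money rate to track.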

This issue comes up again and again, and the overwhelming majority of Austrians appear to have conceded Sraffa’s point that there is no natural rate of interest. However, many seem to think it doesn’t matter – and this is not unique to Austrians. For me, the natural rate of interest matters: if there is no ‘natural’ or ‘correct’ rate of interest, how do we measure a deviation from the ideal?

It is true that fluctuations in the base rate do affect house prices – being directly linked to mortgages as they are – but nevertheless, Austrian theory doesn’t seem to deal well with housing bubbles. This is because they generally involve people continually buying and selling the same houses to each other and hence have small amounts of capital misallocation; in fact, there is a shortage of housing in many developed countries, while existing houses remain highly priced. This doesn’t make sense under an Austrian framework, which would require overinvestment in houses and hence liquidation of existing surplus stocks.

For me, interest rates are nothing special. They represent a cost for businesses, to be factored into their decision-making along with other costs. As Joseph Stiglitz asks, is it a problem when businesses’ supply costs are too low? Does it lead them to expand too much? It seems to me that when banks are lending money for the wrong things, it’s a regulatory rather than a monetary problem (insofar as it is a monetary problem, I would say it’s caused by high interest rates, but that’s for another time).

Furthermore, this ‘naturalistic’ problem with Austrianism isn’t limited to the rate of interest. There always seems to be some supposedly neutral, laissez-faire baseline state, which is never defined. Surely limited liability laws affect the decisions of businesses? What about the practical problems with property rights and contract law: the limited resources of the legal system (and hence the dismissal of small cases); implicit contracts; rental laws, car crash liabilities, insurance claims and much more? All of these will involve somewhat arbitrary decisions, and all will impact the workings of a capitalist economy, possibly leading to capital misallocation. Overall, it is difficult to find a solid foundation for the supposedly ‘natural’ baseline on which Austrian theory seems to be built.

Overall, I remain unconvinced. I expect Ludwig Lachmann and similar economists are well worth reading, particularly for their stances on expectations and entrepreneurial strategies. But nothing I’ve seen from ‘mainstream’ Austrians has yet convinced me that it is worth delving into either 1,000-page tomes by Mises or Rothbard, or the practically unreadable (economic) works of Hayek, in order to further my understanding of their theories. There are just too many issues – conceptual, logical and evidential – with what I know so far.

But then, the internet is surely the place for Austrians to prove me wrong.

*It is worth noting that Austrians appear to rely on an exogenous money model with their talk of equilibrating savings and investment, and their idea that credit expansion results from central bank expansion. As I have documented, this is not how banking works. However, some Austrians have incorporated this insight, while others are against fractional reserve banking altogether, so it’s not a problem for all of them.


Why Prefer Preferences?

Nick Rowe offers a summary of the Cambridge Capital Controversies that, though it is tongue in cheek and should not be taken too seriously, substantively leaves a lot to be desired. He states that the debate started because “some economists in Cambridge UK wanted to explain prices without talking about preferences.” This is false – the debate started because Joan Robinson and Piero Sraffa took issue with a production function that used an aggregate capital stock k, measured in £, with a marginal productivity. However, despite the faulty summary of the controversies, and to Rowe’s credit, some good discussion followed in the comments.

Sraffa built up an entire model just to critique neoclassical theory. It followed neoclassical logic, but replaced the popular measure of capital with a more consistent one: summing up the labour required to produce it, and the profit made from it. His model of capitalism started with simplistic assumptions, but increased in complexity. Within the confines of his own model, he showed several things: the distribution between wages and profits must be known before prices can be calculated; demand and supply are not an adequate explanation of prices; and the rate of interest can have non-linear effects on the nature of production. I cover this in more detail here.
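To give a sense of the structure, here is a two-commodity version of the kind of price system Sraffa works with (a bare sketch in my own notation):

```latex
% Two industries; a_{ij} = input of commodity i per unit of output of j,
% l_j = labour input, p_j = prices, w = wage, r = uniform rate of profit:
\[
(a_{11} p_1 + a_{21} p_2)(1 + r) + w\, l_1 = p_1
\]
\[
(a_{12} p_1 + a_{22} p_2)(1 + r) + w\, l_2 = p_2
\]
```

With a numéraire chosen, there is one more unknown than there are equations: fixing the distribution (w or r) pins down relative prices, which is why distribution must be known before prices can be calculated.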

Rowe’s primary criticism of Sraffa is that his model did not use preferences, a criticism also made by others. But eliminating preferences is a negligibility assumption: we ignore some element of the system we are studying, in the hope that either we can add it later or it is empirically negligible. As Matias Vernengo notes in the comments, Sraffa was deliberately trying to escape the subjective utility base of neoclassical economics in favour of the classical tradition of social and institutional norms, so he assumed preferences were given. This is just a ceteris paribus assumption, which economists usually love! In any case, it turns out that preferences can be added to a Sraffian model, with many of the key insights remaining intact. Indeed, Vienneau’s model (and, apparently, the work of Ian Steedman, with whom I am unfamiliar) invokes utility maximisation and comes to many of the same Sraffian conclusions about demand-supply being unjustified.

Rowe also criticises Sraffa’s approach because it puts production first, ahead of the consumer sovereignty upon which neoclassical economics is built. Should preferences provide an explanation of decisions? It appears Rowe does not take seriously the ‘chicken and egg’ problem with neoclassical models – surely production must occur first, yet models such as Arrow-Debreu take prices as given for firms, before anything is made.

In a modern capitalist economy, it seems illogical to say that the demand for a particular good comes first, and then the supply follows as firms passively try to accommodate it. If this were true, advertising wouldn’t exist, or would be incredibly limited. It is fair to say that, independently, people have a ‘preference’ (though I’d say instinct) for food, shelter, clothing, security and other creature comforts. However, demand for most goods and services beyond this is certainly generated by advertising, marketing and other exogenous factors – advertising and marketing are one of the two primary constraints on expansion experienced by real-world firms (the other is financing, which, incidentally, neoclassical models often assume away too, but I digress).

An alternative way to model human behaviour would be an institutional/social norm perspective: while people instinctively want to subsist, what exactly they choose to subsist on is in large part dependent on their surroundings. There is the example of tea consumption in Britain, which started as a luxury and took decades to filter down to the lower classes. Similarly, if I had been born in India, I would probably have more of a taste for spicy foods. It’s hard to deny these things are largely dependent on social surroundings, rather than individualistic consumer preferences. Similarly, Rowe’s focus on the time-preference explanation of the interest rate seems to ignore that this will be largely dependent on institutional factors such as the state of the economy.

From an individual perspective, perhaps Maslow’s hierarchy is a useful way of understanding purchasing decisions: after people have obtained basic needs such as food and security, the things they buy are to do with identity and emotion. Don’t believe me? These concepts are exactly what firms use to try to expand their market base (for a longer treatment, see Adam Curtis’ documentary). If people don’t buy products because firms associate them with ‘self-actualisation’, then firms are systemically irrational.

Overall, I don’t think there are any cases in which we can evaluate individuals’ preferences outside a social and institutional context. Sraffa considers the economy as a whole and leaves subsequent questions about consumers to be answered later – which they have been. Conversely, putting preferences first and having firms passively accommodate demand runs into several logical problems, and is not corroborated by what we know about either firms or people in the real world.


Debunking Economics, Part VI: Assumptions, Assumptions, Assumptions

I have discussed the use and abuse of assumptions in economics a few times, making some headway but often struggling to define exactly what constitutes a ‘good’ or a ‘bad’ assumption.

Chapter 8 of Steve Keen’s Debunking Economics channels a paper (it’s short, and worth reading) by the philosopher Alan Musgrave, which distinguishes between three types of assumption: negligibility, domain and heuristic.

According to Friedman’s 1953 essay, theories are significant when they “explain much by little,” and to this end “will be found to have assumptions that are wildly unrealistic…in general, the more significant the theory, the more unrealistic the assumptions.” By distinguishing between the different types of assumption Musgrave shows how Friedman misunderstands the scientific method, and that his argument is only partially true for one type: negligibility assumptions, which we will look at first.

Negligibility

Negligibility assumptions simply eliminate a specific aspect of a system – friction, for example – when it is not significant enough to have a discernible impact. Friedman is correct to argue that these assumptions should be judged by their empirical corroboration, but he is wrong to say that they are necessarily ‘unrealistic’ – if air resistance is negligible, then it is in fact realistic to assume a vacuum. I don’t regard many economic assumptions as fitting into this category, though many of the features Friedman says a theory would need to include in order to be ‘truly’ realistic, such as eye colour, do fit the bill.

Domain

If a theory is not corroborated by the evidence, it may be because the phenomenon under investigation does require that air resistance be taken into account. The previous theory then becomes a ‘domain’ theory, whose conclusions apply only as long as the assumption of a vacuum holds. Contrary to Friedman, the aim of domain assumptions is to be realistic and wide-ranging, so that the theory may be useful in as many situations as possible. Many of the assumptions in economics are incredibly restrictive in this sense, such as assuming equilibrium, the neutrality of money or ergodicity.

Heuristic

A heuristic assumption is a counterfactual proposition about the nature of a system, used to investigate it in the hope of moving on to something better. Such assumptions can also be retained to guide students through the process of learning about the system. If a domain assumption is never true, it may transform into a heuristic assumption, as long as there is an eye to making the theory more realistic at a later stage. The way Piero Sraffa builds up his theory of production is a good demonstration of this approach: starting with a few firms, no profit and no labour, and ending up with multiple firms using different types of capital and labour. In this sense many economic models are half-baked, in that they retain unrealistic assumptions about phenomena that are not ‘negligible’, even at a high level of abstraction.

Musgrave colourfully describes the evolution of scientific assumptions:

what in youth was a bold and adventurous negligibility assumption, may be reduced in middle-age to a sedate domain assumption, and decline in old-age into a mere heuristic assumption.

Musgrave is partially wrong in this formulation, in my opinion – assumptions can also start out as heuristics and become domain assumptions later on, as with the ideal gas or optimising bacteria. But in those cases there are strict criteria for when the theory built on the assumption simply becomes useless, and there is always a view to discarding the heuristic when something better comes along. Economic theory, by contrast, tends to weave between the different types of assumption without realising it or drawing attention to the shifts.

Keen wryly notes that assumptions obviously do matter to economists – they just have to be Lucas Approved™. The reaction of many neoclassical journals to papers such as his, which do not toe the party line on assumptions, demonstrates his point effectively. He also points out that, in fairness to neoclassical economists, the hard sciences are not necessarily the humble havens they are made out to be, and to this day physicists can be resistant to questioning accepted theories. However, it is true that economists seem more vehement in the face of contradictory evidence than practitioners of most other disciplines.

I see this as case closed on Friedman’s methodology. Economists need to draw attention to exactly which type of assumption they are making in order for the science to progress; otherwise they risk having no clear parameters for where a theory should be headed, or for the conditions under which it can be considered valid.


On The Similarities Between Austrian and Neoclassical Economics

I have alluded to the fact that I see neoclassical and Austrian economics as broadly part of the same intellectual movement. At first, I was unable to pinpoint exactly why, other than that they both share a ‘governments versus markets’ mentality, and that the only major policy difference between neoclassical libertarians and (minarchist) Austrian libertarians is the latter’s disdain for central banking (I am also informed that Milton Friedman repudiated his support for central banking later in life. But didn’t he argue…eh, forget it).

Regardless, I have realised that the more substantive reason for this equivalence is that the two share the same methodology. Whilst Austrians might reject this at first glance, allow me to go through each of the methodological tools used by neoclassical economics, as identified by Arnsperger & Varoufakis, and compare it with Austrian analysis:

(1) Methodological individualism. This one is not particularly controversial – both neoclassicals and Austrians build up their economic models from the behaviour of individual agents. Austrians are generally more reductionist, whilst neoclassicals are prepared to abandon individualism for AD/AS analysis, but the majority of neoclassical theories retain this approach.

(2) Methodological instrumentalism. This means that behaviour is preference-driven, and that action is defined as an attempt to attain some end state. For neoclassicals this takes the form of utility maximisation:

Economists use the term utility to describe the satisfaction or enjoyment derived from the consumption of a good or service. If we assume that consumers act rationally, this means they will choose between different goods and services so as to maximize total satisfaction or total utility.

For Austrians it does not necessarily revolve around maximising anything, but it shares the same ‘actions are aimed at achieving some end’ characteristic:

Human action is purposeful behavior. Or we may say: Action is will put into operation and transformed into an agency, is aiming at ends and goals, is the ego’s meaningful response to stimuli and to the conditions of its environment, is a person’s conscious adjustment to the state of the universe that determines his life.

In both cases the theories revolve around revealed preference – what people actually do is meaningful, and we will build our theories around that assumption.
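For reference, the neoclassical version of this can be written as the textbook constrained maximisation problem (standard notation, not drawn from either of the sources quoted above):

\[
\max_{x_1,\dots,x_n} \; U(x_1,\dots,x_n) \quad \text{subject to} \quad \sum_{i=1}^{n} p_i x_i \le m,
\]

where $x_i$ are quantities consumed, $p_i$ their prices and $m$ income; at an interior optimum the marginal rate of substitution between any two goods equals their relative price, $\frac{\partial U/\partial x_i}{\partial U/\partial x_j} = \frac{p_i}{p_j}$. The Austrian version drops the explicit function but keeps the same means-ends structure.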

(3) Methodological equilibration. This means that analysis asks what behaviour we should expect, given that the economy is in equilibrium. This is the one most likely to be resisted by Austrians, who generally insist that they study the economy as if it is permanently evolving and in disequilibrium. However, this paper on the subject disagrees:

Mises understood the market process as a series of shifting imperfect equilibria, or plain states of rest. Hayek had views similar to Mises on equilibrium, but he added in the concept of a personal state of rest to Austrian theory. Lachmann accepted the basic elements of the Mises-Hayek theory of shifting equilibrium.

Mises and Hayek’s approach of starting in equilibrium and then asking whether that equilibrium is unique and stable echoes that of neoclassical economics, which generally assumes equilibrium to begin with and then asks whether the system tends to move away from that equilibrium, towards another, or to stay where it is.

Blogger ‘Lord Keynes’ has also commented on the reliance of many Austrians on some form of equilibrium analysis, noting that Mises and Rothbard thought the economy had a long term tendency towards equilibrium, whilst Hayek used equilibrium as an epistemological starting point. LK appears to think that Lachmann did not fall into these traps, in opposition to the paper above, but I am not sufficiently well versed in Lachmann’s work to comment.

It’s reasonably uncontroversial to note that elements of the neoclassical and Austrian schools have the same origins in Menger and Walras, and that the Austrians originally split from the neoclassicals to pursue a different path. However, it seems they took many of the important concepts with them when they left, and to me it’s clear that many of these remain today.

