Archive for category Economics
Sometimes it seems like economists’ pet principles are applied selectively, in such a way that they attack ideas generally endorsed by the left end of the political spectrum. This isn’t to say economists themselves are ideologically inclined toward any opinion; merely that key aspects of their framework, and the way they present those aspects, lend themselves to a more ‘right-friendly’ way of thinking.
In part, the issue is simply a disparity between how economists present issues to the public and how they speak to others in academia. Dani Rodrik noted this in his book The Globalization Paradox, where he describes a situation in which a reporter asks an economist whether free trade is beneficial:
We can be fairly certain about the kind of response [the reporter] will get: “Oh yes, free trade is a great idea,” the economist will immediately say, possibly adding: “And those who are opposed to it either do not understand the principle of comparative advantage, or they represent the selfish interests of certain lobbies (such as labor unions).”
Rodrik then contrasts this with how such a question would be answered in the classroom:
Let [the student] pose the same question to the instructor: Is free trade good? I doubt that the question will be answered as quickly and succinctly this time around. The professor is in fact likely to be stymied and confused by the question. “What do you mean by ‘good’?” she may ask. “Good for whom?… As we will see later in this course, in most of our models free trade makes some groups better off and others worse off… But under certain conditions, and assuming we can tax the beneficiaries and compensate the losers, freer trade has the potential to increase everyone’s well-being…”
This adherence to basic, market-friendly principles over nuance can often be found in ‘pop’ economics: for example, economist Paul Krugman does it in his book Peddling Prosperity. The book is intended as a survey of nonsensical ideas from both the left and the right, remedying both with a cold hard dose of facts, plus some basic economics. However, Krugman treats the left and right somewhat asymmetrically: with the right, he primarily opts for facts, whereas with the left, he uses economic principles.
This is quite possibly because the right’s arguments, though taken to an extreme, have economic principles on their side, while the left’s do not. The ‘supply side’ economics that Krugman takes issue with is really just an extreme statement of the well-known principle of deadweight loss, which suggests that taxes decrease output. If taxes reduce output by enough, it logically follows that not only output but overall revenues might fall if we raise taxes. Krugman does not question the principle itself; instead, he spends several chapters documenting evidence against the idea*.
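The supply-side logic can be made concrete with a toy model (my own illustration with invented numbers, not Krugman’s or anyone’s actual estimates): if taxable output shrinks as the tax rate rises, then revenue – the rate times the output – can eventually fall as rates rise.

```python
# Toy illustration of the 'supply side' logic (invented numbers, not an
# empirical claim): if higher tax rates shrink taxable output enough,
# revenue = rate * output can eventually fall as rates rise.

def output(rate, base=100.0, sensitivity=1.0):
    """Hypothetical taxable output that shrinks linearly with the tax rate."""
    return base * max(0.0, 1.0 - sensitivity * rate)

def revenue(rate):
    return rate * output(rate)

assert revenue(0.2) > revenue(0.1)  # at low rates, raising taxes raises revenue
assert revenue(0.7) < revenue(0.5)  # past the peak (0.5 here), it lowers it
```

The point of contention is empirical rather than logical: whether real economies actually sit anywhere near the falling part of this curve.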
Krugman then follows this up with a section berating the ‘strategic traders’, endorsed by Bill Clinton and others on the centre-left. Strategic trade suggested a role for government policy in promoting industry: various clustering effects, economies of scale and positive feedback loops could mean that an initial wave of government investment kick-starts an industry. As Krugman himself notes, such dynamic effects and ‘historical path dependence’ could render comparative advantage obsolete, since comparative advantage posits a more fundamental, innate reason a country produces a particular good, one that cannot be changed with policy (and one that may be more applicable to agriculture).
Yet, in contrast with his section aimed at refuting the right, Krugman offers scant evidence suggesting government intervention doesn’t work. Instead, he effectively restates the theory of comparative advantage, coupled with a typical story to illustrate it. This is despite explicitly suggesting it might not be applicable in the previous chapter. When pushed, Krugman is prepared to fall back on his pro-market principles, even in areas where he knows they may not apply.
William Easterly does something similar in his book The Elusive Quest for Growth. The book is a survey of various policies that have been touted as panaceas for development, such as education, investment and population control. (As you can see, economists really love writing their “I’m an economist, here’s how it is” manifestos.) Easterly finds every supposed development panacea wanting based on the available evidence, which is fine. However, occasionally he supplements his arguments with an excruciating example of ‘economic logic’ that always looks out of place.
For example, in the section on increasing availability of condoms, Easterly essentially makes the argument ‘how could people be lacking condoms? If they were, the free market would provide them!’ I am reminded of the joke about the economist who does not pick up a £10 note from the ground, because, if it were really there, somebody would already have picked it up. Easterly is a smart guy with a lot of concern for the poor, and I have a hard time believing he wouldn’t agree that a country might lack the institutions to deliver condoms, that people might lack the education to know why they’d need them, that it might conflict with their beliefs, etc. But the ease with which he can apply a pet economic principle is just too tempting, so he ignores these factors.
Another example is where Easterly asserts that population growth cannot be a problem, because “an additional person is a potential profit opportunity for a person that hires him or her” and as a result “the real wage will adjust until the demand for workers equals the supply.” It’s quite clear that things don’t function this smoothly in labour markets, even in developed countries; for theoretical reasons why, Easterly need look no further than John Maynard Keynes, and failing that, modern work on labour market frictions might prove sufficient. Again we see a neat but overly simplistic principle applied when even the economist applying it surely knows better.
So it is not uncommon for economists to prefer their more ‘free market’ principles over nuance when writing for a popular audience**. But is this problem limited to popular economics? Economists seem to think so; to them, the issue is primarily one of communication, and of knowing the limits of your models. This is fine as far as it goes. However, there are reasons to believe this bias extends into the murky depths of academia.
In my opinion, there is one major culprit of selective application in economics, and it is one that cannot be explained by economists simplifying their work for public consumption: the Lucas Critique. The Lucas Critique suggests that adjusting policy based on observed empirical relationships from the past will alter the conditions under which these observations were generated, hence rendering the relationship obsolete.
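The mechanics of the critique can be shown with a deliberately crude simulation (entirely my own construction, not Lucas’s model): suppose output responds only to policy surprises. A correlation estimated while policy is erratic then vanishes the moment policy becomes systematic and anticipated.

```python
# Toy Lucas Critique: a relationship estimated from past data breaks down
# once policy tries to exploit it, because expectations adjust.

import random

def outcome(policy, expected):
    # Only policy *surprises* move the outcome in this toy economy.
    return 2.0 * (policy - expected)

random.seed(0)

# Regime 1: erratic policy that agents cannot anticipate (expectations = 0).
xs = [random.uniform(-1, 1) for _ in range(1000)]
ys = [outcome(x, expected=0.0) for x in xs]
slope1 = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
# An economist observing this data estimates a stable slope of ~2.

# Regime 2: policy is fixed at a 'stimulative' level, so agents learn it.
ys2 = [outcome(0.8, expected=0.8) for _ in range(1000)]

assert abs(slope1 - 2.0) < 1e-6    # the estimated relationship...
assert all(y == 0.0 for y in ys2)  # ...vanishes once it is exploited
```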
Unfortunately, in practice, Lucas’ version of the critique seems to have been used to beat ‘Keynesians’ over the head, rather than being universally applied as a tool to further understanding. To illustrate this, here are some areas I think Lucas critique-style thinking could be applied, but hasn’t:
- Milton Friedman’s methodology. If a ‘black box’ theory fits past evidence well but we aren’t entirely sure its internal mechanics are accurate, there’s no reason to believe the fit will hold – or any way to know how the mechanics of the system will change – if we change policy.
- Nominal GDP Targeting (NGDPT). This hasn’t caught on much on the left (in my opinion, for primarily ideological reasons: it’s anti-Keynesian, and it partly absolves the private sector of responsibility for recessions). But it doesn’t seem to have occurred to proponents of NGDPT to ask whether the relationship between inflation, RGDP and NGDP will break down if we try to exploit it for policy purposes. This is despite the fact that we are talking about precisely the same variables as the Phillips Curve, the primary theory to which the Lucas Critique was initially applied.
- The supposed “deep parameters” of human behaviour on which Lucas suggests we construct economic models, such as technology and preferences. For a neoclassical economist, you are born with a set of preferences and you die with them, while in many models technology is a vaguely defined exogenous parameter. Yet a single example shows that both of these can change with policy: government investment, which is at the root of a large number of technological breakthroughs. These breakthroughs have often resulted in new products, creating preferences that otherwise wouldn’t have existed. A model with fixed, exogenous parameters for technology and preferences is therefore highly vulnerable to policy changes.
The fact that the critique hasn’t been applied to these examples leads me to believe it’s often only used to preserve existing economic theory. In fact, the critique itself is really just a narrow version of the more general principle of reflexivity, noted by many before Lucas. Reflexivity is an ever-present problem that implies an evolving relationship between policy and theory, not a principle that lets us fall back on economists’ preferred methods.
Is the Lucas Critique the only culprit? Well, I’ve found economists are generally critical of the assumptions and mechanics of heterodox models, despite appealing to Friedmanite arguments when questioned about their own. I’ve also found economists (okay, one economist) appeal to how businessmen really behave when defending their theories, despite not giving much credence to alternative theories based on the same principle, such as cost-plus pricing. So maybe economists need to air out their theories and principles a bit, rather than simply applying them where it suits them.
Economists’ simple stories often capture some truths, which is why they will defend them to the death. But too often this becomes a matter of protecting a core set of beliefs, and being unwilling to apply them in new ways or abandon them altogether. So economists end up deferring to their framework when it isn’t appropriate, or only interpreting it in their preferred way, particularly when they communicate their ideas to the public. The result can be that misleading conclusions about the economy remain prominent, even when economists’ own frameworks, interpreted completely, don’t necessarily imply them. Perhaps if economists were more willing to open up their theories, which can sometimes feel like something of a black box, these misinterpretations would be exposed.
**In fairness to Krugman and Easterly, these books were written a while ago, and I’m sure they have updated their positions since then. I only wish to show that economists use this tactic, not that any one economist endorses any particular position.
“Thinking like an economist” is one of those phrases you’ll see on the pages of every book released during the initial attack wave of pop economics books starting around 2006. In fact, the authors of such books set out with the explicit aim of educating the average person about the basics of economics: demand and supply, comparative advantage, opportunity cost, cost-benefit analysis, externalities, and of course the most beloved mantras: ‘people respond to incentives‘ and ‘there’s no such thing as a free lunch‘.
The typical economist’s mindset is that of a logical, dispassionate (though not necessarily uncaring) analyst who weighs up situations and policies using basic principles, bearing in mind that there are always trade-offs and no perfect solutions. Economists usually weigh things up with efficiency in mind, treating equity as an important goal that is often opposed to efficiency, and one that should probably be considered separately. (This stems from Kaldor-Hicks efficiency, which suggests that potentially Pareto-improving policies can be combined with redistribution to produce the best possible outcome in terms of both efficiency and equity. Sadly, in practice this means economists sometimes just advocate the former, with the proviso that the latter could happen, without worrying as much as they should about whether the redistribution actually does happen.)
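The Kaldor-Hicks compensation test described above amounts to a simple inequality. A minimal sketch, with invented numbers:

```python
# Kaldor-Hicks in miniature: a policy passes the test if winners gain enough
# to compensate losers in principle, whether or not they ever actually do.

def kaldor_hicks_improvement(gains, losses):
    """True if total gains exceed total losses, so compensation is *possible*."""
    return sum(gains) > sum(losses)

# Winners gain 100, losers lose 60: deemed 'efficient' whether or not the
# 40-unit surplus is ever redistributed - which is exactly the worry.
assert kaldor_hicks_improvement([100], [60]) is True
assert kaldor_hicks_improvement([50], [60]) is False
```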
There are obviously areas where the economist’s toolkit applies. Cost-benefit analyses are appropriate for business plans and plans in other organisations. Opportunity cost is relevant when keeping the weekly shop within a budget: if we buy the biscuits, we won’t have enough for the cereal bars, and so on. The economic way of thinking also has unexpected applications: for example, economists have done commendable work in the field of organ donation.
However, problems with the ‘economic way of thinking’ arise under certain circumstances, commonly when actions have outcomes that are fundamentally unknown, or are incommensurable. What is the opportunity cost of me writing this blog post? Well, I could be writing a different blog post, but I have no idea which one my readers would prefer. That’s assuming I evaluate blogging solely in terms of one metric, like page views, which obviously isn’t true. Alternatively, I could be reading a book; perhaps I’d get an idea for a better post from that, so over the long term reading would be more fruitful. I could also be sleeping, cooking, at the pub, or any number of things, but weighing up the various trade-offs and benefits of these actions ‘like an economist’ is simply not possible.
I believe there are ample examples of economists extending their toolkit beyond where it is appropriate. I will note that good economists realise the limits of their approach, and would probably not endorse the (sometimes absurd) instances of ‘economic imperialism’ I am about to present:
Political science. Economists extended their toolkit to political science with public choice theory, which supposes that politicians and voters are rational self-maximisers who act to further their own interests, be they power, prestige, financial gain or what have you. This found its reductio in Bryan Caplan, who suggested that voters are rationally ignorant of politics because the costs outweigh the benefits, and so economists (who are obviously right about everything) should dictate public policy. You know, like in Chile.
Fortunately, this theory is wrong. Research, the best of it coming from Leif Lewin, has found that politicians and voters act in what they perceive to be the general interest, not narrow self-interest. People vote and act out of a sense of obligation and citizenship, not because of any cost-benefit analysis. Public servants are generally public-spirited and less motivated by money than those in the private sector. While special interest groups are a problem, economists who want to analyse this further are better off turning to political scientists, who have long known everything I’ve just outlined.
The environment. Some of the economist’s basic tools are easily shown to be absurd when applied to environmental analysis. It is not possible to place a monetary value – economists’ go-to unit – on most environmental variables. How do we compare the ‘value’ of a lake with the economic costs of a carbon tax? Is there some level of carbon tax at which we would forego every lake on earth rather than apply it? How do we compare, say, the depletion of coal with a rise in sea level? These things have many different metrics by which they can be judged. The financial metrics used by economists are surely among them, but they are only a small part of the picture.
Another problem arises when looking at possible future environmental outcomes, as probabilities are fundamentally unknowable. Some try to approach the issue of global warming and environmental catastrophe by weighing up probabilities and doing cost-benefit analyses. But how do we propose to calculate the probability of environmental disaster? We don’t have a set of earths we can ‘run’ to evaluate how often catastrophe occurs; climate models display chaotic behaviour that is highly dependent on the accuracy of initial conditions. The fact is that we simply don’t know how likely disaster is, what its impacts will be, and framing it in such a way is deeply misleading. Furthermore, even if the probabilities were known, what matters is not just the weighted relative costs and benefits, but the potential for absolute disaster. If there is a 1% chance the world will end unless we do x, we shouldn’t do a cost-benefit analysis. Instead, assuming x is feasible, we should simply do it.
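The distinction being drawn here – expected-value reasoning versus avoiding ruin – can be put as two toy decision rules (my own schematic, with arbitrary numbers):

```python
# Two toy decision rules (a schematic, not a formal treatment): standard
# expected-value cost-benefit analysis versus a rule that refuses to gamble
# on irreversible catastrophe. All numbers are arbitrary.

def expected_value_verdict(p_disaster, disaster_cost, prevention_cost):
    """Prevent only if the probability-weighted loss exceeds the cost."""
    return p_disaster * disaster_cost > prevention_cost

def ruin_averse_verdict(p_disaster, irreversible, prevention_feasible):
    """If the downside is irreversible and prevention is feasible, prevent."""
    return prevention_feasible and irreversible and p_disaster > 0

# A 1% chance of disaster can look 'not worth preventing' on expected value...
assert expected_value_verdict(0.01, 1000, 50) is False
# ...but a ruin-averse rule prevents regardless of the shaky point estimate.
assert ruin_averse_verdict(0.01, irreversible=True, prevention_feasible=True) is True
```

Under the first rule the verdict swings on an unknowable probability estimate; under the second it doesn’t.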
The law. As Yves Smith details in ECONNED (pp. 124-126), Chicago School economists managed to persuade first legal theorists, and then those involved in the legal system itself, of the efficacy of their way of thinking, eventually forming the ‘law and economics’ school. Since this was Chicago, it will not surprise you to learn that this approach largely consisted of a focus on efficiency over, say, due process, promoted deregulation, and rejected notions of corporate social responsibility. Nor should it surprise you that the movement had a large degree of – ahem – ‘support’ from various moneyed interests.
Theoretically, I find the corporate social responsibility position to be incoherent. Empirically, it’s obvious that the framework economists had a substantial part in setting up has failed. Fraud has risen; the changes in anti-trust have not had the benefits that economists predicted; we had a financial crisis in 2008 as a result of the regulatory framework put in place. Note that this isn’t an ideological point: you can think that the regulation was too loose, too tight, or simply wrongly formulated. But in general, defending the exact thinking and framework that led to the crisis is absurd.
Economists take pride in the seeming versatility and simplicity of their framework, and they are eager to apply it to other social sciences. That economists’ conclusions are, to quote Keynes, “austere and often unpalatable, len[ds them] virtue”, especially when contrasted with less mathematically certain social sciences, such as sociology. But oftentimes economists act to displace existing theories without really considering the existing viewpoint. And oftentimes that existing viewpoint has more to it than economists, trained as they are to see things a certain way, might perceive. Hence, economists should always be careful when venturing onto new intellectual turf; otherwise they risk missing vital insights long known to others, insights to which their framework blinds them.
Now, I suppose, is as appropriate a time as any to discuss the policies generally known as neoliberalism or free market economics – tax and spending cuts, union busting, deregulation, privatisation and free trade – and how they have fared in practice. Unsurprisingly, those on the right defend neoliberalism’s record. However, its successes have been exaggerated, and in cases of clear success, a closer look reveals policies that are anything but ‘neoliberal’. I’ll take a brief look at some countries or sets of countries which are commonly purported to show the success of these policies: the US & UK, Chile, Hong Kong & Singapore, and Scandinavia. I believe that in none of these instances do we get a clear example of neoliberal policies succeeding economically.
The US and UK had similar narratives during their transitions to neoliberal policies. After a period of stagflation, a ‘strong’ politician (Ronald Reagan and Margaret Thatcher, respectively) rose to power, willing to enact drastic reforms. The narrative here can be exaggerated – pro-market reforms (e.g. deregulation under Carter) and economy-wide trends (the decline of manufacturing) preceded these two governments. Nevertheless, utilities were privatised, unions were weakened, direct taxes (mostly top tax rates and corporation taxes) were slashed, and various regulations were either cut down or replaced with a more ‘neoliberal’ model. Obviously some ‘free market’ purists will always claim it was not enough, but it was a substantial move in the neoliberal direction, and as such we should have seen clear benefits.
Economic growth under these two governments was decidedly average. If we measure from peak to peak in the business cycle to average out fluctuations, per capita growth under Thatcher comes out at 2.44% (1978-88), while Reagan comes out at 2.3% (1979-90). If we just measure the years they were in office, the respective figures are 2.05% and 2.77%. Whichever way you paint it, growth was not far from its 2.5% trend.
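For readers who want to check figures like these themselves, the peak-to-peak numbers are just compound annual growth rates between two business-cycle peaks. A quick sketch of the arithmetic, with an invented GDP series rather than the actual data:

```python
# The arithmetic behind peak-to-peak figures: compound annual growth between
# per-capita GDP at two business-cycle peaks. The GDP values below are made
# up purely to show the calculation, not actual UK or US data.

def annualised_growth(gdp_start, gdp_end, years):
    """Compound annual growth rate between two points, as a percentage."""
    return ((gdp_end / gdp_start) ** (1 / years) - 1) * 100

# e.g. hypothetical per-capita GDP rising from 100 to 127.2 over ten years:
rate = annualised_growth(100.0, 127.2, 10)
assert 2.4 < rate < 2.5   # close to the ~2.4% figure quoted above
```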
In fact, in both countries the ups and downs of the economy surely had more to do with monetary policy than anything else. Interest rates went as high as 17% in the UK and 19% in the US; by around 1983 they had more than halved, dropping to about 8%, and following this GDP started to recover. As far as policy goes, the conventional story that neoliberal policies rescued their respective countries is a half-truth at best. Thatcher benefited from an oil boom which helped her fund her various preferred programs (including the Falklands War, which helped buy off discontent). Reagan’s policy of cutting taxes while increasing military spending during a recession was effectively Keynesianism. Ultimately, there is little evidence that the headline reforms were responsible for the overall performance of the economy in either country.
Singapore & Hong Kong
These two countries have certainly had impressive performances over the past few decades, overtaking most developed countries in GDP per capita. For this reason, they are often touted as free market success stories. This is misleading in a couple of ways.
The narrative about the success of any policy in Singapore and Hong Kong is complicated by the fact that they have some obvious advantages over everywhere else, no matter their policies (within reason). First, they are port cities, which means that unless there are serious political problems, they will be a conduit for a large degree of trade no matter their economic policies. Second, they are city states, which reduces administrative and transaction costs, both in the public and private sectors. Third, Hong Kong does not have to fund a military due to protection from China, which helps to explain its low tax rates.
In any case, the two countries are anything but a paragon of the ‘free market’ in action. In Hong Kong, the government owns all of the land. In Singapore, the government owns about 60% of the land, heavily regulating its usage, while government-linked corporations produce up to 60% of GDP. Both countries have public health care, transportation and education, public housing programs and safety nets, and Singapore owns public utilities while Hong Kong regulates them tightly.
Clearly, whatever the success of these countries is caused by, it is not simply ‘free markets’.
The story usually painted about Chile is that it went from a poor country to one of the richest in Latin America after ‘free market’ reforms were put in place by the dictator Augusto Pinochet following the 1973 coup d’état. What actually happened (from a policy perspective) was much more of a mixed bag, combining neoliberal programs with long-standing state-directed ones.
Key industries remained either directly in the hands of the state (such as copper and oil) or in receipt of subsidies, advice, management and training through the government organisation CORFO (such as forestry and fishing). These state-directed industries experienced massive growth and fueled an export boom, which drove the economy for decades to come. It is true that some industries, such as banking, were privatised and deregulated, but this was far from a success: it produced a financial bubble, which collapsed in 1982, reducing GDP by 14%, back down to where it had been in 1970. Only 5 of the 19 banks that had been privatised survived, (reluctantly) bailed out by the government, which also had to reinstate capital controls and other interventions. Furthermore, once democracy was reinstated in the 1990s, governments moved leftwards and embarked on significant, successful poverty reduction programs.
This is clearly at odds with the idea of Chile as a free market success story. In fact, I’d go so far as to say that in the case of Chile, success was clearly concentrated in areas with obvious state intervention, while failures were concentrated in those without.
Scandinavian countries are a byword for economic success, faring well in GDP per capita and even better on overall standard of living indexes. So it is no surprise that both sides of the debate claim them as their own. The claim is more perplexing when coming from the right, however, since it requires them to effectively argue that countries which are clearly social democracies are not social democracies. It is generally asserted that beneath the high tax rates, these countries are ‘economically free’, which roughly translates as lightly regulated. So are they?
Disregarding such nonsensical indexes as Heritage and heading for the more credible OECD, we can see that Scandinavian countries have average to low strength regulatory frameworks by the standards of developed countries:
In case you were wondering, there is no clear correlation between this index and GDP growth.
While, with the exception of Sweden, the Scandinavian countries have below average regulation indexes, if this were causing their success then surely the US, UK and Spain would be doing well, too? Perhaps low regulation must be combined with a strong safety net and public services to work. More likely, the Scandinavian countries are unique and have specific institutions that cannot necessarily be emulated elsewhere, something I’ve argued before.
In fact, that last point is true of every country. The path to development and sustained growth is different for each, and the recipe for growth cannot be captured in vague platitudes about a ‘free market’, completely devoid of context. I expect that there exist countries where neoliberal reforms are appropriate, but these are far outweighed by ones where they are not. The people best suited to decide which reforms are appropriate are those who live in and understand the country, not outsiders with a one-size-fits-all model that they see as a neutral template. This was clear even in Chile, where the national military were reluctant to abandon the state-driven model on which they had always relied.
I expect those who support neoliberalism might look at this article and conclude that countries would do even better if only those last pesky statist policies were removed. But this is a superficial perspective. Why were the state-supported industries much more successful than the privatised ones in Chile? Why do Scandinavian countries do well with high tax rates and big welfare states, when many countries with similar strength regulatory frameworks and smaller welfare states do much worse? Why does every purported ‘free market’ success story collapse under close inspection, and why are there no clear real world examples of the ideal being implemented and working? Until I can see such a case I will remain unconvinced of the virtues of the elusive free market.
It would be silly to suggest that all of neoclassical economics is simply ‘wrong.’ I happen to think much of it is, sure, but some is right, and some may be merely incomplete. However, there is another possibility, one I want to focus on in this post: some neoclassical theories are sound only if one defines for them a clear domain. In mathematics, the domain refers to the set of inputs one can feed into a function and get sensible answers. Similar rules apply to many scientific theories: the perfect gas model is not appropriate for steam; Newton’s Laws do not apply at very large or very small scales; and so on.
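The mathematical analogy is the familiar one: a function only yields sensible answers on its domain, and fails outside it. In Python, for instance:

```python
import math

# sqrt over the reals is only defined for non-negative inputs.
assert math.sqrt(4.0) == 2.0   # inside the domain: a sensible answer

outside_domain = False
try:
    math.sqrt(-1.0)            # outside the domain: no real answer exists
except ValueError:
    outside_domain = True

assert outside_domain
```

The claim in what follows is that some economic theories similarly need their domain stated explicitly.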
Economists do already use domains, to a limited extent. This is mostly in theories of the firm, of which there are several depending on the number of firms in an industry, ranging from perfect competition through oligopoly and duopoly to monopoly. These theories are only supposed to hold in industries with the appropriate number of firms. However, even here there are few criteria for distinguishing when a firm will behave, for example, Cournot-y (varying quantity only) or Bertrand-y (varying price only), and hence which of these models is appropriate. So economists might still have a hard time knowing when to use which theory. DSGE has similar problems.
One theory I’ve been thinking might be more sound if restricted to a specific domain – agriculture – is marginalist economics: specifically, the much-maligned perfectly competitive theory of the firm. It is perhaps no coincidence that economists are rather keen on using examples from agriculture in their parables about marginalist concepts: it’s the area where their analysis is most appropriate. There are a few reasons to believe this:
(1) Agriculture, for the most part, has perfectly divisible inputs and outputs. Perfect divisibility is a core assumption of basic producer (and consumer) theory, one which is blatantly unrealistic in most cases. However, it may be realistic in agriculture. Food and fertiliser are literally perfectly divisible, as they can always be cut down to smaller quantities – certainly at any level relevant for production. Livestock are not perfectly divisible when alive, but they are generally farmed in quantities large enough that output can be continually adjusted, so perfect divisibility is at least a good approximation. Tractors, ploughs and so on are examples of indivisibilities, but they are not purchased often and can be thought of as the exception to the rule, covered under ‘fixed capital.’
(2) Diminishing marginal returns. Agriculture is one of the few areas where we observe rising costs as output rises. This is partly because a major factor of production – land – is fixed. A fixed factor is a standard assumption in the short-run neoclassical theory of the firm; with land, it is also true in the long run, though some improvements in productivity can be made over time through the aforementioned fixed capital.
(3) Perfect competition. Nothing better resembles the atomistic neoclassical ideal than many farmers competing on a single market with homogeneous goods like wheat, not having any discernible effect on price. With certain foods, some product differentiation (through quality) might be observed but even this would be captured by the theory of imperfect competition. Overall, a farmer is less likely to have discretion over the price of what they sell than, say, a retail store, or a lawyer.
(4) Lack of clustering or ‘QWERTY‘ effects. It is easily observed that firms in particular industries tend to cluster together geographically. Manufacturing requires a continuous stream of inputs, so firms at different stages in the supply chain will group together to minimise transaction costs. Manufacturing often – though not always, to be sure – requires workers with a particular set of skills, so employees and employers who best match together will tend to converge. Services, by their nature, require face-to-face interaction, as well as even more specialised skills, so they too will group together. In both cases the easy transfer of knowledge around clusters also helps significantly. Clusters become self-reinforcing: you set up shop in a cluster because everyone else in your industry is there. QWERTY effects create emergent properties that may suggest a role for government intervention.
However, agriculture, in most cases, does not exhibit QWERTY-like characteristics. First, agriculture requires large expanses of land so it is difficult to create ‘clusters.’ Second, most agricultural labour is not particularly specialised. Third, agriculture also follows an obvious harvesting cycle, so rather than a continuous stream of inputs, there are intermittent large purchases of supplies, making transportation costs less of a systemic issue. Fourth, agriculture does not really rely on information about new trends, management, techniques or what have you; it has followed similar techniques for centuries.
The reader might note that I’ve primarily been referring to extensive agriculture, rather than intensive agriculture – market gardens and so forth. Intensive agriculture does share some characteristics with extensive farming: it produces the same type of goods, for a start, so much of the above still applies. Nevertheless, its use of technology and organisation is greater than extensive farming’s, and market gardens generally take up a smaller area, which suggests that the perfectly competitive model may not be appropriate. Modern market farming might be thought of as a way to ‘capitalist-ise’ agriculture, hence rendering the perfectly competitive theory inappropriate.
So what are the implications of this, for extensive farming at least? Seemingly, our conclusions will align with those of basic economic theory. Price controls and subsidies are not advised, whether under normal circumstances or in the name of long-term policy goals; a monopoly would probably not be the result of innovation, would be unlikely to be superseded by new technology, and so would be unambiguously bad.
Most of all, economists will be pleased to hear that their favourite theory, comparative advantage, is more directly applicable in the world of agriculture. This is for two main reasons. First, the most commonly used rationale for why a country might have ‘comparative advantage’ – resource endowments – is obviously applicable in agriculture: nobody questions why the UK doesn’t try to create a cocoa industry, or why New Jersey doesn’t grow as much wheat as Iowa. Fertility of soil and climate are determined by powers mostly beyond humanity’s control, and we must specialise accordingly. Second, unlike in manufacturing, short-term losses from trade will not strengthen an industry to the point where it is more efficient in the long term.
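The arithmetic behind comparative advantage is easy to make concrete. Here is a minimal sketch, with yields per acre that are entirely invented for illustration: even if Iowa is absolutely better at growing both crops, both regions gain when each leans towards the crop it is relatively better at.

```python
# Hypothetical yields per acre -- illustrative numbers, not data.
YIELDS = {
    "Iowa":       {"wheat": 40, "tomatoes": 10},
    "New Jersey": {"wheat": 10, "tomatoes": 8},
}

def output(allocation):
    """Total output of each crop, given acres assigned per region and crop."""
    totals = {"wheat": 0, "tomatoes": 0}
    for region, acres in allocation.items():
        for crop, a in acres.items():
            totals[crop] += YIELDS[region][crop] * a
    return totals

# Autarky: each region splits its 100 acres evenly between crops.
autarky = output({
    "Iowa":       {"wheat": 50, "tomatoes": 50},
    "New Jersey": {"wheat": 50, "tomatoes": 50},
})

# Specialisation by comparative advantage: Iowa's opportunity cost of
# wheat (10/40 = 0.25 tomatoes) is lower than New Jersey's (8/10 = 0.8),
# so Iowa leans towards wheat and New Jersey grows only tomatoes.
trade = output({
    "Iowa":       {"wheat": 75, "tomatoes": 25},
    "New Jersey": {"wheat": 0,  "tomatoes": 100},
})

print(autarky)  # {'wheat': 2500, 'tomatoes': 900}
print(trade)    # {'wheat': 3000, 'tomatoes': 1050} -- more of both crops
```

The point of the sketch is only that specialisation raises total output of both goods; as Rodrik’s classroom economist would hasten to add, it says nothing about how those gains are distributed.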
This is basically a ‘market knows best’ mantra that may not sit well with my regular readers. To be sure, there will still be exceptions where governments might intervene: environmental concerns; ensuring national self-sufficiency; emergencies; basic standards. Nevertheless, the disaster that is the CAP, with absurdities such as food mountains and paying farmers not to use their fields, as well as the effect it has on farmers in poor countries, seems to illustrate that if economists’ favourite creeds hold anywhere, it’s in agriculture.
Model-wise, there will still be issues with perfect competition even in agriculture, where it is at its most relevant. I fully expect that superior, more comprehensive theories than that of the perfectly competitive firm can be (and have been) developed for agriculture. Nevertheless, insofar as perfect competition might apply to anything at all, it seems most suited here. It would at least be a start for economists to admit certain theories have only limited application, instead of extrapolating highly restrictive models onto situations where they don’t apply.
Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.
To illustrate this, consider Milton Friedman’s famous exposition of the ‘as if’ argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game using such equations, even though this wouldn’t strictly describe the mechanics of the game.
There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what make the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for the social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness, making perfectly fungible trade-offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what makes the study of economics worthwhile.
I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to re-optimise their expenditure after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from an economic theory. That example alone completely invalidates Ricardian Equivalence. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.
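Krugman’s point can be sketched numerically. Under strict Ricardian Equivalence, households save the whole of a debt-financed tax cut in anticipation of future taxes, so aggregate demand does not move at all. The toy model below (every number in it is invented for illustration) adds a single small friction: households ignore windfalls below some ‘not worth the effort’ threshold and simply spend them.

```python
# Toy illustration (hypothetical numbers throughout): one tiny
# behavioural friction breaks the Ricardian Equivalence prediction
# of a zero aggregate demand response to a debt-financed tax cut.

N_HOUSEHOLDS = 1_000_000
TAX_CUT = 300.0    # per-household tax cut
THRESHOLD = 500.0  # windfalls below this aren't worth re-optimising over

def spending_response(tax_cut, threshold):
    """Extra spending per household, given the friction."""
    if tax_cut < threshold:
        return tax_cut  # too small to bother optimising: just spend it
    return 0.0          # fully rational: save it against future taxes

# Strict Ricardian Equivalence predicts zero extra aggregate demand...
ricardian_prediction = 0.0
# ...but with the friction, the aggregate response is enormous.
actual = N_HOUSEHOLDS * spending_response(TAX_CUT, THRESHOLD)

print(actual - ricardian_prediction)  # 300000000.0 -- a 300m demand injection
```

A deviation that is individually trivial – a few hundred per household – aggregates into something that flatly contradicts the theory’s headline prediction.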
If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Ones such as how the banking system works; how firms price; behavioural quirks; the fact that marginal products cannot be well-defined; the fact that capital can move across borders, and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)
I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is done only one assumption at a time, preserving the others. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.
I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to emergent properties of individually optimal behaviour. In any case, economists should be wary of the ‘as if’ argument even without accepting my crazy heterodox position.
The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Resting on an ‘as if’ counterfactual is only justified when we don’t know how the system actually works. Once we do know how the system works – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.
This is a compilation of my objections to the main arguments of right-libertarians (or propertarians) done as an FAQ (based on the fact that my FAQ for economists was pretty popular). I hope here to persuade libertarians that things are more complicated than their framework, neat as it is, implies. Whether it will succeed is another question.
Writing these arguments revealed an interesting recurrence: once the libertarian framework is picked apart, the debate collapses back to where it’s always been. The various binary distinctions libertarians make (voluntary/coercive, government/market, positive/negative liberty) fall apart upon critical inspection, and we then have to take things on a case-by-case basis in the fuzzy world of morality, trade-offs and so forth. It strikes me that the libertarian framework tries to provide easy answers that sidestep this debate.
What do you have against liberty? Why do you statists always try to rationalise ways to control our lives?
Slow down! If everyone who criticises you is automatically the bad guy, that doesn’t leave much room for productive debate, does it? For what it’s worth, I’d characterise libertarians as those who are so skeptical of the state that they think it should only protect the most powerful, but that’s no reason to dismiss them as the bad guys before we’ve even started. But more on that later – for now, just try not to assume I am Stalin reincarnated.
But libertarianism is about liberty. What justification do you have for infringing on liberty?
Again, this attitude leaves open the actual question of whether libertarianism really does improve individual liberty. Libertarians generally distinguish between positive and negative liberty, where positive liberty is the freedom to command resources to realise certain ends, while negative liberty is the extent to which one is (or isn’t) constrained by other moral actors. Since a low degree of positive liberty is, unfortunately, imposed by nature, the only thing humans as moral actors can do is ensure we don’t restrict people’s negative liberty.
However, this distinction is functionally meaningless. A starving man at a shop cannot take food because he will be arrested or at least kicked out – he is constrained by another moral actor. The libertarian might reply that property rights helped create that resource, so the starving man is no worse off than he would have been without property rights. My first response to this is “so what?” It doesn’t change the functional relationship between the starving man and the food, and it raises the question of whether we can harness the resource-creating power of property rights to create more just outcomes. Or just let the guy have some food through redistribution.
Taxes are theft! Why do you think you can steal from people?
First, it would be easy to turn the question of wealth creation raised in the last section around on libertarians and ask exactly how the government can be said to ‘steal’ resources that its own actions created. Most innovation has its roots in government research and development, and many of the institutions upon which capitalism is built are state-backed. These are the facts; going into unverifiable counterfactuals about how things would be better with ‘less’ government is just speculation. The moral question of whether government should ‘intervene’ is undermined by the fact that it already has.
Even more importantly, institutions strongly influence the pretax income distribution. The enforcement of property rights, contracts and the prevention of force, fraud and theft does not avoid significant political decisions. For example, implied contracts are an incredibly tricky area of law; so are intellectual and environmental property rights, where the nature of the property itself raises difficult questions. Ownership of some things (votes, people, identities) is generally prohibited, as are certain contracts (slavery, murder-suicide pacts, anything entered into by children/the mentally ill). All of these decisions, and many more like them, will involve value judgments, historical path dependence, and sometimes arbitrary decisions. And they will all influence patterns of production, distribution and exchange. There is no neutral ‘baseline’ distribution, and there is no way of keeping politics out of distribution. A similar argument can be made about individual choice.
But if distribution results from voluntary actions, then what is the problem?
Obviously, even if decisions are voluntary, they will be influenced by the types of political decisions outlined above. But even beyond that, there are two problems with the ‘voluntarist’ perspective.
The first is the binary distinction between ‘voluntary’ and ‘coerced’ action, which leads to a lot of problems. Using it, I could argue that nobody in the developed world is really ‘forced’ to obey the law, because they could move country. Obviously it would be silly to say this: one can’t expect people to uproot themselves from their family, friends, location and career, so functionally people do not have much choice about obeying laws. Another example of the limitations of the libertarian line of argument is that one could use it to frame the decision not to obey the law as a ‘voluntary trade off’ between, say, prison and the alternative.
A better way to think of the distinction between voluntary and involuntary action is as a spectrum. We might consider the degree to which someone’s action is voluntary as a question of how much it is influenced by factors outside the persons/objects involved in the immediate decision. Under such criteria, few actions can be considered truly ‘voluntary’; there are always outside influences on decisions, however small or large. At the less significant end of the spectrum we might have travel costs; we might then go through peer pressure, then, for workers, the threat of poverty. We would end up at something like the threat of being killed or tortured. The extent to which actions are voluntary must be considered on a case-by-case basis; we cannot just make a binary distinction and apply one-size-fits-all policies on that basis.
The second problem with voluntarism is the Nozickean justice principle most libertarians implicitly or explicitly respect. This is based on the idea that if voluntary actions led to a situation, that situation must be just. This problem is perhaps best illustrated within one of Robert Nozick’s own thought experiments: the Wilt Chamberlain example (as it goes, this is also a situation where one could accurately describe the agent’s behaviour as purely voluntary). Nozick suggests that if everybody at a basketball game volunteered to pay Wilt Chamberlain a small amount of money, the end result would be a vastly unequal income distribution, but since everybody had donated ‘voluntarily,’ there would be no problem regarding the justness of the outcome.
But while it is true that everybody at the basketball game volunteered to donate their own money, it is not true that they agreed to anyone else donating money, and it is certainly not true that they all agreed to everyone collectively donating a fortune. The principle is actually based on a subtle switch from individually voluntary choices to collectively voluntary ones, one which doesn’t hold up to scrutiny. The libertarian may reply that the choices of others are none of my/others’/the state’s business. But if the inequality has pernicious effects (which is a separate issue) then it is very much everyone’s business. Since the voluntarist principle cannot be applied collectively, we are back to discussing the effects of inequality. This disparity between individual choices and collective outcomes is the reason we have voting, political movements and so forth to help reconcile the two.
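The gap between individually trivial choices and a collectively large outcome is easy to quantify. The sketch below uses roughly Nozick’s own numbers (a million fans each paying 25 cents) with invented, perfectly equal starting incomes, and computes the distribution’s Gini coefficient before and after the donations.

```python
# Illustrative sketch of the Wilt Chamberlain example. The fan count
# and donation roughly follow Nozick; the starting incomes are invented.

def gini(incomes):
    """Gini coefficient of a list of incomes (0 = perfect equality)."""
    xs = sorted(incomes)
    n = len(xs)
    total = sum(xs)
    # Standard formula based on the rank-weighted sum of sorted incomes.
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

N_FANS = 1_000_000
DONATION = 0.25
START = 1_000.0  # everyone starts with the same (invented) income

before = [START] * (N_FANS + 1)  # fans plus Chamberlain
after = [START - DONATION] * N_FANS + [START + N_FANS * DONATION]

print(f"Chamberlain receives ${N_FANS * DONATION:,.0f}")  # $250,000
print(f"Gini before: {gini(before):.6f}")  # 0 -- perfect equality
print(f"Gini after:  {gini(after):.6f}")   # strictly positive
```

Nothing here settles whether the resulting distribution is unjust; the sketch only makes precise the switch the argument relies on, from transfers that are negligible individually to a large collective change in the distribution.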
Politics? Don’t you know any public choice theory? Democracy is a sham!
Well, modern democracy is probably a sham. But overall, public choice theory is simply refuted by the evidence, something that people do not note nearly often enough. Political scientists have long known – and empirically confirmed – that voters and politicians mostly act in what they perceive to be the public interest, rather than for selfish gains. This isn’t to say that there is no truth to public choice theory, but the evidence suggests it is more appropriate to model politicians and voters as public servants who are buffeted by special interests than as selfish maximisers who occasionally stumble upon a beneficial policy. The result is that democracy is a far more effective tool for translating collective interests into policy than libertarians might suggest.
But government action, democratic or not, rests on the initiation of force. When is that ever justified?
The special status libertarians accord to ‘force’ falls apart even on its own terms. For the fact is that most laws are not actually enforced by force, but by credible threat of force. These are, by definition, two different things. I know that if I try to go into a night club without permission, the bouncers will stop me or drag me out. This isn’t the same experience, and doesn’t have the same moral implications, as them actually dragging me out when I do run in. This relationship between the individual and the law can also be applied to laws libertarians approve of: to argue that credible threat of force is the same as force is to argue that people are constantly the object of coercion due to what they can and can’t do because of others’ property rights. Overall, the reduction of all laws to someone forcing you to do things at gunpoint is a stretch to say the least.
Regardless of force, governments cannot know better than individuals/the market. So why should they intervene?
The framing of governments versus markets is largely a false dichotomy. I have already noted the inevitable political decisions that go with even what libertarians consider their baseline institutions. Beyond this, there are laws such as immigration, limited liability, laws that define shares and protect shareholders, laws that define companies, and so forth. These so-called ‘interventions’ do not require a government to ‘know better’ than any one individual; they were defined to have a systemic impact that cannot be enforced by any individual or group of individuals. Furthermore, the question of where we draw the line between ‘intervention’ and ‘the market’ is up for debate. Or it doesn’t really exist.
Even if the government backs the institutions required for markets, it sucks wealth out of the economy to do this. Hence, it should do as little as possible, right?
Saying ‘governments can’t create wealth’ is a sweeping, largely vacuous statement based on a superficial zero-sum view of taxation as being ‘extracted’ from the private sector. In fact, taxation is just one prong of a symbiotic relationship that exists between the private and public sectors. If we take the definition of wealth as the creation of valuable resources, it’s clear that, say, teaching and infrastructure ‘create wealth.’ We’ve already seen just how large a source of wealth the government can be through its funding of research and development. Furthermore, many state-backed institutions are historically a prerequisite for substantial wealth creation to take place at all. Again, obscure, selectively interpreted examples like Medieval Iceland, or speculative counterfactuals about what things would be like without the government, are ahistorical wishful thinking. Give me a clear example of capitalism as we know it coming out of nowhere and I’ll give you the time of day.
That reminds me – you seem to be primarily referencing minarchist libertarians. What about anarcho-capitalism?
Anarcho-capitalists, as far as I’m aware, have yet to answer exactly what a landowner is if not a de facto state. A state is defined over a particular territory, and (theoretically) has control over what happens in that territory. Ownership is also defined as having control over an object; in the case of land, this quite clearly leads to each landowner effectively being a sovereign state, however small. People do not have a ‘choice’ of whether they exist on land, and nobody created land, so there is no justification for those with ‘the biggest gun’ controlling it, while those without land are at their whims.
The extremely unsatisfactory response that, for some reason, everyone would respect the libertarian ideal and not engage in force, fraud and theft is really just wishful thinking. I can’t help but wonder what libertarians would say if a socialist made a similar argument about people suddenly becoming angels under socialism. Similarly, any response that centered on how landowners would be competitively inclined to do Good Things could equally be applied to states, so would be an exercise in special pleading.
OK, maybe you’re not Stalin. Do you have anything else worthwhile to say?
Probably not, but just in case, here are some more of my posts on libertarianism:
See here for more on the flawed positive/negative liberty distinction.
See here for a discussion of the problems with seeing ‘government’ as a homogeneous, all-encompassing entity.
See here for my criticism of libertarians’ perceptions of individual choice.
See here for a more detailed discussion of the faulty government/market dichotomy.
I have only really just started studying Marxism in depth (though I am stopping short of Capital for now). While reading Bertell Ollman‘s Alienation: Marx’s Conception of Man in a Capitalist Society, it once again struck me that (right-)libertarianism is really just lazy Marxism. In many ways libertarianism reads like the first third of Marxism: the area which explores methodological questions and the nature of man. Both libertarianism and Marxism are generally fairly agreeable – and in agreement – in this area, but the former never really fleshes out its arguments satisfactorily. Often I find libertarians, after describing some basic principles (non-coercion etc.), make the jump to property rights and capitalism being the bestest thing ever, without fully explaining it.*
I will focus primarily on Robert Nozick and Ludwig von Mises here, as they are the only two libertarians who really explored libertarianism from basic principles of man and his relationship to both nature and economic activity (Murray Rothbard’s work was really an interpretation of Mises in this respect). Overall, I think Nozick and Mises combine to form a fair reflection of minarchist libertarianism.
The state of nature and the nature of man
In Anarchy, State & Utopia, Robert Nozick’s ‘State of Nature’ is one where there is no state (government). He asserts that individuals have rights to protect themselves from aggression, they have rights to the fruits of their labour, and they have the right to cooperate voluntarily, free from deception and theft.
It has always struck me how incomplete Nozick’s exposition of the state of nature is. That man should be a priori free from aggression and entitled to whatever he produces is not really in dispute. What bothers me is that Nozick never really attempts to explore the relationships between different men, between men and society, and between men and nature. For Nozick, an abstract expression of individual rights could be extrapolated up to the whole without much discussion of how things link together. This is especially odd because he demonstrated he understood the limits of such individualism in his incisive critique of methodological individualism. So much the worse for his philosophy that he didn’t apply this thinking to it.
Enter Marx. Marx emphasised that, naturally, man had ‘powers,’ which are the means by which he achieves specific needs. Eating is a power; hunger is the relevant need. Thinking is a power; knowledge is the relevant need. (The former is a ‘natural power,’ common to all animals; the latter is a ‘species power,’ specific to man). By exercising different powers, the individual emphasises different aspects of themselves, and depending on who they are with, which society they are born into, and their available resources, different aspects of the individual will appear to be important, and different conceptions of freedom, happiness, and even the individual himself will emerge.
This may seem like a digression, but in fact it is essential. Once you have established that the abstract individual, when interacting with society, with others, is a very different beast to a lone man in the woods, it leads you down a different ethical path. What becomes important are the interactions the individual engages in, rather than merely the individual himself. It is not enough merely to say an individual should be granted certain rights and that’s that; we have to explore how these rights affect the individual, even by virtue of being defined.
To define every man as an island who cooperates with society and others only through discrete voluntary actions is to diminish the importance of how society and others shape these actions. More than this, it ignores how the rights themselves interact to produce outcomes that may be inconsistent with the principles upon which those same rights, in abstract, were built. Libertarians will likely think I am about to suggest we strip individuals of their rights, but this is not the point. The point is that the rights are not a neutral baseline, and the emergent relations governing these rights could be opposed to individual freedom.
For example, private property is surely the foundation of libertarianism (private property is to be distinguished from possession, btw). But Marx did not think private property, the division of labour, wage labour and capitalist exchanges could ever take place independently; one necessarily implied the other. Any degree of material wealth that qualifies as ‘property’ implies accumulation, which implies producing more than one labourer can manage, which implies employing others, which implies splitting up their tasks into specific, repetitive actions, which implies that what they produce is not necessarily what they need to survive, which implies they must purchase this elsewhere, and so forth. Adam Smith observed this interrelation when he noted that, “as it is the power of exchanging that gives occasion to the division of labour, so the extent of this division must always be limited by the…extent of the market.” I will explore why this may be undesirable from the point of view of individual freedom below, but for now it is sufficient to show that such an emergent property amounts to more than the individual rights from which it originates.
Purposeful action is productive action (which is why capitalism sucks)
Mises claimed man acts to attain certain ends, and only by achieving these ends can he be said to engage in purposive action. If there were no ends to be sought, man would not act; that he acts tells us he has unfulfilled needs. Voluntary exchange gives man the choice and ability to engage in purposeful action with an ever-expanding range of ends at his disposal. The entrepreneur’s role in this is vital, as he channels the purposive actions of many people in the market place, allowing them to attain the ends they seek. This creates an evolutionary process through which man continually realises his chosen ends.
Marx too believed that only man is capable of purposive activity, and this is what separates man from other animals. However, for Marx, the most purposive activity was labour, not consumption. Man engages in productive activity for two main purposes: (1) the end product of his labour and (2) the ability to exercise certain powers of his choosing when labouring, for whichever reason he deems appropriate (efficiency, enjoyment of the task itself, development of skills, etc.). Marx saw capitalism as alienating because in a capitalist system, the individual becomes separated from both the product and the method of production, as well as the time and location in which it takes place.
This separation can be illustrated by an exchange between the worker and the capitalist. The capitalist pays the worker wages so the worker will produce what the capitalist requires him to produce. In this exchange, the worker becomes separated from the product of his labour, producing not what he wants, but what the capitalist requires him to produce. The worker is also required to produce not how he chooses, but at a time, location and in a manner chosen by the capitalist. The worker then uses the wages he earns to purchase other products produced under similar circumstances. The end result under capitalism is that individuals become primarily tied together by what the capitalist guided division of labour demands, rather than by their own autonomous, purposive action. The result is the worker’s alienation from his own labour and also from the products he purchases (this applies to the capitalists too, in a different form; after all, they are on the flip side of the relationship).
So we have two competing narratives here. In one narrative, the individual is merely at the whims of capitalism, while in the other narrative, the individual exercises control over capitalism. Which is more accurate? Ultimately, the question boils down to whether production or consumption is the more purposive activity.
In consumption, the means is exchange, which requires little in the way of personal development or planning. What matters most in consumption is the end result: a good or service. Many goods purchased are interchangeable, and the act of consumption itself is relatively brief.** Services are by definition done by somebody else, and generally speaking, the buyer is only interested in the end result (the outcome of a lawsuit; their health; a new conservatory). I’m not suggesting that purchasing goods and services is not useful and does not yield any positive results; I am merely pointing out that as far as man’s self-actualisation goes, as far as purposive action is defined, consumption does not require or achieve much in the way of planning, personal development or uniqueness.
In contrast, during production the individual has both means (productive activity) and an end (the product) in mind when he sets out to act. The productive activity itself cannot be separated from the individual, and so the two are inextricably intertwined. Furthermore, productive activity requires and/or results in building up some personal attribute, whether an individual’s capacity to reason, his physical strength and fitness, his perseverance or anything else. Generally these attributes will last beyond the original act of production. The end result is both that the individual achieves some goal he chose, planned and set out to achieve, whatever its exact nature, and that through the process he exercises his individualism by realising certain powers (again chosen by him).
The question for Miseans is how exactly the individual can “discover causal relations” between his purposive productive activity and what he produces if he is not producing what he wants, but doing it under the command of someone else. Mises glosses over the role of the worker in his exposition of purposive action; in fact, he explicitly rejects the notion that labour can be considered ‘action,’ because he considers only ends, rather than means, important for man’s individual development. But are any of man’s actions as rational, as explicitly thought through, as deliberate and purposeful, as labour? For Marx, the tragedy was when labour became a means to an end; Mises merely assumed this was the case.
The heart of libertarianism is the abstract individual, who engages in voluntary actions to attain certain ends, and should be allowed to do this, free from outside interference. But such an abstract philosophy is incomplete and incoherent. In the mainstream, Marx is often projected as disregarding the individual, but in fact, Marx was always highly concerned with the individual. The difference is that Marx’s concern with the individual caused him to zoom out to see the context in which the individual operates, and which aspects of an individual’s character are shaped by the context in which the individual labours. Under capitalism, the most important aspect of purposeful individual action – production – is subsumed, under the command of somebody else, and spurred only by the fact that the work is necessary for the worker’s survival.*** Hence, within his most purposive sphere, the individual is not free to act to realise his own ends through means chosen by him; rather, both the ends and the means are determined by forces outside his control. To me, this doesn’t seem very libertarian.
*To be sure, libertarians do have plenty of fleshed out arguments for capitalism’s efficacy as a system; what I am arguing is that it does not follow from their discussion of man and his nature.
**Durables are an exception here, but even then, how much of the joy of these goods comes from one’s own work on them? Cars, houses and gardens are the pride and joy of people precisely because they themselves engage in productive activity on them.
***We must remember the context (!) in which Marx was writing. What he says was literally true at the time; in modern liberal democracies the reality is less stark, but the underlying mechanics of working life, and why people work, remain the same.
PS I have used ‘man’ in this post because that is generally what was used by the thinkers I am discussing. I originally tried it with gender-neutral pronouns but it just became confused and more difficult to relate to the original texts.
It doesn’t make any difference how beautiful the hypothesis (conclusion) is, how smart the author is, or what the author’s name is, if it disagrees with data or observations, it is wrong.
- Richard Feynman
Our empirical criterion for a series of theories is that it should produce new facts. The idea of growth and the concept of empirical character are soldered into one.
- Imre Lakatos
A remarkable characteristic of economics is the sheer staying power of its theories, even when empirical evidence to corroborate their propositions is lacking. In my experience, it is not uncommon for lecturers to remark that the lack of evidence for a theory has been a ‘problem’ for economists (though apparently not enough of a problem for them to throw out said theory). Often textbooks, lectures and discussions of theory make no reference to evidence whatsoever, and where they do, it is trivial (for example, representative agent intertemporal macroeconomic theory predicts that governments will run periods of deficits followed by periods of surplus).
In the paragraphs that follow, I’ll examine a few cases of where I believe economics has gone off the mark in this respect. Specifically, I evaluate Marginal Productivity Theory, Walrasian Equilibrium and the Solow Growth Model. I avoid theories such as Real Business Cycle models and the Efficient Markets Hypothesis, partly because they have been done to death, but more importantly to demonstrate that the bad theories in economics are not merely the result of a few ‘wild cards’ at Chicago. On the contrary, I believe an anti-empirical approach is institutionalised within mainstream economics, and that economics must undergo a paradigmatic shift to move away from these theories.
Marginal Productivity Theory (MPT)
The common interpretation of MPT is that it predicts workers will be paid ‘what they’re worth.’ In fact, this is not quite correct; the theory predicts that the average productivity of workers will be positively related to wages, rather than each worker getting precisely their ‘just deserts.’ In any case, the result is that MPT predicts compensation will increase as productivity increases. Hence, graphs such as this one – which you have likely seen before – pose a problem for MPT:
I have seen several responses to the problems presented by graphs like this. The first is that non-wage benefits have risen, which isn’t shown in this data. The second is that the adjustments for relative prices have been incorrectly applied, and consumers have more purchasing power than it first seems. However, estimates exist which take all of these things into account, and they still come to the same conclusions: most people’s overall real compensation is not increasing, even though their productivity is.
Another response would be that marginal productivity did well until the 70s, so maybe it remains useful. This is special pleading. A theory must be equipped to explain all phenomena within its domain (in this case the labour market), rather than being selectively applied where it suits the economist. If the laws of physics suddenly stopped working, can you imagine physicists making this defence? Saying ‘MPT will work except when it doesn’t, and if it doesn’t we will throw our hands up in the air and carry on’ is not science. The fact is that such a sudden and clear decoupling of wages and productivity poses a serious problem for advocates of MPT, one which requires either a thorough explanation or discarding the theory altogether.
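For reference, the logic behind the prediction can be sketched with the standard textbook profit-maximisation condition (a generic derivation, not tied to any particular source above):

```latex
% A competitive firm chooses labour L to maximise profit,
% taking the output price p and the wage w as given:
\max_{L} \; p\, f(L) - wL
% The first-order condition sets the wage equal to the value
% of the marginal product of labour:
w = p\, f'(L)
% So as labour productivity f'(L) rises, wages should rise with it,
% which is precisely what the post-1970s data fail to show.
```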
Walrasian Equilibrium
Walrasian equilibrium is one of the more absurd pieces of theory in economics (which is saying something). There are two (rational) agents with endowments of two factors of production, which they hire out to two profit-maximising producers. The producers use these factors of production to create two consumer goods, which the consumers then purchase. Everyone behaves as if they are perfectly competitive (they cannot influence prices) and everything happens simultaneously. There is no direct trade; instead, individuals trade through the market (which comes from god outside the model).
The behaviour of consumers in this model is tautological. They consume based on a predetermined utility function that cannot be observed. Hence, they consume what they were always going to consume based on the chosen, non-empirical parameters of the model. This doesn’t tell us anything.
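To make the tautology concrete, here is a minimal sketch in Python, using a Cobb-Douglas utility function chosen purely for illustration: once we pick the parameter alpha, the income and the prices, ‘behaviour’ is fully determined, so the exercise only returns what we put in.

```python
# Illustrative only: with an assumed Cobb-Douglas utility function,
# 'consumer behaviour' follows mechanically from the chosen parameters;
# nothing is left to be explained.

def cobb_douglas_demand(alpha, income, p1, p2):
    """Demand from maximising U = x1^alpha * x2^(1-alpha)
    subject to p1*x1 + p2*x2 = income."""
    x1 = alpha * income / p1
    x2 = (1 - alpha) * income / p2
    return x1, x2

x1, x2 = cobb_douglas_demand(alpha=0.5, income=100, p1=2, p2=5)
print(x1, x2)  # 25.0 10.0 -- fixed in advance by the parameters we picked
```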
The behaviour of producers in this model is observable in the real world and hence not tautological. It is also not what happens in the real world. Some firms maximise profits, but most don’t; and the claim that the firms which do maximise profits do so by equating marginal cost with marginal revenue is clearly false.
The only prediction this model as a whole makes is that the initial distribution of endowments will affect what is produced, how much is produced, how it is distributed and the price of what is produced. In other words: the initial resource distribution of a market economy affects its subsequent workings. This is trivial, and easily shown by theories based on more realistic assumptions (such as Sraffa’s).
The Solow Growth Model
The Solow model, to me, seems to be a textbook case of ‘bad science.’ This is clear from the story of its development (a story anyone who has taken development or macroeconomics will know).*
The Solow model predicts that, due to diminishing returns to capital, developing countries will catch up with developed countries in terms of GDP per capita. At a low level of capital stock, the potential returns to investment are high (e.g. irrigating or ploughing a previously uncultivated field). As the stock of capital increases, the returns to investment diminish and the growth rate of a country levels off. Hence, all countries should converge to a similar long term growth rate.
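In its textbook form (standard notation, not specific to any particular presentation), the convergence argument runs as follows, with k capital per worker, s the saving rate, n population growth and delta depreciation:

```latex
\dot{k} = s\,k^{\alpha} - (n+\delta)\,k, \qquad 0 < \alpha < 1
% Dividing by k gives the growth rate of capital per worker:
\frac{\dot{k}}{k} = s\,k^{\alpha-1} - (n+\delta)
% Because \alpha < 1, the first term is large when k is small, so
% capital-poor countries grow faster until reaching the steady state:
k^{*} = \left( \frac{s}{n+\delta} \right)^{\frac{1}{1-\alpha}}
```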
That this prediction is false is no longer debated. In the 1980s, William Baumol provided evidence that seemed to support the hypothesis. This was quickly disputed by Brad DeLong, who noted that Baumol’s sample suffered from selection bias – he only included countries which were already developed, effectively assuming his conclusion. DeLong included more countries and found no evidence of convergence.
However, economists weren’t ready to give up. The prediction of the Solow model was reframed as conditional convergence: that is, provided countries have the right institutions, social cohesion, etc. they will converge in terms of growth. This, to me, seems trivial. The entire point of development economics is that the conditions in poor countries are not conducive for them to develop and so catch up with the developed countries. The Solow model doesn’t ask how a country might achieve this, but only says that it is a necessary condition for development, something development economists have always known. Hence, the Solow model is irrelevant for the immediate problem of development economics, which is how exactly we can help poverty-stricken countries get off the ground.
Is Economics That Bad?
In the interests of balance, it is worth noting some predictions made in economics that have been either empirically verified or dropped subsequent to falsification. Quantity of money targeting was tried, and failed, in a few countries, which led to Milton Friedman himself repudiating it (though economists still erroneously use the framework which led to it). The life-cycle consumption hypothesis (and non-utility based consumer theory in general) displays good empirical corroboration and has all the hallmarks of a ‘good’ scientific approach. The Phillips Curve as used by economists was modified in light of evidence in the 1970s. Both the multiplier and the Giffen Good are good examples of non-trivial, clear, falsifiable predictions, though I will not comment on the evidence for them because that would take a post for each one.
Nevertheless, the record as a whole is not good. Theories from over a century ago look, and are taught, the same way as they were when they were initially adopted. New ideas that are not even disputed by economists, such as behavioural economics, are slow to be adopted, and when they are adopted they are presented as a ‘special case’ and in a way amenable to the core framework, which is, of course, still taught alongside them. As far as I’m aware, there is no clear-cut case of a neoclassical theory being completely thrown out and never mentioned again. This alone should be an indicator that the scientific method is not at work in economics.
Commenter Dan thinks economics has not yet found its watershed moment:
Think about Biology before DNA was discovered or Geology before plate tectonics was understood, both disciplines had learned a lot but they still lacked a comprehensive model that made everything fit into place.
I am sympathetic with this viewpoint. Heterodox criticisms come at economists thick and fast – personally, I think most of these criticisms are valid and very little of neoclassical economics should be left. Yet neoclassical economics persists.
However, in my opinion this isn’t because economics lacks a unifying theory; it’s the exact opposite. Economists already think they have found a unifying concept: the optimising agent. Consumers maximise utility; producers maximise profits; politicians maximise their own interests and their chances of reelection. Sure, there are a few constraints on this behaviour, but, so the thinking goes, it is the best starting point. It all blends together into a coherent theory that can tell a plausible story about the economy. I find economists are resistant to any theory that doesn’t follow this methodology.
The typical definition of economics is the study of how resources are allocated. Hence, a unifying theory should do a satisfactory job, both empirically and logically, of explaining prices, production and distribution. Such a theory would be able to underlie virtually any economic model in some form, whether as the wider context of a microeconomic phenomenon or as the basis of macroeconomic phenomena. No easy task, then, but luckily many approaches of this nature already exist.
Alternative Theories of Behaviour
If we want to stick with agent-based explanations of the economy, there are any number of alternatives to the ‘optimising’ agent. Among these are:
- Herbert Simon’s ‘satisficing’ agents, who settle for an acceptable option rather than searching for the optimal one;
- agents motivated by a hierarchy of needs, as in Maslow’s psychology;
- the Marxian approach, in which class struggle shapes economic behaviour.
I consider all of these approaches useful, but none of them sufficient for the task at hand.
In the case of the first two, replacing ‘optimising’ agents with ‘satisficing’ agents isn’t exactly revolutionary. Maslow’s hierarchy can, in fact, work as a utility function. In both cases, we still run into similar problems of aggregation and of reductionism, and we end up trying to shoehorn every decision into a particular approach. The simple truth is that agents have many different motivations for their actions, and these aren’t always clear, even to them.
My main issue with these, and with any agent-based approach, is that they aren’t necessarily relevant for the wider question of resource allocation in society. Individualist neoclassical economics has to reduce things down to a few agents with only a few goods in order to reach any conclusions whatsoever; I can’t help but feel similar problems would emerge here. Class struggle may determine distribution, but it doesn’t tell us much about what is produced and at what price it is sold. In order to understand how production takes place and prices are determined, we will have to look elsewhere.
A Theory of Value
The value approach has a lot of pluses. A theory of value underpins the explanation of relative prices, and also has normative implications that recognise the inevitable value judgements in economics. The only problem I have here is that I’ve yet to find a convincing theory of value – the two most widely known are the neoclassical/Austrian subjective theory of value and the Labour Theory of Value (LTV).
I object to the idea that prices merely reflect subjective valuations for the basic reason of circularity: prices must be calculated before subjective valuation takes place, so they cannot purely reflect subjective values.
I have more sympathy with the LTV (mostly because its proponents seem to have coherent responses to every criticism thrown at it), but I remain unconvinced. The defences of the labour theory of value tend to rest on appeals to ‘the long run’ and ‘averages’ of socially necessary labour time. These may be useful but, like the neoclassical ‘long run’ approach, they seem to leave open the immediate question of what’s going on in the economy and what we can do about it.
In my opinion, these approaches both contain some validity, and are not mutually exclusive. I tend to agree with Richard Wolff, who asserts that suggesting one has refuted the other is like saying knives & forks have refuted chopsticks. Both are useful; neither are all-encompassing theories. I also believe both are compatible, to some degree, with my favoured approach:
The ‘Reproduction and Surplus’ Theory
This approach is the one emphasised by Sraffians and Classical Economists. It starts from the basic observation that society must reproduce itself to survive, and that generally society manages this, plus a surplus. The reproductive approach emphasises what I believe to be an important aspect of capitalism, and perhaps all systems: the collective nature of production. Industries are interdependent; people work in teams; various institutions, often state-backed or provided, underlie all of this. Hence, no special moral status is accorded to prices or the allocation of surplus, except that prices must be appropriate for the continued existence of industries and society as a whole.
On first inspection the ‘insight’ that society must reproduce itself might be considered trivial, but following through its implications can yield interesting and useful conclusions. The framework can be used to determine prices technically, independently of either preferences or values. It emphasises the interdependent nature of the economy: if one industry or input fails, there are severe knock-on effects. For this reason, it would do a great job of explaining both the oil shocks and resultant stagflation of the 1970s and the 2008 financial crisis, something modern macroeconomics cannot manage.
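As a rough illustration of determining prices ‘technically’ from reproduction requirements, here is a sketch in the spirit of Sraffa’s price equations. The two-industry input coefficients are invented for the example, and the uniform profit rate falls out of the dominant eigenvalue of the input matrix:

```python
import math

# Hypothetical input coefficients: a_ij = amount of good i needed to
# produce one unit of good j (goods: 1 = iron, 2 = wheat).
a11, a12 = 0.2, 0.4
a21, a22 = 0.1, 0.3

# With a uniform profit rate r, prices must satisfy:
#   p1 = (1 + r) * (a11*p1 + a21*p2)
#   p2 = (1 + r) * (a12*p1 + a22*p2)
# so 1/(1+r) is the dominant eigenvalue of the transposed input matrix.
trace = a11 + a22
det = a11 * a22 - a12 * a21
lam = (trace + math.sqrt(trace**2 - 4 * det)) / 2  # dominant eigenvalue
r = 1 / lam - 1                                    # uniform profit rate

# Relative prices are the corresponding eigenvector, normalised so p1 = 1.
p1 = 1.0
p2 = (lam - a11) / a21

print(round(r, 3), round(p2, 3))  # r ≈ 1.192, wheat price ≈ 2.562 iron units
```

These toy inputs leave an implausibly large surplus (hence the high profit rate); the point is only that prices and the profit rate emerge from the technical conditions of reproduction, with no reference to preferences.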
On top of this, the model is versatile: it can interact with its institutional environment, which determines key variables exogenously (e.g. the monetary system determines interest rates, political power determines distribution). The classical approach is, for example, compatible with class theories of income distribution, post-Keynesian theories of endogenous money and mark-up pricing, and even neoclassical utility maximising individuals! Probably the most promising and complete framework out of them all – I look forward to further developments of this approach.
It is feasible that the task of finding a watershed moment is simply not possible in the fuzzy world of the social sciences. Psychology and sociology are both characterised by competing approaches; psychology in particular has improved since its own neoclassicals, the Freudians, were dethroned. If neoclassical economics has taught us nothing else, it’s the importance of not being trapped by particular theories for want of elegance, which is why there is a lot to commend in the institutional school of economics.
Nevertheless, I think there is scope for exploring unifying principles. Progress in neurology may provide such a foundation for psychology; similarly, ideas such as societal reproduction could equally be applied to sociological concepts such as the role of beliefs, class, sports or what have you. As far as economics goes, such a substantial step forward could be what’s required to displace neoclassical economics, whose staying power, in my opinion, cannot be accorded to either its empirical relevance or its internal consistency. Perhaps neoclassical economics persists simply because its building blocks are so well defined that other approaches seem too incomplete to offer their opponents sure footing.
In mainstream economic models, consumers’ behaviour is generally assumed to follow a ‘utility function.’ Consumers derive utility (creatively measured in ‘utils’) from whatever they consume, and they will attempt to maximise this subject to their budget constraint – and perhaps, at a later level, some extra terms to incorporate behavioural quirks, social pressure or what have you. Unfortunately, even with these modifications, the concept of utility is an explanation of behaviour that is questionable at best.
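Formally, the textbook consumer problem is:

```latex
\max_{x_1,\dots,x_n} \; U(x_1,\dots,x_n)
\quad \text{subject to} \quad \sum_{i=1}^{n} p_i x_i \le m
% where the p_i are prices and m is the consumer's budget. Behavioural
% extensions add extra terms or constraints, but U itself remains.
```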
The first conundrum – as posed in the title of this post – is exactly what form utility takes. Is it supposed to be some sort of cumulative attribute that people collect as they go through life, like a stat in a video game? Or is it a temporary sensation experienced after consumption, so that economic agents are effectively utility junkies, chasing around temporary highs? There may be a case for regarding anyone who truly maximised utility as clinically insane and in need of help. In any case, thoughtlessly following predetermined utility functions leaves neoclassical agents with no real room for ‘choice’ – we know what their behaviour will be in advance, and it is unchangeable.
There is also the problem of fungibility: is it fair to suggest that joining a gym gives someone the same kind of satisfaction as eating a donut? Or that eating a donut gives the same feeling as owning a car? These nuances are lost in the aggregated world of ‘utils,’ a unit which has no relation to anything else and hence is hard to verify – at its worst, utility is simply circular: only measurable by the same behaviour it supposedly explains.
Economists have a standard response to contentions that utility is unrealistic. They will assert that, even though utility doesn’t really ‘exist’ (surely few would claim it does), it still follows that if preferences obey economists’ axioms, then an effective utility function can be derived. That is: utility is not meant to be taken literally, but economists’ assumptions are sufficient to ensure a relationship between preferences that is functionally the same thing. So it would appear the only way out for opponents of utility is to critique the axioms. I don’t believe this is true, but the axioms are worth critiquing before I explain why.
The two most important axioms required to derive a basic utility function are completeness and transitivity. There are other axioms that are also commonly used – independence, non-satiation, convexity – which are all vulnerable to criticism, but since they pertain to the exact form of a utility function, rather than the concept as a whole, I will focus only on completeness and transitivity. Without these, there is no utility function, whichever way you paint it.
The first axiom – completeness – is the idea that all relevant decisions can be definitively compared to one another: that is, there is no room for ‘I don’t know.’ There are clear problems with this. Often, it is hard to choose between two options, particularly if one is a bundle of many goods (e.g. two shopping baskets). In fact, as a decision rule this is generally computationally impossible. So people may act based on chance or impulse; they may seek advice or ask someone else to make the decision for them. What’s more, often people find it difficult to evaluate choices even after they’ve made them. Sometimes there is no ‘correct’ choice!
The other axiom, transitivity, implies that people will be consistent in their ordering of preferences: if I prefer A to B, and B to C, I will prefer A to C. It is an important axiom because, even if preferences are complete, a violation of transitivity means that utility functions can take basically any shape and are therefore pretty useless for clear calculations. While numerous behavioural quirks suggest transitivity may be violated under certain circumstances, overall it is a fair axiom – for the individual. However, it has been known since Condorcet that, once we aggregate more than two agents, it can become impossible to establish a clear, consistent ordering of preferences for the group. This isn’t moving the goalposts: it is highly relevant when we are using representative agents for the entire economy. (This problem also applies to the aforementioned independence axiom.)
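The group-level failure of transitivity is easy to demonstrate with the classic Condorcet configuration; the three voters and their rankings below are, of course, hypothetical:

```python
# Three voters, each with perfectly transitive individual preferences.
# Pairwise majority voting nonetheless produces a cycle, so no
# consistent group-level preference ordering exists.
voters = [
    ["A", "B", "C"],   # voter 1: A > B > C
    ["B", "C", "A"],   # voter 2: B > C > A
    ["C", "A", "B"],   # voter 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of voters ranks x above y."""
    votes = sum(1 for ranking in voters if ranking.index(x) < ranking.index(y))
    return votes > len(voters) / 2

print(majority_prefers("A", "B"),  # True (voters 1 and 3)
      majority_prefers("B", "C"),  # True (voters 1 and 2)
      majority_prefers("C", "A"))  # True (voters 2 and 3) -- a cycle!
```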
My most important point, though, is that even if preferences do follow all the axioms, utility is still highly flawed. This is because, like so many neoclassical models, all utility functions give us is a static snapshot of the economy (or individual) at a particular point in time, and there is no room for change. The simple fact that preferences are highly volatile and will be different in the morning and the evening, or in summer and winter, is enough to render utility useless for practical questions about the economy, which must surely incorporate time. Similarly, preference reversal has shown that the way options are presented has a large impact on the choice made by somebody, suggesting again that underlying ‘preferences’ are highly subject to change, and not really useful for the practical purpose of predicting behaviour. One can only wonder how utility might deal with a theory such as multiple selves, which would surely create the aforementioned aggregation problems for preferences, but for one person!
Now, I can almost hear the cries of “ah, but what is your alternative?” Actually, that doesn’t matter for the immediate critique. If I have a map of London Underground and I’m in New York, I’m not going to use it (even less so if I have a map of a fantasy land that exists only in the minds of economists). To push the analogy a little further, it is worth asking what I would do in this situation. I can think of two possibilities: either ask for help, or follow some simple rules of thumb based on what knowledge I have. This is the strategy economists should adopt.
In the case of ‘asking for help,’ what I mean is that economists should turn to other social sciences; namely, psychology, which has a far more empirically driven methodology than economics and has numerous explanations of behaviour. Economists truly interested in understanding human behaviour – rather than preserving their favoured assumptions – should collaborate with psychologists to create sound behavioural foundations.
Until then, economists should be content with simple, empirically observed rules of thumb and intuitive aggregate relationships (they already do this with the marginal propensity to consume). Objections of ‘but Lucas Critique‘ are special pleading, since preferences are also liable to change with political decisions. In fact, I’d shout ‘Lucas Critique’ right back at economists, and suggest that they spend less time on the impossible task of making their models ‘immune’ to the Lucas Critique, and more time evaluating the ever-changing relationship between policy and observation. It is better for economists to be vaguely right than precisely wrong.
Out of all the concepts in neoclassical economics, none is more imaginary, absurd and empirically falsified than utility. Economists supposedly follow a methodology of strict positivism, and based on the experimental evidence against utility, there is surely no reason to keep it. Yet for some reason, it doesn’t seem to attract the same level of criticism as other areas of neoclassical economics. Personally, I am puzzled as to why.