I’ve decided to put the blog on a brief hiatus for the exam period. Thanks as always to all my readers and especially to the excellent set of commenters I’ve managed to accrue. I’ll be back in mid June after exams have finished, with a fresh set of reasons both libertarians and economists are silly.
PS feel free to leave links, books etc. that you think I’d like in the comments. Consider this an open thread.
PPS I will continue tweeting, though it will probably be quieter.
PPPS there is always the possibility I’ll end up posting tomorrow, as happened when I tried a hiatus last year. But I’m feeling a bit more drained in general this year, so it’s less likely.
Sometimes it seems that economists’ pet principles are applied selectively, in such a way that they attack ideas generally endorsed by the left end of the political spectrum. This isn’t to say economists themselves are ideologically inclined toward any particular opinion; merely that key aspects of their framework, and the way they present those aspects, lend themselves to a more ‘right-friendly’ way of thinking.
In part, the issue is simply a disparity between how economists present issues to the public and how they speak to each other in academia. Dani Rodrik noted this in his book The Globalization Paradox, where he describes a situation in which a reporter asks an economist whether free trade is beneficial:
We can be fairly certain about the kind of response [the reporter] will get: “Oh yes, free trade is a great idea,” the economist will immediately say, possibly adding: “And those who are opposed to it either do not understand the principle of comparative advantage, or they represent the selfish interests of certain lobbies (such as labor unions).”
Rodrik then contrasts this with how such a question would be answered in the classroom:
Let [the student] pose the same question to the instructor: Is free trade good? I doubt that the question will be answered as quickly and succinctly this time around. The professor is in fact likely to be stymied and confused by the question. “What do you mean by ‘good’?” she may ask. “Good for whom?… As we will see later in this course, in most of our models free trade makes some groups better off and others worse off… But under certain conditions, and assuming we can tax the beneficiaries and compensate the losers, freer trade has the potential to increase everyone’s well-being…”
This preference for basic, market-friendly principles over nuance can often be found in ‘pop’ economics: for example, economist Paul Krugman exhibits it in his book Peddling Prosperity. The book is intended as a survey of nonsensical ideas from both the left and the right, remedying both with a cold hard dose of facts, plus some basic economics. However, Krugman treats the left and right somewhat asymmetrically: with the right, he primarily opts for facts, whereas with the left, he uses economic principles.
This is quite possibly because the right’s arguments, though taken to an extreme, have economic principles on their side, while the left’s do not. The ‘supply side’ economics that Krugman takes issue with is really just an extreme statement of the well-known principle of deadweight loss, which suggests that taxes decrease output. If taxes reduce output by enough, it logically follows that not only output but overall revenues might fall when we raise tax rates. Krugman does not question the principle itself; instead, he spends several chapters documenting evidence against the idea*.
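The arithmetic behind the ‘supply side’ claim is simple enough to sketch with a toy example (the numbers and the linear output response are entirely made up, purely for illustration): revenue is the tax rate times output, so if output falls fast enough as the rate rises, revenue eventually falls too.

```python
# Toy 'Laffer' arithmetic with made-up numbers: output shrinks as the tax
# rate rises, so revenue (rate * output) rises at first and then falls.
def output(rate, base=100.0, sensitivity=1.0):
    """Hypothetical output, declining linearly with the tax rate."""
    return base * (1.0 - sensitivity * rate)

def revenue(rate):
    return rate * output(rate)

for rate in (0.2, 0.5, 0.8):
    print(f"rate {rate:.0%}: output {output(rate):5.1f}, revenue {revenue(rate):5.1f}")
# With these invented numbers revenue peaks at a 50% rate;
# raising the rate further lowers revenue.
```

Whether real economies actually sit on the downward slope is, of course, exactly the empirical question Krugman spends those chapters on.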
Krugman then follows this up with a section berating the ‘strategic traders’, endorsed by Bill Clinton and others on the centre-left. Strategic trade suggested a role for government policy in promoting industry, because various clustering effects, economies of scale and positive feedback loops could mean that an initial wave of government investment could kick-start an industry. As Krugman himself notes, such dynamic effects and ‘historical path dependence’ could render comparative advantage obsolete, since comparative advantage posits a more fundamental, innate reason that a country produces a particular good, one that cannot be changed with policy (and one that may be more applicable to agriculture).
Yet, in contrast with his section aimed at refuting the right, Krugman offers scant evidence suggesting government intervention doesn’t work. Instead, he effectively restates the theory of comparative advantage, coupled with a typical story to illustrate it. This is despite explicitly suggesting it might not be applicable in the previous chapter. When pushed, Krugman is prepared to fall back on his pro-market principles, even in areas where he knows they may not apply.
William Easterly does something similar in his book The Elusive Quest for Growth. The book is a survey of various policies that have been touted as panaceas for development, such as education, investment and population control. (As you can see, economists really love writing their “I’m an economist, here’s how it is” manifestos.) Easterly finds every supposed development panacea wanting based on the available evidence, which is fine. However, he occasionally supplements his arguments with an excruciating example of ‘economic logic’ that always looks out of place.
For example, in the section on increasing availability of condoms, Easterly essentially makes the argument ‘how could people be lacking condoms? If they were, the free market would provide them!’ I am reminded of the joke about the economist who does not pick up a £10 note from the ground, because, if it were really there, somebody would already have picked it up. Easterly is a smart guy with a lot of concern for the poor, and I have a hard time believing he wouldn’t agree that a country might lack the institutions to deliver condoms, that people might lack the education to know why they’d need them, that it might conflict with their beliefs, etc. But the ease with which he can apply a pet economic principle is just too tempting, so he ignores these factors.
Another example is where Easterly asserts that population growth cannot be a problem, because “an additional person is a potential profit opportunity for a person that hires him or her” and as a result “the real wage will adjust until the demand for workers equals the supply.” It’s quite clear things don’t function this smoothly in labour markets, even in developed countries; for theoretical reasons why, Easterly need look no further than John Maynard Keynes, and failing that, modern work on labour market frictions would suffice. Again we see a neat but overly simplistic principle applied when even the economist himself surely knows better.
So it is not uncommon for economists to prefer their more ‘free market’ principles over nuance when writing for a popular audience**. But is this problem limited to popular economics? Economists seem to think so; to them, the issue is primarily one of communication, and of knowing the limits of your models. This is fine as far as it goes. However, there are reasons to believe the bias extends into the murky depths of academia.
In my opinion, there is one major culprit of selective application in economics, and it is one that cannot be explained by economists simplifying their work for public consumption: the Lucas Critique. The Lucas Critique suggests that adjusting policy based on observed empirical relationships from the past will alter the conditions under which these observations were generated, hence rendering the relationship obsolete.
Unfortunately, in practice, Lucas’ version of the critique seems to have been used to beat ‘Keynesians’ over the head, rather than being universally applied as a tool to further understanding. To illustrate this, here are some areas where I think Lucas critique-style thinking could be applied, but hasn’t been:
- Milton Friedman’s methodology. If a ‘black box’ theory corroborates well with past evidence but we aren’t entirely sure the internal mechanics are accurate, there’s no reason to believe the corroboration will hold, or to know how the mechanics of the system will change, if we change policy.
- Nominal GDP Targeting (NGDPT). This hasn’t caught on much on the left (in my opinion, for primarily ideological reasons: it’s anti-Keynesian, it partly absolves the private sector of responsibility for recessions). But it doesn’t seem to have occurred to proponents of NGDPT that we must ask if the relationship between inflation, RGDP and NGDP will break down if we try to exploit it for policy purposes. This is despite the fact that we are talking about precisely the same variables as the Phillips Curve, the primary theory to which the Lucas Critique was initially applied.
- The supposed “deep parameters” of human behaviour on which Lucas suggests we construct economic models, such as technology and preferences. For a neoclassical economist, you are born with a set of preferences and you die with them, while in many models technology is a vaguely defined exogenous parameter. Yet a single example shows that both of these things can change with policy: government investment, which is at the root of a large number of technological breakthroughs. These breakthroughs have often resulted in new products, creating preferences that otherwise wouldn’t have existed. A model with fixed, exogenous parameters for technology and preferences is therefore highly vulnerable to policy changes.
The fact that the critique hasn’t been applied to these examples leads me to believe it’s often used only to preserve existing economic theory. In fact, the critique itself is really just a narrow version of the more general principle of reflexivity, noted by many before. Reflexivity is an ever-present problem that implies an evolving relationship between policy and theory, not a principle that lets us fall back on economists’ preferred methods.
Is the Lucas Critique the only culprit? Well, I’ve found economists are generally critical of the assumptions and mechanics of heterodox models, despite appealing to Friedmanite arguments when questioned about their own. I’ve also found economists (okay, one economist) appeal to how businessmen really behave when defending their theories, despite giving little credence to alternative theories based on the same principle, such as cost-plus pricing. So maybe economists need to air out their theories and principles a bit, rather than simply applying them where it suits them.
Economists’ simple stories often capture some truths, which is why they will defend them to the death. But too often this becomes a matter of protecting a core set of beliefs, and of being unwilling to apply them in new ways or even abandon them altogether. So economists end up deferring to their framework when it isn’t appropriate, or interpreting it only in their preferred way, particularly when they communicate their ideas to the public. The result can be that misleading conclusions about the economy remain prominent, even when economists’ own frameworks, interpreted completely, don’t necessarily imply them. Perhaps if economists were more willing to open up their theories, which can sometimes feel like something of a black box, these misinterpretations would be exposed.
**In fairness to Krugman and Easterly, these books were written a while ago, and I’m sure they have updated their positions since then. I only wish to show that economists use this tactic, not that any one economist endorses any particular position.
“Thinking like an economist” is one of those phrases you’ll see on the pages of every book released during the initial attack wave of pop economics books, starting around 2006. The authors of such books set out with the explicit aim of educating the average person about the basics of economics: demand and supply, comparative advantage, opportunity cost, cost-benefit analysis, externalities, and of course the most beloved mantras: ‘people respond to incentives’ and ‘there’s no such thing as a free lunch’.
The typical economist is a logical, dispassionate (though not necessarily uncaring) analyst who weighs up situations and policies using basic principles, bearing in mind that there are always trade-offs and no perfect solutions. Economists usually weigh things up with efficiency in mind, treating equity as an important goal but one often opposed to efficiency, and one that should probably be considered separately. (This stems from Kaldor-Hicks efficiency, which suggests that efficiency-enhancing policies can be combined with redistribution to produce the best possible outcome in terms of both efficiency and equity. Sadly, in practice this means economists sometimes just advocate the former, with the proviso that the latter could happen, and don’t worry as much as they should about whether the redistribution actually does happen.)
There are obviously areas where the economist’s toolkit applies. Cost-benefit analyses are appropriate for business plans and for planning in other organisations. Opportunity cost is relevant when keeping the weekly shop within a budget: if we buy the biscuits, we won’t have enough for the cereal bars, and so on. The economic way of thinking also has unexpected applications: for example, economists have done commendable work in the field of organ donation.
However, problems with the ‘economic way of thinking’ arise under certain circumstances, commonly when actions have outcomes that are fundamentally unknown or incommensurable. What is the opportunity cost of me writing this blog post? Well, I could be writing a different blog post, but I have no idea which one my readers would prefer. That’s assuming I evaluate blogging solely in terms of one metric, like page views, which obviously isn’t true. Alternatively, I could be reading a book; perhaps I’d get an idea for a better post from it, so over the long term reading would be more fruitful. I could also be sleeping, cooking, at the pub, or any number of things, but weighing up the various trade-offs and benefits of these actions ‘like an economist’ is simply not possible.
I believe there are ample examples of economists extending their toolkit beyond where it is appropriate. I will note that good economists realise the limits of their approach, and would probably not endorse the (sometimes absurd) instances of ‘economic imperialism’ I am about to present:
Political science. Economists extended their toolkit to political science with public choice theory, which supposes that politicians and voters are rational self-interested maximisers who act to further their own interests, be they power, prestige, financial gain or what have you. This found its reductio in Bryan Caplan, who suggested that voters are rationally ignorant of politics because the costs of becoming informed outweigh the benefits, and so economists (who are obviously right about everything) should dictate public policy. You know, like in Chile.
Fortunately, this theory is wrong. Research, the best of it coming from Leif Lewin, has found that politicians and voters act in what they perceive to be the general interest, not narrow self-interest. People vote and act out of a sense of obligation and citizenship, not because of any cost-benefit analysis. Public servants are generally public-spirited and less motivated by money than those in the private sector. While special interest groups are a problem, economists who want to analyse them further are better off turning to political scientists, who have known all of the above for a long time.
The environment. Some of economists’ basic tools are easily shown to be absurd when applied to environmental analysis. It is not possible to place a monetary value – economists’ go-to unit – on most environmental variables. How do we compare the ‘value’ of a lake with the economic costs of a carbon tax? Is there some level of carbon tax at which we would forgo every lake on earth rather than apply it? How do we compare, say, the depletion of coal with a rise in the sea level? These things have many different metrics by which they can be judged. The financial metrics used by economists are surely among them, but they are only a small part of the picture.
Another problem arises when looking at possible future environmental outcomes, as the probabilities are fundamentally unknowable. Some try to approach the issue of global warming and environmental catastrophe by weighing up probabilities and doing cost-benefit analyses. But how do we propose to calculate the probability of environmental disaster? We don’t have a set of earths we can ‘run’ to see how often catastrophe occurs; climate models display chaotic behaviour that is highly sensitive to initial conditions. The fact is that we simply don’t know how likely disaster is or what its impacts will be, and framing the issue this way is deeply misleading. Furthermore, even if the probabilities were known, what matters is not just the weighted costs and benefits, but the potential for absolute disaster. If there is a 1% chance the world will end unless we do x, we shouldn’t do a cost-benefit analysis; assuming x is feasible, we should simply do it.
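To make that concrete, here is a deliberately crude sketch (all figures invented): the ‘right’ answer from a naive expected-value calculation flips entirely depending on the arbitrary number chosen for the cost of catastrophe, a number nobody can actually supply.

```python
# Crude expected-value check with invented figures. The prevention cost is
# normalised to 1; p_disaster is asserted rather than known -- which is the point.
p_disaster = 0.01

def prevention_passes(cost_of_disaster, cost_of_prevention=1.0):
    """Naive cost-benefit test: does expected loss exceed the prevention cost?"""
    return p_disaster * cost_of_disaster > cost_of_prevention

for cost in (50, 200, 10_000):
    print(cost, prevention_passes(cost))
# The verdict depends entirely on how we price an unpriceable outcome.
```

If the ‘cost of disaster’ can be made arbitrarily large (and for irreversible ruin, no finite figure is obviously wrong), the calculation tells us nothing the modeller didn’t put in.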
The law. As Yves Smith details in ECONNED (pp. 124-126), Chicago School economists managed to persuade first legal theorists, and then those involved in the legal system itself, of the efficacy of their way of thinking, eventually forming the ‘law and economics’ school. Since this was Chicago, it will not surprise you to learn that this approach largely consisted of a focus on efficiency over, say, due process, promoted deregulation, and rejected notions of corporate social responsibility. Nor should it surprise you that the movement had a large degree of – ahem – ‘support’ from various moneyed interests.
Theoretically, I find the ‘law and economics’ position on corporate social responsibility incoherent. Empirically, it’s obvious that the framework economists had a substantial part in setting up has failed: fraud has risen; the changes in anti-trust have not had the benefits economists predicted; and we had a financial crisis in 2008 as a result of the regulatory framework put in place. Note that this isn’t an ideological point: you can think the regulation was too loose, too tight, or simply wrongly formulated. But in general, defending the exact thinking and framework that led to the crisis is absurd.
Economists take pride in the seeming versatility and simplicity of their framework, and they are eager to apply it to other social sciences. That economists’ conclusions are, to quote Keynes, “austere and often unpalatable” lends them virtue, especially when contrasted with less mathematically certain social sciences, such as sociology. But oftentimes economists act to displace existing theories without really considering the existing viewpoint. And oftentimes that existing viewpoint has more to it than economists, trained as they are to see things a certain way, might perceive. Hence, economists should always be careful when venturing onto new intellectual turf, as otherwise they risk missing vital insights long known to others, insights to which their framework blinds them.
Now, I suppose, is as appropriate a time as any to discuss the policies generally known as neoliberalism/free market economics – tax and spending cuts, union busting, deregulation, privatisation and free trade – and how they have fared in practice. Unsurprisingly, those on the right defend neoliberalism’s record. However, its successes have been exaggerated, while in cases of clear success, a closer look reveals policies that are anything but ‘neoliberal’. I’ll take a brief look at some countries, or sets of countries, commonly purported to show the success of these policies: the US & UK, Chile, Hong Kong & Singapore, and Scandinavia. I believe that in none of these instances do we get a clear example of neoliberal policies succeeding economically.
The US and UK had similar narratives during their transitions to neoliberal policies. After a period of stagflation, a ‘strong’ politician (Ronald Reagan and Margaret Thatcher, respectively) rose to power, willing to enact drastic reforms. The narrative here can be exaggerated – pro-market reforms (e.g. deregulation under Carter) and economy-wide trends (the decline of manufacturing) preceded these two governments. Nevertheless, utilities were privatised, unions were weakened, direct taxes (mostly top tax rates and corporation taxes) were slashed, and various regulations were either cut down or replaced with a more ‘neoliberal’ model. Obviously some ‘free market’ purists will always claim it was not enough, but it was a substantial move in the neoliberal direction, and as such we should have seen clear benefits.
Economic growth under these two governments was decidedly average. If we measure from peak to peak in the business cycle to average out fluctuations, per capita growth under Thatcher comes out at 2.44% (1978-88), while Reagan comes out at 2.3% (1979-90). If we just measure the years they were in office, the respective figures are 2.05% and 2.77%. Whichever way you paint it, growth was not far from its 2.5% trend.
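For anyone who wants to check figures like these themselves, peak-to-peak growth is just a compound annual rate between two business-cycle peaks. A minimal sketch (the index values below are invented for illustration, not the actual GDP-per-capita series):

```python
# Compound annual growth rate between two business-cycle peaks.
def cagr(start_value, end_value, years):
    """Annualised growth rate implied by start and end values."""
    return (end_value / start_value) ** (1.0 / years) - 1.0

# An invented index rising from 100 to 127.2 over a ten-year peak-to-peak span:
print(f"{cagr(100.0, 127.2, 10):.2%}")  # roughly 2.4% a year
```

Measuring peak to peak, rather than over arbitrary years in office, is what averages out the business-cycle fluctuations mentioned above.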
In fact, in both countries the ups and downs of the economy surely had more to do with monetary policy than anything else. Interest rates went as high as 17% in the UK and 19% in the US; by around 1983 they had more than halved, dropping to about 8%, and following this GDP started to recover. Insofar as policy goes, the conventional story that neoliberal policies rescued their respective countries is a half-truth at best. Thatcher benefited from an oil boom, which helped her fund her various preferred programs (including the Falklands War, which helped buy off discontent). Reagan’s policy of cutting taxes while increasing military spending during a recession was effectively Keynesianism. Ultimately, there is little evidence that the headline reforms were responsible for the overall performance of either economy.
Singapore & Hong Kong
These two countries have certainly had impressive performances over the past few decades, overtaking most developed countries in GDP per capita. For this reason, they are often touted as free market success stories. This is misleading in a couple of ways.
The narrative about the success of any policy in Singapore and Hong Kong is complicated by the fact that they have some obvious advantages over everywhere else, no matter their policies (within reason). First, they are port cities, which means that unless there are serious political problems, they will be a conduit for a large degree of trade no matter their economic policies. Second, they are city states, which reduces administrative and transaction costs, both in the public and private sectors. Third, Hong Kong does not have to fund a military due to protection from China, which helps to explain its low tax rates.
In any case, the two countries are anything but a paragon of the ‘free market’ in action. In Hong Kong, the government owns all of the land. In Singapore, the government owns about 60% of the land, heavily regulating its usage, while government-linked corporations produce up to 60% of GDP. Both countries have public health care, transportation and education, public housing programs and safety nets, and Singapore owns public utilities while Hong Kong regulates them tightly.
Clearly, whatever caused these countries’ success, it was not simply ‘free markets’.
The story usually painted about Chile is that it went from a poor country to one of the richest in Latin America after ‘free market’ reforms were put in place by the dictator Augusto Pinochet following the 1973 coup d’état. What actually happened (from a policy perspective) was much more of a mixed bag, combining neoliberal programs with long-standing state-directed ones.
Key industries remained either directly in the hands of the state (such as copper and oil) or in receipt of subsidies, advice, management and training through the government organisation CORFO (such as forestry and fishing). These state-directed industries experienced massive growth and fuelled an export boom, which drove the economy for decades to come. It is true that some industries, such as banking, were privatised and deregulated, but this was far from a success: it produced a financial bubble, which collapsed in 1982, reducing GDP by 14%, back down to where it was in 1970. Only 5 of the 19 banks that had been privatised survived, and those were (reluctantly) bailed out by the government, which also had to reinstate capital controls and other interventions. Furthermore, once democracy was reinstated in the 1990s, governments moved leftwards and embarked on significant, successful poverty reduction programs.
This is clearly at odds with the idea of Chile as a free market success story. In fact, I’d go so far as to say that in the case of Chile, success was clearly concentrated in areas with obvious state intervention, while failures were concentrated in those without.
Scandinavian countries are synonymous with economic success, faring well in GDP per capita, and even better on overall standard of living indexes. So it is no surprise that both sides of the debate claim them as their own. The claim is more perplexing when it comes from the right, however, since it requires them to effectively argue that countries which are clearly social democracies are not social democracies. It is generally asserted that beneath the high tax rates, these countries are ‘economically free’, which roughly translates as lightly regulated. So are they?
Disregarding such nonsensical indexes as Heritage and heading for the more credible OECD, we can see that Scandinavian countries have average-to-low-strength regulatory frameworks by the standards of developed countries:
In case you were wondering, there is no clear correlation between this index and GDP growth.
While the Scandinavian countries (with the exception of Sweden) have below-average regulation indexes, if this were causing their success then surely the US, UK and Spain would be doing well too? Perhaps low regulation must be combined with a strong safety net and public services to work. More likely, the Scandinavian countries are unique and have specific institutions that cannot necessarily be emulated elsewhere, something I’ve argued before.
In fact, that last point is true of every country. The path to development and sustained growth is different for each one, and the recipe for growth cannot be captured in vague platitudes about a ‘free market’, completely devoid of context. I expect there exist countries where neoliberal reforms are appropriate, but these are far outweighed by those where they are not. The people best suited to decide which reforms are appropriate are those who live in and understand the country, not outsiders with a one-size-fits-all model that they see as a neutral template. This was clear even in Chile, where the national military were reluctant to abandon the state-driven model on which they had always relied.
I expect those who support neoliberalism might look at this article and conclude that countries would do even better if only those last pesky statist policies were removed. But this is a superficial perspective. Why were the state-supported industries much more successful than the privatised ones in Chile? Why do Scandinavian countries do well with high tax rates and big welfare states, when many countries with similar strength regulatory frameworks and smaller welfare states do much worse? Why does every purported ‘free market’ success story collapse under close inspection, and why are there no clear real world examples of the ideal being implemented and working? Until I can see such a case I will remain unconvinced of the virtues of the elusive free market.
It would be silly to suggest that all of neoclassical economics is simply ‘wrong’. I happen to think much of it is, sure, but some is right, and some may be merely incomplete. However, there is another possibility, one I want to focus on in this post: some neoclassical theories are sound only if one defines for them a clear domain. In mathematics, a domain refers to the range of numbers one can feed into a function (what you ‘do’ to the number) and get non-nonsensical (sensical?) answers. Similar rules apply to many scientific theories: the perfect gas model is not appropriate for steam; Newton’s laws do not apply at very large or very small scales; and so on.
Economists do already use domains, to a limited extent. This is mostly done in theories of the firm, of which there are several depending on the number of firms in the industry, ranging from perfect competition through oligopoly and duopoly to monopoly. These theories are only supposed to hold in industries with the appropriate number of firms. However, even here there are few criteria for distinguishing when a firm will behave, for example, Cournot-y (varying quantity only) or Bertrand-y (varying price only), and hence which of these models is appropriate. So economists might still have a hard time knowing when to use which theory. DSGE has similar problems.
One theory I’ve been thinking might be more sound if restricted to a specific domain – agriculture – is marginalist economics: specifically, the much maligned perfectly competitive theory of the firm. It is perhaps no coincidence that economists are rather keen on using examples from agriculture in their parables about marginalist concepts: it’s the area where their analysis is most appropriate. There are a few reasons to believe this:
(1) Agriculture, for the most part, has perfectly divisible inputs and outputs. This is a core assumption of basic producer (and consumer) theory, one which is blatantly unrealistic in most cases. However, it may be realistic in agriculture. Food and fertiliser are literally perfectly divisible, as they can always be cut down to smaller quantities – certainly at any level relevant for production. Livestock are not perfectly divisible when alive, but even so they are generally farmed in large quantities that can be continually adjusted, so perfect divisibility is at least a good approximation. Tractors, ploughs and the like are examples of indivisibilities, but they are not often purchased and can be thought of as the exception to the rule, covered under ‘fixed capital’.
(2) Diminishing marginal returns. Agriculture is one of the few areas where we observe rising costs as output rises. This is partly because a major factor of production – land – is fixed. A fixed factor is a standard assumption in the short-run neoclassical theory of the firm; with land, it is also true in the long run, though some improvements in productivity can be made over time with the aforementioned fixed capital.
(3) Perfect competition. Nothing better resembles the atomistic neoclassical ideal than many farmers competing in a single market for a homogeneous good like wheat, none of them having any discernible effect on price. With certain foods, some product differentiation (through quality) might be observed, but even this would be captured by the theory of imperfect competition. Overall, a farmer is less likely to have discretion over the price of what they sell than, say, a retail store or a lawyer.
(4) Lack of clustering or ‘QWERTY‘ effects. It is an obvious observation that firms in particular industries tend to cluster together geographically. Manufacturing requires a continuous stream of inputs, so firms at different stages in the supply chain will group together to minimise transaction costs. Manufacturing often – though not always, to be sure – requires workers with a particular set of skills, so employees and employers who best match each other will tend to converge. Services, by their nature, require face-to-face interaction, as well as even more specialised skills, so they too will group together. In both cases the easy transfer of knowledge around clusters also helps significantly. Clusters become self-reinforcing: you set up shop in a cluster because everyone else in your industry is there. QWERTY effects create emergent properties that may suggest a role for government intervention.
However, agriculture, in most cases, does not exhibit QWERTY-like characteristics. First, agriculture requires large expanses of land, so it is difficult to form ‘clusters.’ Second, most agricultural labour is not particularly specialised. Third, agriculture follows an obvious harvesting cycle, so rather than a continuous stream of inputs there are intermittent large purchases of supplies, making transportation costs less of a systemic issue. Fourth, agriculture does not really rely on information about new trends, management, techniques or what have you; it has followed similar methods for centuries.
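The diminishing returns described in point (2) can be sketched numerically. This is my own toy illustration – a Cobb-Douglas production function with an arbitrary exponent and a fixed stock of land, not data about any real farm:

```python
# Toy illustration (my own, purely hypothetical numbers): output from a
# Cobb-Douglas production function Q = A * L^0.5 * T^0.5, with the
# stock of land T fixed. Piling more labour onto the same land yields
# progressively smaller additions to output.

def output(labour, land=100.0, tfp=1.0):
    """Total output given labour, a fixed quantity of land, and productivity."""
    return tfp * (labour ** 0.5) * (land ** 0.5)

def marginal_product(labour):
    """Extra output from employing one more unit of labour."""
    return output(labour + 1) - output(labour)

# The marginal product of labour falls as employment rises:
assert marginal_product(1) > marginal_product(10) > marginal_product(100)
```

Rising marginal cost is just the flip side of this: each extra unit of output requires ever more labour, so at a given wage the cost of producing it rises.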
The reader might note that I’ve primarily been referring to extensive agriculture, rather than intensive agriculture – market gardens and so forth. Intensive agriculture does share some characteristics with extensive farming: it produces the same type of goods, for a start, so much of the above still applies. Nevertheless, its use of technology and organisation is greater than in extensive farming, and market gardens generally take up a smaller area, which suggests that the perfectly competitive model may not be appropriate. Modern market farming might be thought of as a way to ‘capitalist-ise’ agriculture, hence rendering the perfectly competitive theory inappropriate.
So what are the implications of this, for extensive farming at least? Seemingly, our conclusions will align with those of basic economic theory. Price controls and subsidies are not advised under normal circumstances or in the name of long term policy goals; a monopoly would probably not be a result of innovation and would be unlikely to be superseded by technology, and so would be unambiguously bad.
Most of all, economists will be pleased to hear that their favourite theory, comparative advantage, is more directly applicable in the world of agriculture. This is for two main reasons. First, the most commonly used rationale for why a country might have ‘comparative advantage’ – resource endowments – is obviously applicable in agriculture: nobody questions why the UK doesn’t try to create a cocoa industry, or why New Jersey doesn’t grow as much wheat as Iowa. Fertility of soil and climate are determined by powers mostly beyond humanity’s control, and we must specialise accordingly. Second, unlike in manufacturing, short term losses from trade will not strengthen an industry to the point where it is more efficient in the long term.
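As a toy numerical sketch of the endowments point (the regions, crops and yield figures below are entirely hypothetical, chosen only to make the arithmetic visible), comparative advantage falls out of a simple opportunity-cost comparison:

```python
# Hypothetical yields in tonnes per hectare: (wheat, cocoa).
# A temperate region grows wheat well and cocoa barely at all;
# a tropical region is mediocre at wheat but good at cocoa.
yields = {"temperate": (3.0, 0.1), "tropical": (1.0, 0.8)}

def opportunity_cost_of_wheat(region):
    """Tonnes of cocoa forgone per tonne of wheat grown on a hectare."""
    wheat, cocoa = yields[region]
    return cocoa / wheat

# The temperate region sacrifices far less cocoa per tonne of wheat,
# so it has the comparative advantage in wheat and should trade for cocoa.
assert opportunity_cost_of_wheat("temperate") < opportunity_cost_of_wheat("tropical")
```

Note that the comparison is of ratios, not absolute yields: even if one region were better at both crops, the opportunity costs would still generally differ, and specialisation would still pay.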
This is basically a ‘market knows best’ mantra that may not sit well with my regular readers. To be sure, there will still be exceptions where governments might intervene: environmental concerns; ensuring national self-sufficiency; emergencies; basic standards. Nevertheless, the disaster that is the CAP, with absurdities such as food mountains and paying farmers not to use their fields, as well as the effect it has on farmers in poor countries, seems to illustrate that if economists’ favourite creeds hold anywhere, it’s in agriculture.
Model-wise, there will still be issues with perfect competition even in agriculture, where it is at its most relevant. I fully expect that superior, more comprehensive theories than that of the perfectly competitive firm can be (and have been) developed for agriculture. Nevertheless, insofar as perfect competition might apply to anything at all, it seems most suited here. It would at least be a start for economists to admit certain theories have only limited application, instead of extrapolating highly restrictive models onto situations where they don’t apply.
Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.
To illustrate this, consider Milton Friedman’s famous exposition of the as if argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game by using such equations, even though this wouldn’t strictly describe the mechanics of the game.
There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what make the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for the social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness, making perfectly fungible trade offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what makes the study of economics worthwhile.
I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to maximise their expenditure after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from an economic theory. The aforementioned example completely invalidates Ricardian Equivalence. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.
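Krugman’s point can be put in trivially simple terms. The sketch below is my own framing (a hypothetical ‘hand-to-mouth’ share parameter, not Krugman’s exact example): strict Ricardian equivalence predicts that a debt-financed tax cut changes consumption by exactly zero, so even a small fraction of households who simply spend the cut breaks the prediction.

```python
# Hypothetical sketch: Ricardian households save a debt-financed tax
# cut in full (anticipating future taxes); 'hand-to-mouth' households
# simply spend it. Aggregate consumption then moves by the size of the
# cut times the hand-to-mouth share.

def consumption_response(tax_cut, hand_to_mouth_share):
    """Change in aggregate consumption from a debt-financed tax cut."""
    return hand_to_mouth_share * tax_cut

assert consumption_response(100.0, 0.0) == 0.0              # pure Ricardian world: no effect
assert abs(consumption_response(100.0, 0.1) - 10.0) < 1e-9  # a small deviation, a nonzero effect
```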
If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Ones such as how the banking system works; how firms price; behavioural quirks; the fact that marginal products cannot be well-defined; the fact that capital can move across borders; and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)
I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is only done one assumption at a time, preserving the others. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.
I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to emergent properties of individually optimal behaviour. In any case, economists should be wary of the as if argument even without accepting my crazy heterodox position.
The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Resting on an ‘as if’ counterfactual is only defensible when we don’t know how the system actually works. Once we do know how the system works – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.
This is a compilation of my objections to the main arguments of right-libertarians (or propertarians) done as an FAQ (based on the fact that my FAQ for economists was pretty popular). I hope here to persuade libertarians that things are more complicated than their framework, neat as it is, implies. Whether it will succeed is another question.
Writing these arguments revealed an interesting recurrence: once the libertarian framework is picked apart, the debate collapses back to where it’s always been. The various binary distinctions libertarians make (voluntary/coercive, government/market, positive/negative liberty) fall apart upon critical inspection, and we then have to take things on a case by case basis in the fuzzy world of morality, trade offs and so forth. It strikes me that the libertarian framework tries to provide easy answers, to sidestep this debate.
What do you have against liberty? Why do you statists always try to rationalise ways to control our lives?
Slow down! If everyone who criticises you is automatically the bad guy, that doesn’t leave much room for productive debate, does it? For what it’s worth, I’d characterise libertarians as those who are so skeptical of the state that they think it should only protect the most powerful, but that’s no reason to dismiss them as the bad guys before we’ve even started. But more on that later – for now, just try not to assume I am Stalin reincarnated.
But libertarianism is about liberty. What justification do you have for infringing on liberty?
Again, this attitude leaves open the actual question of whether libertarianism really does improve individual liberty. Libertarians generally distinguish between positive and negative liberty, where positive liberty is the freedom to command resources to realise certain ends, while negative liberty is the extent to which one is (or isn’t) constrained by other moral actors. Since a low degree of positive liberty is, unfortunately, imposed by nature, the only thing humans as moral actors can do is ensure we don’t restrict people’s negative liberty.
However, this distinction is functionally meaningless. A starving man at a shop cannot take food because he will be arrested, or at least kicked out – he is constrained by another moral actor. The libertarian might reply that property rights helped create that resource, so the starving man is no worse off than he would have been without property rights. My first response to this is “so what?” It doesn’t change the functional relationship between the starving man and the food, and it raises the question of whether we can harness the resource-creating power of property rights to create more just outcomes. Or just let the guy have some food through redistribution.
Taxes are theft! Why do you think you can steal from people?
First, it would be easy to turn the question of wealth creation raised in the last section around on libertarians and ask exactly how the government can be said to ‘steal’ resources that its own actions created. Most innovation has its roots in government research and development, and many of the institutions upon which capitalism is built are state-backed. These are the facts; going into unverifiable counterfactuals about how things would be better with ‘less’ government is just speculation. The moral question of whether government should ‘intervene’ is undermined by the fact that it already has.
Even more importantly, institutions strongly influence the pretax income distribution. Enforcing property rights and contracts, and preventing force, fraud and theft, still involves significant political decisions. For example, implied contracts are an incredibly tricky area of law; so are intellectual and environmental property rights, where the nature of the property itself raises difficult questions. Ownership of some things (votes, people, identities) is generally prohibited, as are certain contracts (slavery, murder-suicide pacts, anything entered into by children or the mentally ill). All of these decisions, and many more like them, involve value judgments, historical path dependence, and sometimes arbitrary choices. And they all influence patterns of production, distribution and exchange. There is no neutral ‘baseline’ distribution, and there is no way of keeping politics out of distribution. A similar argument can be made about individual choice.
But if distribution results from voluntary actions, then what is the problem?
Obviously, even if decisions are voluntary, they will be influenced by the types of political decisions outlined above. But even beyond that, there are two problems with the ‘voluntarist’ perspective.
The first is the binary distinction between ‘voluntary’ and ‘coerced’ action, which leads to a lot of problems. Using it, I could argue that nobody in the developed world is really ‘forced’ to obey the law, because they could move country. Obviously it would be silly to say this: one can’t expect people to uproot themselves from their family, friends, location and career, so functionally people do not have much choice about obeying laws. Another example of the limitations of the libertarian line of argument is that one could use it to frame the decision not to obey the law as a ‘voluntary trade off’ between, say, prison and the alternative.
A better way to think of the distinction between voluntary and involuntary action is as a spectrum. We might measure how voluntary someone’s action is by how much it is influenced by factors outside the persons/objects involved in the immediate decision. Under such criteria, few actions can be considered truly ‘voluntary’; there are always outside influences on decisions, however small or large. At the less significant end of the spectrum we might have travel costs; we might then pass through peer pressure and, for workers, the threat of poverty; we would end up at something like the threat of being killed or tortured. The extent to which actions are voluntary must be considered on a case by case basis; we cannot just make a binary distinction and apply one-size-fits-all policies on that basis.
The second problem with voluntarism is the Nozickean justice principle most libertarians implicitly or explicitly respect. This is based on the idea that if voluntary actions led to a situation, that situation must be just. The problem is perhaps best illustrated within one of Robert Nozick’s own thought experiments: the Wilt Chamberlain example (as it goes, this is also a situation where one could accurately describe the agents’ behaviour as purely voluntary). Nozick suggests that if everybody at a basketball game volunteered to pay Wilt Chamberlain a small amount of money, the end result would be a vastly unequal income distribution, but since everybody had donated ‘voluntarily,’ there would be no problem regarding the justness of the outcome.
But while it is true that everybody at the basketball game volunteered to donate their own money, it is not true that they agreed to anyone else donating money, and it is certainly not true that they all agreed to everyone collectively donating a fortune. The principle is actually based on a subtle switch from individually voluntary choices to collectively voluntary ones, one which doesn’t hold up to scrutiny. The libertarian may reply that the choices of others are none of my/others’/the state’s business. But if the inequality has pernicious effects (which is a separate issue) then it is very much everyone’s business. Since the voluntarist principle cannot be applied collectively, we are back to discussing the effects of inequality. This disparity between individual choices and collective outcomes is the reason we have voting, political movements and so forth to help bridge the gap between the two.
Politics? Don’t you know any public choice theory? Democracy is a sham!
Well, modern democracy is probably a sham. But overall, public choice theory is simply refuted by the evidence, something that people do not note nearly often enough. Political scientists have long known – and empirically confirmed – that voters and politicians mostly act in what they perceive to be the public interest, rather than for selfish gains. This isn’t to say that there is no truth to public choice theory, but the evidence suggests it is more appropriate to model politicians and voters as public servants who are buffeted by special interests than as selfish maximisers who occasionally stumble upon a beneficial policy. The result is that democracy is a far more effective tool for translating collective interests into policy than libertarians might suggest.
But government action, democratic or not, rests on the initiation of force. When is that ever justified?
The special status libertarians accord to ‘force’ falls apart even on its own terms. The fact is that most laws are not actually enforced by force, but by the credible threat of force. These are, by definition, two different things. I know that if I try to go into a nightclub without permission, the bouncers will stop me or drag me out. This isn’t the same experience, and doesn’t have the same moral implications, as them actually dragging me out when I do run in. The same logic applies to laws libertarians approve of: to argue that the credible threat of force is the same as force is to argue that people are constantly the object of coercion, due to what they can and can’t do because of others’ property rights. Overall, the reduction of all laws to someone forcing you to do things at gunpoint is a stretch, to say the least.
Regardless of force, governments cannot know better than individuals/the market. So why should they intervene?
The framing of governments versus markets is largely a false dichotomy. I have already noted the inevitable political decisions that go into even what libertarians consider their baseline institutions. Beyond this, there are immigration laws, limited liability, laws that define shares and protect shareholders, laws that define companies, and so forth. These so-called ‘interventions’ do not require a government to ‘know better’ than any one individual; they are defined to have a systemic impact that cannot be enforced by any individual or group of individuals. Furthermore, the question of where we draw the line between ‘intervention’ and ‘the market’ is up for debate – if such a line really exists at all.
Even if the government backs the institutions required for markets, it sucks wealth out of the economy to do this. Hence, it should do as little as possible, right?
Saying ‘governments can’t create wealth’ is a sweeping, largely vacuous statement based on a superficial zero-sum view of taxation as being ‘extracted’ from the private sector. In fact, taxation is just one prong of a symbiotic relationship between the private and public sectors. If we define wealth creation as the creation of valuable resources, it’s clear that, say, teaching and infrastructure ‘create wealth.’ We’ve already seen just how large a source of wealth the government can be through its funding of research and development. Furthermore, many state-backed institutions are historically a prerequisite for substantial wealth creation to take place at all. Again, obscure, selectively interpreted examples like Medieval Iceland, or speculative counterfactuals about what things would be like without the government, are ahistorical wishful thinking. Give me a clear example of capitalism as we know it coming out of nowhere and I’ll give you the time of day.
That reminds me – you seem to be primarily referencing minarchist libertarians. What about anarcho-capitalism?
Anarcho-capitalists, as far as I’m aware, have yet to answer exactly what a landowner is if not a de facto state. A state is defined over a particular territory, and (theoretically) has control over what happens in that territory. Ownership is also defined as having control over an object; in the case of land, this quite clearly leads to each landowner effectively being a sovereign state, however small. People do not have a ‘choice’ of whether they exist on land, and nobody created land, so there is no justification for those with ‘the biggest gun’ controlling it, while those without land are at their whims.
The extremely unsatisfactory response that, for some reason, everyone would respect the libertarian ideal and not engage in force, fraud and theft is really just wishful thinking. I can’t help but wonder what libertarians would say if a socialist made a similar argument about people suddenly becoming angels under socialism. Similarly, any response centred on how landowners would be competitively inclined to do Good Things could equally be applied to states, and so would be an exercise in special pleading.
OK, maybe you’re not Stalin. Do you have anything else worthwhile to say?
Probably not, but just in case, here are some more of my posts on libertarianism:
See here for more on the flawed positive/negative liberty distinction.
See here for a discussion of the problems with seeing ‘government’ as a homogeneous, all-encompassing entity.
See here for my criticism of libertarians’ perceptions of individual choice.
See here for a more detailed discussion of the faulty government/market dichotomy.