I have only really just started studying Marxism in depth (though I am stopping short of Capital for now). Subsequently, while reading Bertell Ollman's Alienation: Marx's Conception of Man in a Capitalist Society, it once again struck me that (right-)libertarianism is really just lazy Marxism. In many ways libertarianism reads like the first third of Marxism: the area which explores methodological questions and the nature of man. Both libertarianism and Marxism are generally fairly agreeable – and in agreement – in this area, but the former never really fleshes out its arguments satisfactorily. Often I find that libertarians, after describing some basic principles (non-coercion etc.), make the jump to property rights and capitalism being the bestest thing ever, without fully explaining it.*
I will focus primarily on Robert Nozick and Ludwig von Mises here, as they are the only two libertarians who really explored libertarianism from basic principles of man and his relationship to both nature and economic activity (Murray Rothbard's work was really an interpretation of Mises in this respect). Overall, I think Nozick and Mises combine to form a fair reflection of minarchist libertarianism.
The state of nature and the nature of man
In Anarchy, State, and Utopia, Robert Nozick's 'State of Nature' is one where there is no state (government). He asserts that individuals have rights to protect themselves from aggression, they have rights to the fruits of their labour, and they have the right to cooperate voluntarily, free from deception and theft.
It has always struck me how incomplete Nozick's exposition of the state of nature is. That man should be a priori free from aggression and entitled to whatever he produces is not really in dispute. What bothers me is that Nozick never really attempts to explore the relationships between different men, between men and society, and between men and nature. For Nozick, an abstract expression of individual rights could be extrapolated up to the whole without much discussion of how things link together. This is especially odd because he demonstrated that he understood the limits of such individualism in his incisive critique of methodological individualism. So much the worse for his philosophy that he didn't apply this thinking to it.
Enter Marx. Marx emphasised that, naturally, man had ‘powers,’ which are the means by which he achieves specific needs. Eating is a power; hunger is the relevant need. Thinking is a power; knowledge is the relevant need. (The former is a ‘natural power,’ common to all animals; the latter is a ‘species power,’ specific to man). By exercising different powers, the individual emphasises different aspects of themselves, and depending on who they are with, which society they are born into, and their available resources, different aspects of the individual will appear to be important, and different conceptions of freedom, happiness, and even the individual himself will emerge.
This may seem like a digression, but in fact it is essential. Once you have established that the abstract individual, when interacting with society, with others, is a very different beast to a lone man in the woods, it leads you down a different ethical path. What becomes important are the interactions the individual engages in, rather than merely the individual himself. It is not enough merely to say an individual should be granted certain rights and that’s that; we have to explore how these rights affect the individual, even by virtue of being defined.
To define every man as an island who cooperates with society and others only through discrete voluntary actions is to diminish the importance of how society and others shape these actions. More than this, it ignores how the rights themselves interact to produce outcomes that may be inconsistent with the principles upon which those same rights, in abstract, were built. Libertarians will likely think I am about to suggest we strip individuals of their rights, but this is not the point. The point is that the rights are not a neutral baseline, and the emergent relations governing these rights could be opposed to individual freedom.
For example, private property is surely the foundation of libertarianism (private property is to be distinguished from possession, btw). But Marx did not think private property, the division of labour, wage labour and capitalist exchanges could ever take place independently; each necessarily implied the others. Any degree of material wealth that qualifies as 'property' implies accumulation, which implies producing more than one labourer can manage, which implies employing others, which implies splitting up their tasks into specific, repetitive actions, which implies that what they produce is not necessarily what they need to survive, which implies they must purchase this elsewhere, and so forth. Adam Smith observed this interrelation when he noted that, "as it is the power of exchanging that gives occasion to the division of labour, so the extent of this division must always be limited by the…extent of the market." I will explore why this may be undesirable from the point of view of individual freedom below, but for now it is sufficient to show that such an emergent property amounts to more than the individual rights from which it originates.
Purposeful action is productive action (which is why capitalism sucks)
Mises claimed man acts to attain certain ends, and only by achieving these ends can he be said to engage in purposive action. If there were no ends to be sought, man would not act; that he acts tells us he has unfulfilled needs. Voluntary exchange gives man the choice and ability to engage in purposeful action with an ever-expanding range of ends at his disposal. The entrepreneur’s role in this is vital, as he channels the purposive actions of many people in the market place, allowing them to attain the ends they seek. This creates an evolutionary process through which man continually realises his chosen ends.
Marx too believed that only man is capable of purposive activity, and this is what separates man from other animals. However, for Marx, the most purposive activity was labour, not consumption. Man engages in productive activity for two main purposes: (1) the end product of his labour and (2) the ability to exercise certain powers of his choosing when labouring, for whichever reason he deems appropriate (efficiency, enjoyment of the task itself, development of skills, etc.). Marx saw capitalism as alienating because in a capitalist system, the individual becomes separated from both the product and the method of production, as well as the time and location in which it takes place.
This separation can be illustrated by an exchange between the worker and the capitalist. The capitalist pays the worker wages so that the worker will produce what the capitalist requires. In this exchange, the worker becomes separated from the product of his labour, producing not what he wants but what he is paid to produce. He is also required to produce not how he chooses, but at a time, in a location and in a manner chosen by the capitalist. The worker then uses the wages he earns to purchase other products produced under similar circumstances. The end result under capitalism is that individuals are tied together primarily by what the capitalist-guided division of labour demands, rather than by their own autonomous, purposive action. The result is the worker's alienation from his own labour and also from the products he purchases (this applies to the capitalists too, in a different form; after all, they are on the flip side of the relationship).
So we have two competing narratives here. In one narrative, the individual is merely at the whims of capitalism, while in the other narrative, the individual exercises control over capitalism. Which is more accurate? Ultimately, the question boils down to whether production or consumption is the more purposive activity.
In consumption, the means is exchange, which requires little in the way of personal development or planning. What matters most in consumption is the end result: a good or service. Many goods purchased are interchangeable, and the act of consumption itself is relatively brief.** Services are by definition performed by somebody else, and generally speaking the buyer is only interested in the end result (the outcome of a lawsuit; their health; a new conservatory). I'm not suggesting that purchasing goods and services is not useful or yields no positive results; I am merely pointing out that, as far as man's self-actualisation goes – as far as purposive action is defined – consumption neither requires nor achieves much in the way of planning, personal development or uniqueness.
In contrast, during production the individual has both means (productive activity) and an end (the product) in mind when he sets out to act. The productive activity itself cannot be separated from the individual and so the two are inextricably intertwined. Furthermore, productive activity requires and/or results in building up some personal attribute, whether an individual's capacity to reason, his physical strength and fitness, his perseverance or anything else. Generally these attributes will last beyond the original act of production. The end result is both that the individual achieves some goal he chose, planned and set out to achieve, whatever its exact nature, and that through the process he exercises his individualism by realising certain powers (again chosen by him).
The question for Miseans is how exactly the individual can “discover causal relations” between his purposive productive activity and what he produces if he is not producing what he wants, but doing it under the command of someone else. Mises glosses over the role of the worker in his exposition of purposive action; in fact, he explicitly rejects the notion that labour can be considered ‘action,’ because he considers only ends, rather than means, important for man’s individual development. But are any of man’s actions as rational, as explicitly thought through, as deliberate and purposeful, as labour? For Marx, the tragedy was when labour became a means to an end; Mises merely assumed this was the case.
The heart of libertarianism is the abstract individual, who engages in voluntary actions to attain certain ends, and should be allowed to do this, free from outside interference. But such an abstract philosophy is incomplete and incoherent. In the mainstream, Marx is often projected as disregarding the individual, but in fact, Marx was always highly concerned with the individual. The difference is that Marx’s concern with the individual caused him to zoom out to see the context in which the individual operates, and which aspects of an individual’s character are shaped by the context in which the individual labours. Under capitalism, the most important aspect of purposeful individual action – production – is subsumed, under the command of somebody else, and spurred only by the fact that the work is necessary for the worker’s survival.*** Hence, within his most purposive sphere, the individual is not free to act to realise his own ends through means chosen by him; rather, both the ends and the means are determined by forces outside his control. To me, this doesn’t seem very libertarian.
*To be sure, libertarians do have plenty of fleshed out arguments for capitalism’s efficacy as a system; what I am arguing is that it does not follow from their discussion of man and his nature.
**Durables are an exception, but how often is the joy of these based on one's own work on them? Cars, houses and gardens are the pride and joy of their owners precisely because the owners themselves engage in productive activity on them.
***We must remember the context (!) in which Marx was writing. What he says was literally true at the time; in modern liberal democracies the reality is less stark, but the underlying mechanics of working life, and why people work, remain the same.
PS I have used ‘man’ in this post because that is generally what was used by the thinkers I am discussing. I originally tried it with gender-neutral pronouns but it just became confused and more difficult to relate to the original texts.
It doesn’t make any difference how beautiful the hypothesis (conclusion) is, how smart the author is, or what the author’s name is, if it disagrees with data or observations, it is wrong.
- Richard Feynman
Our empirical criterion for a series of theories is that it should produce new facts. The idea of growth and the concept of empirical character are soldered into one.
- Imre Lakatos
A remarkable characteristic of economics is the sheer staying power of theories, even with a lack of empirical evidence to corroborate the propositions of these theories. In my experience, it is not uncommon for lecturers to remark that the lack of evidence for a theory has been a ‘problem’ for economists (though apparently not enough of a problem for them to throw out said theory). Often textbooks, lectures and discussions of theory make no reference to evidence whatsoever, and where they do it is trivial (for example, representative agent intertemporal macroeconomic theory predicts that governments will run periods of deficits followed by periods of surplus).
In the paragraphs that follow, I'll examine a few cases of where I believe economics has gone off the mark in this respect. Specifically, I evaluate Marginal Productivity Theory, Walrasian Equilibrium, and the Solow Growth Model. I avoid theories such as Real Business Cycle models and the Efficient Markets Hypothesis, partly because they have been done to death, but more importantly to demonstrate that the bad theories in economics are not merely the result of a few 'wild cards' at Chicago. On the contrary, I believe an anti-empirical approach is institutionalised within mainstream economics and that economics must undergo a paradigmatic shift to move away from these theories.
Marginal Productivity Theory (MPT)
The common interpretation of MPT is that it predicts workers will be paid 'what they're worth.' In fact, this is not correct; the theory predicts that the average productivity of workers will be positively related to wages, rather than each worker getting precisely their 'just deserts.' In any case, the result is that MPT predicts that compensation will increase as productivity increases. Hence, graphs such as this one – which you have likely seen before – pose a problem for MPT:
I have seen several responses to the problems presented by graphs like this. The first is that non-wage benefits have risen, which isn’t shown in this data. The second is that the adjustments for relative prices have been incorrectly applied, and consumers have more purchasing power than it first seems. However, estimates exist which take all of these things into account, and they still come to the same conclusions: most people’s overall real compensation is not increasing, even though their productivity is.
Another response would be that marginal productivity did well until the 70s, so maybe it remains useful. This is special pleading. A theory must be equipped to explain all phenomena within its domain (in this case the labour market), rather than being selectively applied where it suits the economist. If a law of physics suddenly stopped working, can you imagine physicists making this defence? Saying 'MPT will work except when it doesn't, and if it doesn't we will throw our hands up in the air and carry on' is not science. The fact is that such a sudden and clear decoupling of wages and productivity poses a clear problem for advocates of MPT, one which requires either a thorough explanation or discarding the theory altogether.
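To be clear about what is at stake, the mechanism MPT relies on can be sketched with a toy profit-maximising firm. The production function, prices and numbers below are my own illustrative assumptions, not anything from the literature:

```python
# With output = tfp * L**alpha, the marginal product of labour is
# tfp * alpha * L**(alpha - 1); a profit-maximising firm hires until
# price * MPL = wage, which ties wages to productivity by construction.
def labour_demand(wage, tfp, price=1.0, alpha=0.5):
    """Solve price * tfp * alpha * L**(alpha - 1) = wage for L."""
    return (price * tfp * alpha / wage) ** (1 / (1 - alpha))

# Doubling productivity (tfp) while doubling the wage leaves employment
# unchanged: under MPT, wages and productivity should move together.
L_before = labour_demand(wage=5.0, tfp=10.0)
L_after = labour_demand(wage=10.0, tfp=20.0)
```

In this setup a rise in productivity that is not matched by a rise in wages is simply inconsistent with the firm's optimising behaviour, which is why the post-70s decoupling is so awkward for the theory.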
Walrasian Equilibrium

Walrasian equilibrium is one of the more absurd pieces of theory in economics (which is saying something). There are two (rational) agents with endowments of two factors of production, which they hire out to two profit-maximising producers. The producers use these factors of production to create two consumer goods, which the consumers then purchase. Everyone behaves as if they are perfectly competitive (they cannot influence prices) and everything happens simultaneously. There is no direct trade; instead, individuals trade through the market (which comes from outside the model).
The behaviour of consumers in this model is tautological. They consume based on a predetermined utility function that cannot be observed. Hence, they consume what they were always going to consume based on the chosen, non-empirical parameters of the model. This doesn’t tell us anything.
The behaviour of producers in this model is observable in the real world and hence not tautological. It is also not what happens in the real world. Some firms maximise profits, but most don't; and the claim that those firms which do maximise profits do so by equating marginal cost and marginal revenue is clearly false.
The only prediction this model as a whole makes is that the initial distribution of endowments will affect what is produced, how much is produced, how it is distributed and the price of what is produced. In other words: the initial resource distribution of a market economy affects its subsequent workings. This is trivial, and easily shown by theories based on more realistic assumptions (such as Sraffa's).
The Solow Growth Model
The Solow model, to me, seems to be a textbook case of 'bad science.' This is clear from the story of its development (a story anyone who has taken development or macroeconomics will know).
The Solow model predicts that, due to diminishing returns to capital, developing countries will catch up with developed countries in terms of GDP. At a low level of capital stock, the potential returns to investment are high (e.g. irrigating or ploughing a previously uncultivated field). As the stock of capital increases, the returns to investment fall and the growth rate of a country levels off. Hence, all countries will converge to a similar long-term growth rate.
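The convergence logic described above can be sketched in a few lines. The savings rate, depreciation rate and capital share below are illustrative values of my own, not calibrated to anything:

```python
# A minimal sketch of the Solow convergence mechanism. Capital per
# worker follows k' = k + s*k**alpha - d*k; because alpha < 1, returns
# to capital diminish, so rich and poor economies approach the same
# steady state k* = (s/d)**(1/(1-alpha)).
def simulate_solow(k0, s=0.25, d=0.05, alpha=0.3, periods=1000):
    """Iterate capital per worker until (approximately) the steady state."""
    k = k0
    for _ in range(periods):
        k = k + s * k**alpha - d * k
    return k

poor = simulate_solow(k0=1.0)    # capital-scarce economy grows fast
rich = simulate_solow(k0=50.0)   # capital-abundant economy slows down
# Both end up at the same steady state -- the unconditional-convergence
# prediction that the cross-country data went on to reject.
```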
That this prediction is false is no longer debated. In the 1980s, William Baumol provided evidence that seemed to support the hypothesis. This was quickly disputed by Brad DeLong, who noted that Baumol's sample suffered from selection bias – he only included countries which were already developed, effectively assuming his conclusion. DeLong included more countries and found no evidence of convergence.
However, economists weren’t ready to give up. The prediction of the Solow model was reframed as conditional convergence: that is, provided countries have the right institutions, social cohesion, etc. they will converge in terms of growth. This, to me, seems trivial. The entire point of development economics is that the conditions in poor countries are not conducive for them to develop and so catch up with the developed countries. The Solow model doesn’t ask how a country might achieve this, but only says that it is a necessary condition for development, something development economists have always known. Hence, the Solow model is irrelevant for the immediate problem of development economics, which is how exactly we can help poverty-stricken countries get off the ground.
Is Economics That Bad?
In the interests of balance, it is worth noting some predictions made in economics that have been either empirically verified or dropped subsequent to falsification. Quantity-of-money targeting was tried, and failed, in a few countries, which led Milton Friedman himself to repudiate it (though economists still erroneously use the framework which led to it). The life-cycle consumption hypothesis (and non-utility-based consumer theory in general) displays good empirical corroboration and has all the hallmarks of a 'good' scientific approach. The Phillips Curve as used by economists was modified in light of evidence in the 1970s. Both the multiplier and the Giffen good are good examples of non-trivial, clear, falsifiable predictions, though I will not comment on the evidence for them because that would take a post for each.
Nevertheless, the record as a whole is not good. Theories from over a century ago look, and are taught, the same way as they were when they were initially adopted. New ideas that are not even disputed by economists, such as behavioural economics, are slow to be adopted, and when they are adopted they are presented as a 'special case,' in a way amenable to the core framework – which is, of course, still taught alongside them. As far as I'm aware, there is no clear-cut case of a neoclassical theory being completely thrown out and never mentioned again. This alone should be an indicator that the scientific method is not at work in economics.
Commenter Dan thinks economics has not yet found its watershed moment:
Think about Biology before DNA was discovered or Geology before plate tectonics was understood, both disciplines had learned a lot but they still lacked a comprehensive model that made everything fit into place.
I am sympathetic with this viewpoint. Heterodox criticisms come at economists thick and fast – personally, I think most of these criticisms are valid and very little of neoclassical economics should be left. Yet neoclassical economics persists.
However, in my opinion this isn't because economics lacks a unifying theory; it's the exact opposite. Economists already think they have found a unifying concept: namely, the optimising agent. Consumers maximise utility; producers maximise profits; politicians maximise their own interests/their ability to get reelected. Sure, there are a few constraints on this behaviour, but – so the thinking goes – it is the best starting point. It all blends together into a coherent theory that can tell a plausible story about the economy. I find economists are resistant to any theory that doesn't follow this methodology.
The typical definition of economics is the study of how resources are allocated. Hence, a unifying theory should empirically and logically do a satisfactory job of explaining prices, production and distribution. Such a theory would be able to underlie virtually any economic model in some form, whether as the wider context of a microeconomic phenomenon, or as the basis of macroeconomic phenomena. No easy task, then, but luckily many approaches of this nature already exist.
Alternative Theories of Behaviour
If we want to stick with agent-based explanations of the economy, there are any number of alternatives to the 'optimising' agent. Among these are Herbert Simon's 'satisficing' agents, behaviour driven by Maslow's hierarchy of needs, and Marxian class struggle.
I consider all of these approaches useful, but none of them sufficient for the task at hand.
In the case of the first two, replacing 'optimising' agents with 'satisficing' agents isn't exactly revolutionary, and Maslow's hierarchy can, in fact, work as a utility function. In both cases we still run into similar problems of aggregation and reductionism, and we end up trying to shoehorn every decision into a particular approach. The simple truth is that agents have many different motivations for their actions, and often these aren't clear even to them.
My main issue with these, and any agent-based approach, is that they aren't necessarily relevant for the wider question of resource allocation in society. Individualist neoclassical economics has to reduce things down to a few agents with only a few goods in order to reach any conclusions whatsoever; I can't help but feel similar problems would emerge here. Class struggle may determine distribution, but it doesn't tell us much about what is produced and at what price it is sold. In order to understand how production takes place and prices are determined, we will have to look elsewhere.
A Theory of Value
The value approach has a lot of pluses. A theory of value underpins the explanation of relative prices, and also has normative implications that recognise the inevitable value judgments in economics. The only problem I have here is that I've yet to find a convincing theory of value – the two most widely known are the neoclassical/Austrian subjective theory of value and the Labour Theory of Value (LTV).
I object to the idea that prices merely reflect subjective valuations for the basic reason of circularity: prices must be calculated before subjective valuation takes place, so they cannot purely reflect subjective values.
I have more sympathy with the LTV (mostly because its proponents seem to have coherent responses to every criticism thrown at it), but I remain unconvinced. The defences of the labour theory of value tend to rest on appeals to 'the long run' and 'averages' of socially necessary labour time. These may be useful but, like the neoclassical 'long run' approach, they seem to leave open the immediate question of what's going on in the economy and what we can do about it.
In my opinion, these approaches both contain some validity, and are not mutually exclusive. I tend to agree with Richard Wolff, who asserts that suggesting one has refuted the other is like saying knives & forks have refuted chopsticks. Both are useful; neither is an all-encompassing theory. I also believe both are compatible, to some degree, with my favoured approach:
The ‘Reproduction and Surplus’ Theory
This approach is the one emphasised by Sraffians and Classical Economists. It starts from the basic observation that society must reproduce itself to survive, and that generally society manages this, plus a surplus. The reproductive approach emphasises what I believe to be an important aspect of capitalism, and perhaps all systems: the collective nature of production. Industries are interdependent; people work in teams; various institutions, often state-backed or provided, underlie all of this. Hence, no special moral status is accorded to prices or the allocation of surplus, except that prices must be appropriate for the continued existence of industries and society as a whole.
On first inspection the 'insight' that society must reproduce itself might be considered trivial, but following through its implications can yield interesting and useful conclusions. The framework can be used to determine prices technically, independently of either preferences or values. It emphasises the interdependent nature of the economy: if one industry or input fails, it has severe knock-on effects. For this reason, it would do a great job of explaining both the oil shocks and resultant stagflation of the 1970s and the 2008 financial crisis, something modern macroeconomics cannot manage.
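The claim that prices can be determined technically is worth seeing in miniature. Here is a toy two-industry reproduction scheme in the spirit of Sraffa's examples; the input coefficients are my own illustrative numbers:

```python
import math

# a[i][j] = amount of good i used up in producing one unit of good j
# (illustrative coefficients, not taken from Sraffa).
a11, a12 = 0.4, 0.2
a21, a22 = 0.2, 0.3

# With a uniform profit rate r, prices must satisfy, for each industry j,
#   p_j = (1 + r) * (p1 * a1j + p2 * a2j)
# so prices form the dominant left eigenvector of the coefficient
# matrix, and the dominant eigenvalue lam fixes the maximum profit rate.
trace, det = a11 + a22, a11 * a22 - a12 * a21
lam = (trace + math.sqrt(trace**2 - 4 * det)) / 2
r = 1 / lam - 1                      # maximum uniform profit rate

# Relative price from the first industry's equation:
#   p1 = (1 + r) * (p1 * a11 + p2 * a21)  =>  p2/p1 = (lam - a11) / a21
p2_over_p1 = (lam - a11) / a21
```

Notice that no preferences or utility appear anywhere: the technical conditions of reproduction alone pin down relative prices and the maximum profit rate, which is exactly the sense in which the framework is independent of preferences and values.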
On top of this, the model is versatile: it can interact with its institutional environment, which determines key variables exogenously (e.g. the monetary system determines interest rates, political power determines distribution). The classical approach is, for example, compatible with class theories of income distribution, post-Keynesian theories of endogenous money and mark-up pricing, and even neoclassical utility maximising individuals! Probably the most promising and complete framework out of them all – I look forward to further developments of this approach.
It is feasible that the task of finding a watershed moment is simply not possible in the fuzzy world of the social sciences. Psychology and sociology are both characterised by competing approaches; psychology in particular has improved since the Freudians were dethroned. If neoclassical economics has taught us nothing else, it's the importance of not being trapped by particular theories for the sake of elegance, which is why there is a lot to commend in the institutional school of economics.
Nevertheless, I think there is scope for exploring unifying principles. Progress in neurology may provide such a foundation for psychology; similarly, ideas such as societal reproduction could equally be applied to sociological concepts such as the role of beliefs, class, sports or what have you. As far as economics goes, such a substantial step forward could be what's required to displace neoclassical economics, whose staying power, in my opinion, cannot be attributed to either its empirical relevance or its internal consistency. Perhaps neoclassical economics persists simply because its building blocks are so well defined that other approaches seem too incomplete to offer their opponents sure footing.
In mainstream economic models, consumers' behaviour is generally assumed to follow a 'utility function.' Consumers derive utility (creatively measured in 'utils') from whatever they consume, and they will attempt to maximise this subject to their budget constraint – and, perhaps, at a later stage, some extra terms to incorporate behavioural quirks, social pressure or what have you. Unfortunately, even with modifications, the concept of utility is an explanation of behaviour that is questionable at best.
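For concreteness, here is the textbook setup in miniature. The Cobb-Douglas functional form, the prices and the income are my own illustrative choices, not taken from any particular model:

```python
# Brute-force utility maximisation along the budget line, using an
# illustrative Cobb-Douglas function U = x**a * y**(1-a).
def best_bundle(income, px, py, a=0.5, steps=1000):
    """Return the affordable bundle (x, y) with the highest utility."""
    best, best_u = (0.0, 0.0), -1.0
    for i in range(1, steps):
        x = (income / px) * i / steps   # spend a fraction i/steps on x
        y = (income - px * x) / py      # spend the remainder on y
        u = x**a * y**(1 - a)
        if u > best_u:
            best_u, best = u, (x, y)
    return best

x, y = best_bundle(income=100, px=2, py=5)
# Matches the analytic solution x* = a*income/px, y* = (1-a)*income/py.
```

Note that the answer is fully determined the moment the function and the budget are written down, which is precisely the 'no real room for choice' problem discussed below.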
The first conundrum – as posed in the title of this post – is exactly what form utility takes. Is it supposed to be some sort of cumulative attribute that people collect as they go through life, like a stat in a video game? Or is it a temporary sensation experienced after consumption, so that economic agents are effectively utility junkies, chasing around temporary highs? There may be a case for regarding anyone who truly maximised utility as clinically insane and in need of help. In any case, thoughtlessly following predetermined utility functions leaves neoclassical agents with no real room for 'choice' – we know what their behaviour will be in advance, and it is unchangeable.
There is also the problem of fungibility: is it fair to suggest that joining a gym gives someone the same kind of satisfaction as eating a donut? Or that eating a donut gives the same feeling as owning a car? These nuances are lost in the aggregated world of ‘utils,’ a unit which has no relation to anything else and hence is hard to verify – at its worst, utility is simply circular: only measurable by the same behaviour it supposedly explains.
Economists have a standard response to contentions that utility is unrealistic. They will assert that, even though utility doesn't literally 'exist' – and surely few would claim it does – it still follows that if preferences obey economists' axioms, then an effective utility function can be derived. That is: utility is not meant to be taken literally, but economists' assumptions are sufficient to ensure a relationship between preferences that is functionally the same thing. So it would appear the only way out for opponents of utility is to critique the axioms. I don't believe this is true, but the axioms are worth critiquing before I explain why.
The two most important axioms required to derive a basic utility function are completeness and transitivity. There are other axioms that are also commonly used – independence, non-satiation, convexity – which are all vulnerable to criticism, but since they pertain to the exact form of a utility function, rather than the concept as a whole, I will focus only on completeness and transitivity. Without these, there is no utility function, whichever way you paint it.
The first axiom – completeness – is the idea that all relevant decisions can be definitively compared to one another: that is, there is no room for 'I don't know.' There are clear problems with this. Often, it is hard to choose between two options, particularly if one is a bundle of many goods (e.g. two shopping baskets). In fact, exhaustively comparing every option is generally computationally intractable as a decision rule. So people may act based on chance or impulse; they may seek advice or ask someone else to make the decision for them. What's more, often people find it difficult to evaluate choices even after they've made them. Sometimes there is no 'correct' choice!
The other axiom, transitivity, implies that people will be consistent in their ordering of preferences. If I prefer A to B, and B to C, I will prefer A to C. It is an important axiom because, even if preferences are complete, a violation of transitivity means that utility functions can basically have any shape and therefore be pretty useless for clear calculations. While numerous behavioural quirks suggest transitivity may be violated under certain circumstances, overall it is a fair axiom – for the individual. However, it has been known for some time that, once we have more than two agents, it becomes impossible to establish a clear, consistent ordering of preferences for the group. This isn't moving the goalposts: it is highly relevant when we are using representative agents for the entire economy. (This problem also applies to the aforementioned independence axiom.)
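The group-level failure of transitivity is easy to demonstrate with the classic Condorcet paradox; the three rankings below are my own example:

```python
# Three agents, each with an individually transitive ranking of options
# A, B and C; majority voting over pairs nonetheless produces a cycle,
# so no consistent group-level preference ordering exists.
rankings = [("A", "B", "C"),   # agent 1: A > B > C
            ("B", "C", "A"),   # agent 2: B > C > A
            ("C", "A", "B")]   # agent 3: C > A > B

def majority_prefers(x, y):
    """True if a strict majority of agents rank x above y."""
    votes = sum(r.index(x) < r.index(y) for r in rankings)
    return votes > len(rankings) / 2

# Pairwise majorities: A beats B, B beats C, and yet C beats A.
cycle = (majority_prefers("A", "B"),
         majority_prefers("B", "C"),
         majority_prefers("C", "A"))
```

Each individual here satisfies the axioms perfectly; it is only the aggregation that breaks down, which is why representative-agent models that paper over this step are so suspect.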
My most important point, though, is that even if preferences do follow all the axioms, utility is still highly flawed. This is because, like so many neoclassical models, all utility functions give us is a static snapshot of the economy (or individual) at a particular point in time, and there is no room for change. The simple fact that preferences are highly volatile and will be different in the morning and the evening, or in summer and winter, is enough to render utility useless for practical questions about the economy, which must surely incorporate time. Similarly, preference reversal has shown that the way options are presented has a large impact on the choice made by somebody, suggesting again that underlying ‘preferences’ are highly subject to change, and not really useful for the practical purpose of predicting behaviour. One can only wonder how utility might deal with a theory such as multiple selves, which would surely create the aforementioned aggregation problems for preferences, but for one person!
Now, I can almost hear the cries of “ah, but what is your alternative?” Actually, that doesn’t matter for the immediate critique. If I have a map of London Underground and I’m in New York, I’m not going to use it (even less so if I have a map of a fantasy land that exists only in the minds of economists). To push the analogy a little further, it is worth asking what I would do in this situation. I can think of two possibilities: either ask for help, or follow some simple rules of thumb based on what knowledge I have. This is the strategy economists should adopt.
In the case of ‘asking for help,’ what I mean is that economists should turn to other social sciences; namely, psychology, which has a far more empirically driven methodology than economics and has numerous explanations of behaviour. Economists truly interested in understanding human behaviour – rather than preserving their favoured assumptions – should collaborate with psychologists to create sound behavioural foundations.
Until then, economists should be content with simple empirically observed rules of thumb and intuitive aggregate relationships (they already do this with the marginal propensity to consume). Objections of ‘but Lucas Critique‘ are special pleading, since preferences are also liable to change with political decisions. In fact, I’d shout ‘Lucas Critique’ right back at economists, and suggest that they spend less time on the impossible task of making their models ‘immune’ to the Lucas Critique, and more time evaluating the ever-changing relationship between policy and observation. It is better for economists to be vaguely right than precisely wrong.
Out of all the concepts in neoclassical economics, none is more imaginary, absurd and empirically falsified than utility. Economists supposedly follow a methodology of strict positivism, and based on the experimental evidence against utility, there is surely no reason to keep it. Yet for some reason, it doesn’t seem to attract the same level of criticism as other areas of neoclassical economics. Personally, I am puzzled as to why.
Neoclassical (and Austrian) economics as a whole tends to emphasise market forces as the dominant determinant of employment, distribution and output in the economy. In the neoclassical theory of the firm, firms are something of a ‘black box,’ inside which uniform inputs are uniformly processed into uniform outputs. The firm can be thought of as an agent – or collection of agents – maximising some goal subject to resource constraints. The most commonly used version of this is the perfectly competitive firm, which treats prices as a given, has the single aim of maximising profits and makes ‘normal’ profits. However, even more elaborate theories of the firm – such as ones where managers have objectives that conflict with those of shareholders – retain the standard assumption that the exhibited behaviour of the firm can be deduced from the behaviour of optimising agents inside it. Firms are rarely assumed to have internal differences in production techniques. Instead, they simply serve as channels, coordinated by supply and demand, through which resources are allocated.
But there is good reason to believe producers, rather than the impartial ‘laws’ of demand and supply, are the dominant force in an economy. It is a stretch to suggest that products are merely the expression of consumer preferences; after all, consumers rarely have direct input into the production process. Products are created by a firm, and the consumer’s role is passive: they can only choose whether to accept or reject them. There also exists a power asymmetry between producers and consumers (and workers): since producers are the ones who own the products, they can ‘hold out’ for longer than those without. A capitalist alone can subsist; one who is merely a worker and/or consumer relies on the capitalist(s) for employment, goods and services.
In neoclassical theory, any deviations from perfect competition – and even the mere existence of firms – are thought to be either a source of inefficiencies, a result of them, or both. Hence, an ‘ideal,’ Pareto Efficient economy is thought to be one of perfectly competitive, tiny firms, which have no individual impact on the market in which they are situated. Any questions asked about firms proceed from the premise of which ‘frictions’ we can blame for the observed real-world deviation from this ideal.
Several questions about how firms work and their role in the economy are never asked in neoclassical economics. First, what really goes on inside the firm: how are organisation and management used to impact the ability of the firm to convert inputs into outputs? Second, what is the nature of ‘market power?’ Could it be that some industries are so characterised by ‘market power’ that it no longer makes sense to talk about ‘the market’ as a meaningful concept? Third, to what extent do ‘imperfections’ such as these – organisation, market power, scale – actually create beneficial effects that we would not observe in the world of perfect competition?
I believe that, under contemporary capitalism, firms have such an impact that it makes more sense to use ‘the firm’ as an epistemological starting point than ‘the market.’ I also believe that, at least from a material point of view, large firms are probably a superior system to one resembling the perfectly competitive ‘ideal.’
The competitive ideal seems illogical when applied to the real world. Market forces can be inherently uncertain and costly to adjust to. Any firm which is wholly subservient to market forces, and hence has no control over its future, is simply a terrible firm, and a poor prospect for any potential investor, shareholder or worker. Even consumers prefer an established brand they can trust, at least in the absence of regulation. Hence, no firm would go to an investor, shareholder or bank and say “I have a product, let’s see whether the market likes it or not;” what is expected is a clear strategy.
It would, of course, be wrong to suggest that firms are not under threat from the development of new technologies and from the demands of consumers. Even well established companies go bankrupt from time to time. Nevertheless, many firms persist for a long time, either because their position is that strong or because they insure themselves against market forces. Research and development can ensure a firm always has something new to offer and can adapt, should demand for a product fall or a new release flop. Horizontal integration – selling different products, perhaps in entirely different markets – can broaden a firm’s consumer base (Google’s massive diversification over the last decade is an example of this). Brand proliferation – the same firm creating multiple brands – can serve a similar purpose (think of the different cereals produced by Kellogg).
The challenge for a firm is to establish a degree of control over its respective market; the degree to which it manages this will be a determinant of its success; its ‘competitive advantage.’ Hence, many of a firm’s actions have the purpose of cementing that firm’s position in the marketplace, rather than simply responding passively to outside market forces. Numerous behaviours exhibited by firms support this idea:
- Some firms seek market share as opposed to profits: they want to make sure they have sufficient control over their industry.
- Prices don’t change constantly depending on the state of the economy; firms keep them stable for long periods to reduce the costs of constant recalculation and to make reliable projections possible.
- Branding, advertising, marketing and various offers are used to gain and retain customers so that the firm has at least a minimum flow of demand it can rely on. The mantra that it is far more costly to acquire new customers than retain old ones is well known; hence, firms try to make sure they are as unaffected by the whims of consumers as possible.
- Firms control supply through deals, perhaps exclusive, with suppliers; better yet, they can establish control of the supply chain themselves (vertical integration).
Firms also need to establish control over the labour market, as they often rely on the commitment of workers to a specific position in their organisation. The fact that knowledge and skills are often organisation-specific makes the cost of leaving – for both employer and employee – higher, and this effect becomes more amplified as one moves up the hierarchy of the organisation. The result is that the costs of even one worker leaving are often estimated to be well above their salary. It is no use starting a project if you know that, half way through, your manager – with his unique knowledge of what is going on – will just leave and work elsewhere. So firms retain workers with promises of career progression and rewards, as well as establishing a psychological commitment to their organisation. The most extreme example of this is the Japanese ‘employment for life‘ approach, which has proved to be remarkably competitive; more so than many of its western counterparts.
The existence of long-lived companies with significant influence over forces supposedly determined by ‘the market’ creates another problem for the state-market dichotomy. Many companies are economically bigger than countries or state and local governments, and hence their decisions have considerable political implications. A large company setting up shop in a small town, or even a small country, can significantly alter the landscape there.
Unfortunately, many supporters of “free markets” are driven to defending the actions of large, centralised entities as apolitical. This perspective is based on the false premise that they are simple conduits for scarcity and consumer preferences, rather than actively determining and influencing these things. It is clear this influence has largely driven us away from what economists typically mean when they speak of a ‘market.’ The implication is that, whatever you want to call the current system, it is vital that the entities which characterise it, with their significant impact on production, distribution and exchange, should be put into the political spotlight.
I have previously referenced my support for the idea, advocated by Keynes and Adam Smith, that low long term interest rates are a desirable stance for monetary policy. The claim about the effect of low rates is two fold:
(1) Low rates reduce the cost of investment and so encourage it.
(2) Low rates reduce the yields required to pay back debt incurred, and hence encourage more sustainable, less speculative investments. To phrase it conversely: high rates push people into speculation as they attempt to recoup the money they owe.
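To illustrate claim (2) with some hypothetical arithmetic: if a project is financed by an amortising loan, the annual return it must generate to service its debt rises with the interest rate. The figures below are purely illustrative assumptions, not data from the series discussed later:

```python
def required_annual_return(principal, rate, years):
    """Minimum flat annual cash flow needed to amortise a loan of
    `principal` at interest `rate` over `years` (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

# A hypothetical £1m project financed entirely by debt over 10 years:
for r in (0.025, 0.05, 0.08):
    payment = required_annual_return(1_000_000, r, 10)
    print(f"rate {r:.1%}: must earn £{payment:,.0f} per year to stay solvent")
```

At 2.5% the project must earn roughly £114k a year; at 8%, roughly £149k – a much higher hurdle, which pushes borrowers toward higher-yielding, more speculative ventures.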
Commenter Roman P. is not convinced by this argument. I am willing to admit I have, thus far, provided insufficient evidence for this, mostly due to lack of data. However, I have assembled what data I can below, and believe it offers broad – though not definitive – support for this hypothesis.
A few caveats. First, let me establish clear criteria for what I consider to be ‘low rates.’ John Maynard Keynes wanted the long term interest rate to be as low as 2.5%; he even remarked that 3.5% would be too high for full employment:
There is, surely, overwhelming evidence that even the present reduced rate of 3½ per cent on long-term gilt-edged stocks is far above the equilibrium level – meaning by ‘equilibrium’ the rate which is compatible with the full employment of our resources of men and equipment.
For most of the data, the rate is above even the 5% that Adam Smith thought should be the cap, lest the capital of a country be “wasted.” Obviously we shouldn’t believe something simply because Keynes and Smith did, but hopefully the evidence I present below will lend some credibility to their arguments.
Second, what matters will not be just the interest rate; expectations – and the realised trajectory – of the interest rate will also be important. If the rate is rising then it will have a similar impact on investment decisions as an already high rate. If the Central Bank (CB) is committed to a policy of low rates, then it will be far more stabilizing than if rates happen to hit a low point and subsequently bounce back. We do have a test for an explicit low rate policy: the post-WW2 arrangements. It is common knowledge that the stability in that period was unprecedented.*
Third, let me make the obligatory ‘correlation =/= causation’ remark. Nevertheless, correlation at least gives us a clue about causation. A further clue comes if what we think is the causal variable (the interest rate) moves first, and the dependent variable (growth) moves second. It also helps that we have a valid theoretical link for the proposed causation. Lastly, it is empirically verified that businesses consider long term rates the most important interest rate in their borrowing decisions.
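The ‘moves first’ clue can be made concrete with a simple lead-lag correlation. The sketch below uses synthetic data with an assumed two-period lag – not the actual series – purely to show the method: compute the correlation between the rate and growth at several lags and see where it peaks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: 'growth' responds negatively to the rate with a
# two-period lag (an assumption for illustration, not an estimate).
n, true_lag = 200, 2
rate = rng.normal(size=n)
growth = np.empty(n)
growth[:true_lag] = rng.normal(size=true_lag)
growth[true_lag:] = -0.8 * rate[:-true_lag] + rng.normal(scale=0.3, size=n - true_lag)

def lagged_corr(x, y, lag):
    """Correlation of x(t) with y(t + lag); a positive lag means x leads y."""
    if lag == 0:
        return np.corrcoef(x, y)[0, 1]
    return np.corrcoef(x[:-lag], y[lag:])[0, 1]

# |correlation| should peak at the lag where the rate leads growth:
corrs = {lag: lagged_corr(rate, growth, lag) for lag in range(6)}
best_lag = max(corrs, key=lambda k: abs(corrs[k]))
print(best_lag, round(corrs[best_lag], 2))
```

On real series one would of course also need to worry about trends, autocorrelation and common causes; this only shows the basic lead-lag logic.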
So what does the evidence look like? Let’s start by taking a look at the ‘Prime Loan Rate’ in the US for the second half of the 20th century. This is the interest rate banks offer to their most stable customers, mostly big businesses:
Every single recession is preceded by an increase in rates. Not every rise in interest rates creates a recession – there is one peak without a recession around 1983-4. However, this may well be explained by movements in the base rate; it dropped from 11% to 8% in that period. By the next recession it had settled at about 6%; that recession seems to have ended when it was reduced down to 3%.
The data for the prime rate only go as far back as 1955, so I’ll use two of Moody’s corporate bond measures for the first half of the 20th century:
Again we observe a similar pattern with rate increases and recessions. Furthermore, the high rate, high volatility period between WW1 and WW2 sits in stark contrast with the low rate, low volatility period post-WW2. It’s interesting to note that rates – though high, relative to our benchmark of 2.5% – were not that high during the stock market boom of the 1920s. Certainly the spike in rates after the first crash is what seemed to bury the economy.
Update: commenter Magpie helpfully pointed out that the Moody’s data could be lagged, which is why it falls inside recessions instead of before them. Indeed, this is what we see when we compare it to the prime rate post-WW2: the spikes are late.
Overall, it seems high or rising rates accompany periods of substantial economic turmoil, or else periods where speculation is rampant and bubbles are building up. It is possible the speculation fuels further rises in the interest rate as the perpetrators become overconfident about their potential gains – a positive feedback loop.
Clearly the central bank does not control corporate borrowing rates directly. However, it does control government bond rates, and I would argue that this rate, as a benchmark, has a significant impact on other interest rates in the economy. Indeed this is borne out by the data:
(For a more comprehensive, but uglier, graph of the correlation between government and corporate bond yields, see here).
A central bank committed to low rates could help quell this, as we observe in the data post-WW2. Naturally, such a policy requires a degree of monetary autonomy that central banks have not had since the Bretton Woods system was in place, lest rates be disrupted by international flows.
I think the evidence presented here is a blow to the ‘too low for too long‘ meme that pervades discussion of the crisis. There seems to be a belief that low rates are somehow ‘artificial’ (relative to what, exactly?) and we need to ‘get back to reality.’ In fact, it seems that ‘checking’ a bubble may both fuel speculation and needlessly invalidate potential investments, hence creating the situation that the central bank purportedly wanted to prevent.
It is my opinion that major areas of neoclassical economics rest on misinterpretations of original texts. Though new ideas are regularly recognised as important and incorporated into the mainstream framework, this framework is fairly rigid: models must be micro founded, agents must be optimising, and – particularly in the case of undergraduate economics – the model can be represented as two intersecting curves. The result is that the concepts that certain thinkers were trying to elucidate get taken out of context, contorted, and misunderstood. There are many instances of this, but I will illustrate the problem with three major examples: John Maynard Keynes, John Von Neumann and William Phillips.
Keynes, in two lines
It is a common trope to suggest that John Hicks‘ IS/LM interpretation of Keynes’ General Theory was wrong. It is also true, and this was acknowledged by Hicks himself over 40 years after his original article.
IS/LM, or something like it, was being developed apart from Keynes during the 1920s and 30s by Dennis Robertson, Hicks and others, who sought to understand interest rates and investment in terms of neoclassical equilibrium. Hence, Hicks tried to annex Keynes into this framework (both, confusingly, called neoclassicals ‘classicals’). Keynes’ theory was reduced to two intersecting lines that looked a lot like demand-supply. The two schedules were derived from the equilibrium points of the demand and supply for money (LM), and the equilibrium points of the demand and supply for goods and services (IS). In order to reach ‘full employment’ equilibrium, the central bank could increase the money supply, or the government could expand fiscal policy. Unfortunately, such a glib interpretation of Keynes is flawed for a number of reasons:
First, Keynes did not believe that the central bank had control over the money supply:
…an investment decision (Prof. Ohlin’s investment ex-ante) may sometimes involve a temporary demand for money before it is carried out, quite distinct from the demand for active balances which will arise as a result of the investment activity whilst it is going on. This demand may arise in the following way.
Planned investment—i.e. investment ex-ante—may have to secure its ” financial provision ” before the investment takes place…There has, therefore, to be a technique to bridge this gap between the time when the decision to invest is taken and the time when the correlative investment and saving actually occur. This service may be provided either by the new issue market or by the banks;—which it is, makes no difference.
Since Hicks’ model relies on a ‘loanable funds’ theory of money, where the interest rate equates savings with investment and the central bank controls the money supply, it clearly doesn’t apply in Keynes’ world. An attempt to apply endogenous money to IS/LM will result in absurdities: an increase in loan-financed investment, part of the IS curve, will create an expansion in M, part of the LM curve. Likewise, M will adjust downwards as economic activity winds down. So the two curves cannot move independently, which violates a key assumption of this type of analysis.
Second, Keynes did not believe the interest rate had simple, linear effects on investment:
I see no reason to be in the slightest degree doubtful about the initiating causes of the slump….The leading characteristic was an extraordinary willingness to borrow money for the purposes of new real investment at very high rates of interest.
But over and above this it is an essential characteristic of the boom that investments which will in fact yield, say, 2 per cent. in conditions of full employment are made in the expectation of a yield of, say, 6 per cent., and are valued accordingly. When the disillusion comes, this expectation is replaced by a contrary “error of pessimism”, with the result that the investments, which would in fact yield 2 per cent. in conditions of full employment, are expected to yield less than nothing…
…A boom is a situation in which over-optimism triumphs over a rate of interest which, in a cooler light, would be seen to be excessive.
So, again, the simple, mechanistic adjustments in IS/LM are inaccurate. The magnitude of the interest rate will change not just the level, but also the type of investment taking place. Higher rates increase speculation and destabilise the economy, whereas low rates encourage real capital formation. This key link between bubbles, the financial sector and the real economy was lost in IS/LM, and also in neoclassical economics as a whole.
Third – and this is something I have spoken about before – Hicks glossed over Keynes’ use of the concept of irreducible uncertainty, which was key to his theory. The result was a contradiction, something Hicks noted in the aforementioned ‘explanation’ of IS/LM. The demand for money was, for Keynes, a direct result of uncertainty, and in a time period long enough to produce uncertainty (such as Keynes’ suggested 1 year), expectations would be constantly shifting. Since the demand for money, savings and investment all depended on expectations, the curves would be moving interdependently, undermining the analysis. On the other hand, in a time period short enough to hold expectations ‘constant’ and hence avoid this (Hicks suggested a week), there would be no uncertainty, no liquidity preference and therefore no LM curve.
Hicks’ attempt to shoehorn Keynes’ book into his pre-constructed framework led to oversimplifications and a contradiction, and obscured one of Keynes’ key insights: that permanently low long term interest rates are required to achieve full employment. The result is that Keynes has been reduced to ‘stimulus,’ whether fiscal or monetary, in downturns, and the reasons for the success of his policies post-WW2 are forgotten.
Phillips and his curve
Another key aspect – along with IS/LM – of the post-WW2 ‘Keynesian’ synthesis was the ‘Phillips Curve,’ an inverse relationship between inflation and unemployment observed by Phillips in 1958. Neoclassical economists reduced this to the suggestion that there was a simple trade-off between inflation and unemployment, and policymakers could choose where to select on the Phillips Curve, depending on circumstances.
Predictably, this is not really what Phillips had in mind. What he observed was not ‘inflation and unemployment,’ but inflation and money wages. Furthermore, it was not a static trade off, but a dynamic process that occurred over the course of the business cycle. During the slump, society would observe high unemployment and low inflation; in the boom, low unemployment would accompany high inflation. This is why, if you look at the diagrams in his original paper, Phillips has numbered his points and joined them all together – he is interested in the time path of the economy, not just a simple mechanistic relationship. The basic correlation between wages and unemployment was just a starting point.
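This time-path behaviour is easy to reproduce with a stylised cycle. In the sketch below (illustrative numbers only, not Phillips’ data), wage inflation depends on both the level of unemployment and its rate of change, so the same unemployment rate is associated with different wage inflation on the upswing and the downswing:

```python
import numpy as np

# Stylised business cycle: unemployment oscillates between 4% and 8%.
# Wage inflation depends on the level of unemployment AND its rate of
# change, as in Phillips's own formulation (coefficients are made up).
t = np.linspace(0, 4 * np.pi, 200)
u = 6 + 2 * np.sin(t)            # unemployment rate (%)
du = np.gradient(u, t)           # rate of change of unemployment
w = 8 - u - 1.5 * du             # wage inflation (%), illustrative only

# Compare wage inflation at the same unemployment level (~6%) on the
# upswing (unemployment falling) versus the downswing (rising):
near_six = np.isclose(u, 6, atol=0.1)
upswing = w[near_six & (du < 0)].mean()
downswing = w[near_six & (du > 0)].mean()
print(upswing > downswing)  # True: same unemployment, different wage inflation
```

Plotting w against u for such a series traces loops over the cycle, much like the numbered, joined-up points in Phillips’ original diagrams.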
Contrary to what those who misinterpreted him believed, Phillips was not unaware of the influence of expectations and the trajectory of the economy on the variables he was discussing; in fact, it was an important pillar of his analysis:
There is also a clear tendency for the rate of change of money wage rates at any given level of unemployment to be above the average for that level of unemployment when unemployment is decreasing during the upswing of a trade cycle and to be below the average for that level of unemployment when unemployment is increasing during the downswing of a trade cycle…
…the rate of change of money wage rates can be explained by the level of unemployment and the rate of change of unemployment.
Finally, whatever Phillips’ theoretical conclusions, it is clear he did not intend even a correctly interpreted version of his work to be the foundation of macroeconomics:
These conclusions are of course tentative. There is need for much more detailed research into the relations between unemployment, wage rates, prices and productivity.
Had neoclassical economists interpreted Phillips correctly, they would have seen that he thought dynamics and expectations were important (he was, after all, an engineer), and we wouldn’t have been driven back to the stone age with the supposed ‘revolution‘ of the 1970s.
An irrational approach to Von Neumann
In microeconomics, the approach to ‘uncertainty’ (a misnomer) emphasises the trade-off between potential risks and their respective payoffs. Typically, you will see a graph that looks something like the following (if you aren’t a mathematician, don’t be put off – it’s just arithmetic):
The question is whether a company will invest at home or abroad. There is an election coming up, and one candidate (B) is an evil socialist who will raise taxes, while the other one (A) is a capitalist hero who will lower them. Hence, the payoffs for the investment will differ drastically based on which candidate wins. Abroad, however, there is no election, and the payoff is certain in either case; the outcome of the domestic election is irrelevant.
The neoclassical ‘expected utility’ approach is to multiply the relative payoffs by the respective probability of them happening, to get the ‘expected’ or ‘average’ payoff of each action. So you get:
For investing abroad: £200k, regardless
For investing at home: (0.6 x £300k) + (0.4 x £100k) = £220k
Note: I am assuming the utility is simply equal to the payoff for simplicity. Changing the function can change the decision rule but the same problem – that what is rational for repeated decisions can seem irrational for one – will still apply.
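The arithmetic above can be written out directly:

```python
# The expected-value calculation from the example above: candidate A wins
# with probability 0.6 (home payoff £300k), candidate B with probability
# 0.4 (home payoff £100k); investing abroad pays £200k with certainty.
probs = {"A": 0.6, "B": 0.4}
home_payoffs = {"A": 300_000, "B": 100_000}
abroad_payoff = 200_000

expected_home = sum(probs[c] * home_payoffs[c] for c in probs)
print(expected_home, abroad_payoff)  # home's expected payoff beats abroad's
```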
So investing at home is preferred. Supposedly, this is the ‘rational’ way of calculating such payoffs. But a quick glance will reveal this approach to be questionable at best. Would a company make a one off investment with such uncertain returns? How would they secure funding? Surely they’d put off the investment until the election, or go with the abroad option, which is far more reliable?
So what caused neoclassical economists to rely on this incorrect definition of ‘rationality’? A misinterpretation, of course! One need look no further than Von Neumann’s original writings to see that he only thought his analysis would apply to repeated experiments:
Probability has often been visualized as a subjective concept more or less in the nature of estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as frequency in long runs.
Such an approach makes sense – if the payoffs have time to average out, then an agent will choose one which is, on average, the best. But in the short term it is not a rational strategy: agents will look for certainty; minimise losses; discount probabilities that are too low, no matter how high the potential payoff. This is indeed the behaviour people demonstrate in experiments, the results of which neoclassical economists regard as ‘paradoxes.’ A correct understanding of probability reveals that they are anything but.
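Von Neumann’s frequency interpretation can be illustrated by simulating the election gamble from earlier. Averaged over many repetitions, the risky home investment converges on its expected £220k; taken once, as a firm actually would, it simply yields either £300k or £100k – the ‘expected’ £220k is never realised:

```python
import random

random.seed(42)  # deterministic, for illustration

def home_outcome():
    """One draw of the risky home investment: £300k with p = 0.6, else £100k."""
    return 300_000 if random.random() < 0.6 else 100_000

# Repeated many times, the average converges on the expected £220k...
n = 100_000
average = sum(home_outcome() for _ in range(n)) / n
print(abs(average - 220_000) < 2_000)  # True: the long-run frequency view works

# ...but a single draw -- the firm's actual situation -- is never £220k:
one_shot = home_outcome()
print(one_shot)  # either 300000 or 100000, nothing in between
```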
Getting it right
There are surely many more examples of misinterpretations leading to problems: Paul Krugman’s hatchet job on Hyman Minsky, which completely missed out endogenous money and hence the point, was a great example. The development economist Evsey Domar reportedly regretted creating his model, which was not supposed to be an explanation for long run growth but was used for it nonetheless. Similarly, Arthur Lewis lamented the misguided criticisms thrown at his model based on misreadings of misreadings, and naive attempts to emphasise the neoclassical section of his paper, which he deemed unimportant.
This is not to say we should blindly follow whatever a particularly great thinker had to say. However, indifference toward the ‘true message’ of someone’s work is bound to cause problems. By plucking various thinkers’ concepts out of context and fitting them together inside your own framework, you are bound to miss the point, or worse, contradict yourself. Often a particular thinker’s framework must be seen as a whole if one is truly to understand their perspective and its implications. Perhaps, had neoclassical economists been more careful about this, they wouldn’t have dropped key insights from the past.