Many economists will admit that their models are not, and do not resemble, the real world. Nevertheless, when pushed on this obvious problem, they will assert that reality behaves as if their theories are true. I’m not sure where this puts their theories in terms of falsifiability, but there you have it. The problem I want to highlight here is that, in many ways, the conditions in which economic assumptions are fulfilled are not interesting at all and therefore unworthy of study.
To illustrate this, consider Milton Friedman’s famous exposition of the as if argument. He used the analogy of a snooker player who does not know the geometry of the shots they make, but behaves in close approximation to how they would if they did make the appropriate calculations. We could therefore model the snooker player’s game by using such equations, even though this wouldn’t strictly describe the mechanics of the game.
There is an obvious problem with Friedman’s snooker player analogy: the only reason a snooker game is interesting (in the loosest sense of the word, to be sure) is that players play imperfectly. Were snooker players to calculate everything perfectly, there would be no game; the person who went first would pot every ball and win. Hence, the imperfections are what makes the game interesting, and we must examine the actual processes the player uses to make decisions if we want a realistic model of their play. Something similar could be said for social sciences. The only time someone’s – or society’s – behaviour is really interesting is when it is degenerative, self-destructive, irrational. If everyone followed utility functions and maximised their happiness making perfectly fungible trade-offs between options on which they had all available information, there would be no economic problem to speak of. The ‘deviations’ are in many ways what makes the study of economics worthwhile.
I am not the first person to recognise the flaw in Friedman’s snooker player analogy. Paul Krugman makes a similar argument in his book Peddling Prosperity. He argues that tiny deviations from rationality – say, a family not bothering to recalculate its optimal spending plan after a small tax cut because it’s not worth the time and effort – can lead to massive deviations from an economic theory. That example alone completely invalidates Ricardian Equivalence. Similarly, within standard economic theory, downward wage stickiness opens up a role for monetary and fiscal policy where before there was none.
If such small ‘deviations’ from the ‘ideal’ create such significant effects, what is to be said of other, more significant ‘deviations’? Ones such as how the banking system works; how firms price; behavioural quirks; the fact that marginal products cannot be well defined; the fact that capital can move across borders; and so on. These completely undermine the theories upon which economists base their proclamations against the minimum wage, or for NGDP targeting, or for free trade. (Fun homework: match up the policy prescriptions mentioned with the relevant faulty assumptions.)
I’ll grant that a lot of contemporary economics involves investigating areas where an assumption – rationality, perfect information, homogeneous agents – is violated. But usually this is only done one at a time, preserving the other assumptions. However, if almost every assumption is always violated, and if each violation has surprisingly large consequences, then practically any theory which retains any of the faulty assumptions will be wildly off track. Consequently, I would suggest that rather than modelling one ‘friction’ at a time, the ‘ideal’ should be dropped completely. Theories could be built from basic empirical observations instead of false assumptions.
I’m actually not entirely happy with this argument, because it implies that the economy would behave ‘well’ if everyone behaved according to economists’ ideals. All too often this can mean economists end up disparaging real people for not conforming to their theories, as Gilles Saint-Paul did in his defence of economics post-crisis. The fact is that even if the world did behave according to the (impossible) neoclassical ‘ideal’, there would still be problems, such as business cycles, due to emergent properties of individually optimal behaviour. In any case, economists should be wary of the as if argument even without accepting my crazy heterodox position.
The fact is that reality doesn’t behave ‘as if’ it is economic theory. Reality behaves how reality behaves, and science is supposed to be geared toward modelling this as closely as possible. Resting on an ‘as if’ counterfactual is only justified when we don’t know how the system actually works. Once we do know how the system works – and in economics, we do, as I outlined above – economists who resist altering their long-outdated heuristics risk avoiding important questions about the economy.
It doesn’t make any difference how beautiful the hypothesis (conclusion) is, how smart the author is, or what the author’s name is, if it disagrees with data or observations, it is wrong.
- Richard Feynman
Our empirical criterion for a series of theories is that it should produce new facts. The idea of growth and the concept of empirical character are soldered into one.
- Imre Lakatos
A remarkable characteristic of economics is the sheer staying power of theories, even with a lack of empirical evidence to corroborate the propositions of these theories. In my experience, it is not uncommon for lecturers to remark that the lack of evidence for a theory has been a ‘problem’ for economists (though apparently not enough of a problem for them to throw out said theory). Often textbooks, lectures and discussions of theory make no reference to evidence whatsoever, and where they do it is trivial (for example, representative agent intertemporal macroeconomic theory predicts that governments will run periods of deficits followed by periods of surplus).
In the paragraphs that follow, I’ll examine a few cases of where I believe economics has gone off the mark in this respect. Specifically, I evaluate Marginal Productivity Theory, Walrasian Equilibrium, and the Solow Growth Model. I avoid theories such as Real Business Cycle models and the Efficient Markets Hypothesis, partly because they have been done to death, but more importantly to demonstrate that the bad theories in economics are not merely the result of a few ‘wild cards’ at Chicago. On the contrary, I believe an anti-empirical approach is institutionalised within mainstream economics and that economics must undergo a paradigmatic shift to move away from these theories.
Marginal Productivity Theory (MPT)
The common interpretation of MPT is that it predicts workers will be paid ‘what they’re worth.’ In fact, this is not correct; the theory predicts that the average productivity of workers will be positively related to wages, rather than each worker getting precisely their ‘just deserts.’ In any case, the result is that MPT predicts that compensation will increase as productivity increases. Hence, the familiar graphs – which you have likely seen before – showing productivity rising while typical workers’ compensation stagnates pose a problem for MPT.
I have seen several responses to the problems presented by graphs like this. The first is that non-wage benefits have risen, which isn’t shown in this data. The second is that the adjustments for relative prices have been incorrectly applied, and consumers have more purchasing power than it first seems. However, estimates exist which take all of these things into account, and they still come to the same conclusions: most people’s overall real compensation is not increasing, even though their productivity is.
Another response would be that marginal productivity did well until the 70s, so maybe it remains useful. This is special pleading. A theory must be equipped to explain all phenomena within its domain (in this case the labour market), rather than selectively applied where it suits the economist. If the laws of physics suddenly stopped working, can you imagine physicists making this defence? Saying ‘MPT will work except when it doesn’t, and if it doesn’t we will throw our hands up in the air and carry on’ is not science. The fact is that such a sudden and clear decoupling of wages and productivity poses a clear problem for advocates of MPT, one which requires either a thorough explanation or discarding the theory altogether.
Walrasian Equilibrium
Walrasian equilibrium is one of the more absurd pieces of theory in economics (which is saying something). There are two (rational) agents with endowments of two factors of production, which they hire out to two profit-maximising producers. The producers use these factors of production to create two consumer goods, then the consumers purchase them. Everyone behaves as if they are perfectly competitive (they can’t influence prices) and everything happens simultaneously. There is no direct trade; instead individuals trade through the market (which, god-like, comes from outside the model).
The behaviour of consumers in this model is tautological. They consume based on a predetermined utility function that cannot be observed. Hence, they consume what they were always going to consume based on the chosen, non-empirical parameters of the model. This doesn’t tell us anything.
The behaviour of producers in this model is observable in the real world and hence not tautological. It is also not what happens in the real world. Some firms maximise profits, but most don’t; and the claim that those firms which do maximise profits do so by equating marginal cost (MC) with marginal revenue (MR) is clearly false.
The only prediction this model as a whole makes is that the initial distribution of endowments will affect what is produced, how it is distributed, how much is produced and the price of what is produced. In other words: the initial resource distribution of a market economy affects its subsequent workings. This is trivial, and easily shown by theories that are based on more realistic assumptions (such as Sraffa).
The Solow Growth Model
The Solow model, to me, seems to be a textbook case of ‘bad science.’ This is clear from the story of its development (a story anyone who has taken development or macroeconomics will know).
The Solow model predicts that, due to diminishing returns to capital, developing countries will catch up with developed countries in terms of GDP per capita. At a low level of capital stock, the potential returns to investment are high (e.g. irrigating/ploughing a previously uncultivated field). As the stock of capital increases, the returns to investment decrease and the growth rate of a country levels off. Hence, all countries will converge to a similar long-term growth rate.
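To see the mechanism, here is a minimal sketch of the textbook per-capita Solow dynamics (the parameter values are purely illustrative, not estimates of anything):

```python
# Minimal sketch of Solow-style convergence (illustrative parameters only).
# Capital per worker evolves as k' = s*k^alpha + (1 - delta)*k; diminishing
# returns (alpha < 1) pull rich and poor starting points to the same steady state.

def simulate(k0, s=0.25, alpha=0.3, delta=0.05, periods=200):
    k = k0
    path = [k]
    for _ in range(periods):
        k = s * k ** alpha + (1 - delta) * k
        path.append(k)
    return path

poor = simulate(k0=1.0)    # low initial capital: high returns, fast growth
rich = simulate(k0=20.0)   # high initial capital: low returns, slow growth

# Both paths approach the same steady state, roughly (s/delta)^(1/(1-alpha)).
print(poor[-1], rich[-1])
```

The poorer economy grows faster along the way, which is the convergence prediction at issue.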
That this prediction is false is no longer debated. In the 1980s, William Baumol provided evidence that seemed to support the hypothesis. This was quickly disputed by Brad DeLong, who noted that Baumol’s sample suffered from selection bias – he had only included countries which were already developed, effectively assuming his conclusion. DeLong included more countries and found no evidence of convergence.
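DeLong’s point is easy to reproduce with invented numbers: even if growth is completely unrelated to initial income, keeping only the countries that are rich today mechanically produces the appearance of convergence among the survivors. A hypothetical sketch:

```python
# Illustrative sketch of Baumol-style selection bias (all numbers invented).
# True process: growth is independent of initial income, so there is no convergence.
import random

random.seed(0)
countries = []
for _ in range(1000):
    initial = random.uniform(1, 10)      # initial income per capita
    growth = random.uniform(0.0, 0.06)   # growth drawn independently of initial income
    final = initial * (1 + growth) ** 100
    countries.append((initial, growth, final))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

full_sample = corr([c[0] for c in countries], [c[1] for c in countries])
rich_today = [c for c in countries if c[2] > 50]   # Baumol-style sample: only countries rich *now*
biased_sample = corr([c[0] for c in rich_today], [c[1] for c in rich_today])

print(full_sample)    # roughly zero: no true convergence
print(biased_sample)  # clearly negative: poor starters that 'made it' must have grown fast
```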
However, economists weren’t ready to give up. The prediction of the Solow model was reframed as conditional convergence: that is, provided countries have the right institutions, social cohesion, etc. they will converge in terms of growth. This, to me, seems trivial. The entire point of development economics is that the conditions in poor countries are not conducive for them to develop and so catch up with the developed countries. The Solow model doesn’t ask how a country might achieve this, but only says that it is a necessary condition for development, something development economists have always known. Hence, the Solow model is irrelevant for the immediate problem of development economics, which is how exactly we can help poverty-stricken countries get off the ground.
Is Economics That Bad?
In the interests of balance, it is worth noting some predictions made in economics that have been either empirically verified or dropped subsequent to falsification. Quantity of money targeting was tried, and failed, in a few countries, which led to Milton Friedman himself repudiating it (though economists still erroneously use the same framework which led to it). The lifetime consumption hypothesis (and non-utility based consumer theory in general) displays good empirical corroboration and has all the hallmarks of a ‘good’ scientific approach. The Phillips Curve as used by economists was modified in light of evidence in the 1970s. Both the multiplier and the Giffen Good are good examples of non-trivial, clear, falsifiable predictions, though I will not comment on evidence for them because that would take a post for each one.
Nevertheless, the record as a whole is not good. Theories from over a century ago look, and are taught, the same way as they were when they were initially adopted. New ideas that are not even disputed by economists, such as behavioural economics, are slow to be adopted, and when they are adopted are presented as a ‘special case’ and in a way amenable to the core framework, which is, of course, still taught alongside them. As far as I’m aware, there is no clear cut case of a neoclassical theory being completely thrown out and never mentioned again. This alone should be an indicator that the scientific method is not at work in economics.
Commenter Dan thinks economics has not yet found its watershed moment:
Think about Biology before DNA was discovered or Geology before plate tectonics was understood, both disciplines had learned a lot but they still lacked a comprehensive model that made everything fit into place.
I am sympathetic with this viewpoint. Heterodox criticisms come at economists thick and fast – personally, I think most of these criticisms are valid and very little of neoclassical economics should be left. Yet neoclassical economics persists.
However, in my opinion this isn’t because economics lacks a unifying theory; it’s the exact opposite. Economists already think they have found a unifying concept: namely, the optimising agent. Consumers maximise utility; producers maximise profits; politicians maximise their own interests/their ability to get reelected. Sure, there are a few constraints on this behaviour but, to the economist, it remains the best starting point. It all blends together into a coherent theory that can tell a plausible story about the economy. I find economists are resistant to any theory that doesn’t follow this methodology.
The typical definition of economics is the study of how resources are allocated. Hence, a unifying theory should empirically and logically do a satisfactory job of explaining prices, production and distribution. Such a theory would be able to underlie virtually any economic model in some form, whether as the wider context of a microeconomic phenomenon or as the basis of a macroeconomic one. No easy task, then, but luckily many approaches of this nature already exist.
Alternative Theories of Behaviour
If we want to stick with agent-based explanations of the economy, there are any number of alternatives to the ‘optimising’ agent: Herbert Simon’s ‘satisficing’ agents, behaviour driven by a hierarchy of needs in the style of Maslow, and class-based accounts in the Marxian tradition, among others.
I consider all of these approaches useful, but none of them sufficient for the task at hand.
In the case of the first two, replacing ‘optimising’ agents with ‘satisficing’ agents isn’t exactly revolutionary. Maslow’s hierarchy can, in fact, work as a utility function. In both cases, we still run into similar problems of aggregation and of reductionism. And we end up trying to shoehorn every decision into a particular approach. The simple truth is that agents have a lot of different motivations for their actions, and these aren’t always clear, even to them.
My main issue with these, and any agent-based approach, is that they aren’t necessarily relevant for the wider question of resource allocation in society. Individualistic neoclassical economics has to reduce things down to a few agents with only a few goods in order to reach any conclusions whatsoever; I can’t help but feel similar problems would emerge here. Class struggle may determine distribution, but it doesn’t tell us much about what is produced and at what price it is sold. In order to understand how production takes place and prices are determined, we will have to look elsewhere.
A Theory of Value
The value approach has a lot of pluses. A theory of value underpins the explanation of relative prices, and also has normative implications that recognize the inevitable value judgments in economics. The only problem I have here is that I’ve yet to find a convincing theory of value – the two most widely known are the neoclassical/Austrian subjective theory of value and the Labour Theory of Value (LTV).
I object to the idea that prices merely reflect subjective valuations for the basic reason of circularity: prices must be calculated before subjective valuation takes place, so they cannot purely reflect subjective values.
I have more sympathy with the LTV (mostly because its proponents seem to have coherent responses to every criticism thrown at it), but I remain unconvinced. The defences of the labour theory of value tend to rest on appeals to ‘the long run’ and ‘averages’ of socially necessary labour time. These may be useful but, like the neoclassical ‘long run’ approach, they seem to leave open the immediate question of what’s going on in the economy and what we can do about it.
In my opinion, these approaches both contain some validity, and are not mutually exclusive. I tend to agree with Richard Wolff, who asserts that suggesting one has refuted the other is like saying knives & forks have refuted chopsticks. Both are useful; neither are all-encompassing theories. I also believe both are compatible, to some degree, with my favoured approach:
The ‘Reproduction and Surplus’ Theory
This approach is the one emphasised by Sraffians and Classical Economists. It starts from the basic observation that society must reproduce itself to survive, and that generally society manages this, plus a surplus. The reproductive approach emphasises what I believe to be an important aspect of capitalism, and perhaps all systems: the collective nature of production. Industries are interdependent; people work in teams; various institutions, often state-backed or provided, underlie all of this. Hence, no special moral status is accorded to prices or the allocation of surplus, except that prices must be appropriate for the continued existence of industries and society as a whole.
On first inspection the ‘insight’ that society must reproduce itself might be considered trivial, but following through its implications can yield interesting and useful conclusions. The framework can be used to determine prices technically, independently of either preferences or values. It emphasises the interdependent nature of the economy: if one industry or input fails, it has severe knock-on effects. For this reason, it would do a great job of explaining both the oil shocks and resultant stagflation of the 1970s and the 2008 financial crisis, something modern macroeconomics cannot manage.
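To give a flavour of what ‘determining prices technically’ can look like, here is a minimal toy sketch in the Sraffian spirit (the input-output coefficients, wage and profit rate are all invented, and this is not a statement of any particular author’s model): given the techniques in use and the distribution of the surplus, relative prices follow from the requirement that each industry can replace its inputs.

```python
# Toy 'prices of production' calculation in a Sraffian spirit (all numbers invented).
# Each industry must be able to buy back its inputs, pay wages, and earn the
# uniform profit rate r, so prices satisfy p = (1 + r) * A^T p + w * l.
import numpy as np

A = np.array([[0.2, 0.1],   # A[i, j]: amount of good i used up to produce one unit of good j
              [0.3, 0.4]])
l = np.array([0.5, 0.8])    # direct labour required per unit of output in each industry
w = 1.0                     # wage rate (labour as numeraire)
r = 0.10                    # uniform rate of profit, taken as given by distribution

# Rearranged as a linear system: (I - (1 + r) * A^T) p = w * l
p = np.linalg.solve(np.eye(2) - (1 + r) * A.T, w * l)
print(p)  # relative prices pinned down by technique and distribution, with no utility functions
```

Changing the profit rate or the technique changes the prices, which is precisely the sense in which distribution and production conditions, rather than subjective preferences, do the work here.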
On top of this, the model is versatile: it can interact with its institutional environment, which determines key variables exogenously (e.g. the monetary system determines interest rates, political power determines distribution). The classical approach is, for example, compatible with class theories of income distribution, post-Keynesian theories of endogenous money and mark-up pricing, and even neoclassical utility maximising individuals! Probably the most promising and complete framework out of them all – I look forward to further developments of this approach.
It is possible that the task of finding a watershed moment is simply not achievable in the fuzzy world of social sciences. Psychology and sociology are both characterised by competing approaches; psychology in particular has improved since the Freudians (psychology’s equivalent of the neoclassicals) were dethroned. If neoclassical economics has taught us nothing else, it’s the importance of not being trapped by particular theories for the sake of their elegance, which is why there is a lot to commend in the institutional school of economics.
Nevertheless, I think there is scope for exploring unifying principles. Progress in neurology may provide such a foundation for psychology; similarly, ideas such as societal reproduction could equally be applied to sociological concepts such as the role of beliefs, class, sports or what have you. As far as economics goes, such a substantial step forward could be what’s required to displace neoclassical economics, whose staying power, in my opinion, cannot be attributed to either its empirical relevance or its internal consistency. Perhaps neoclassical economics persists simply because its building blocks are so well defined that other approaches seem too incomplete to offer their opponents sure footing.
In mainstream economic models, consumers’ behaviour is generally assumed to follow a ‘utility function.’ Consumers derive utility (creatively measured in ‘utils’) from whatever they consume, and they will attempt to maximise this subject to their budget constraint – and, perhaps, at a later stage, some extra terms to incorporate behavioural quirks, social pressure or what have you. Unfortunately, even with modifications, the concept of utility is an explanation of behaviour that is questionable at best.
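For concreteness, the textbook setup looks something like this minimal sketch, with an invented Cobb-Douglas utility function and made-up prices and income (nothing here is specific to any particular model):

```python
# Minimal sketch of the textbook consumer problem (all numbers invented):
# maximise u(x, y) = x^a * y^(1-a) subject to the budget constraint px*x + py*y <= m.
# With Cobb-Douglas utility the solution has a simple closed form: spend the
# share a of income on good x and the share (1 - a) on good y.

def optimal_bundle(a, px, py, m):
    x = a * m / px           # demand for good x
    y = (1 - a) * m / py     # demand for good y
    return x, y

def utility(x, y, a):
    return x ** a * y ** (1 - a)

a, px, py, m = 0.3, 2.0, 5.0, 100.0
x_star, y_star = optimal_bundle(a, px, py, m)

# Brute-force check: no affordable bundle on a coarse grid does better.
best = max(
    utility(x, (m - px * x) / py, a)
    for x in [i * 0.01 for i in range(1, int(m / px) * 100)]
)
print(x_star, y_star, utility(x_star, y_star, a), best)
```

The complaints below are about what this tidy construction assumes and leaves out.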
The first conundrum – as posed in the title of this post - is exactly what form utility takes. Is it supposed to be some sort of cumulative attribute that people collect as they go through life, like a stat on a video game? Or is it a temporary sensation experienced after consumption, so that economic agents are effectively utility junkies, chasing around temporary highs? There may be a case for regarding anyone who truly maximised utility as clinically insane and in need of help. In any case, thoughtlessly following predetermined utility functions leaves neoclassical agents with no real room for ‘choice’ – we know what their behaviour will be in advance, and it is unchangeable.
There is also the problem of fungibility: is it fair to suggest that joining a gym gives someone the same kind of satisfaction as eating a donut? Or that eating a donut gives the same feeling as owning a car? These nuances are lost in the aggregated world of ‘utils,’ a unit which has no relation to anything else and hence is hard to verify – at its worst, utility is simply circular: only measurable by the same behaviour it supposedly explains.
Economists have a standard response to contentions that utility is unrealistic. They will assert that, even though utility doesn’t really ‘exist’ – a position few would endorse, surely – it still follows that if preferences follow economists’ axioms, then an effective utility function can be derived. That is: utility is not meant to be taken literally, but economists’ assumptions are sufficient to ensure a relationship between preferences that is functionally the same thing. So it would appear the only way out for opponents of utility is to critique the axioms. I don’t believe this is true, but the axioms are worth critiquing before I explain why.
The two most important axioms required to derive a basic utility function are completeness and transitivity. There are other axioms that are also commonly used – independence, non-satiation, convexity – which are all vulnerable to criticisms, but since they pertain to the exact form of a utility function, rather than the concept as a whole, I will focus only on completeness and transitivity. Without these, there is no utility function, whichever way you paint it.
The first axiom – completeness – is the idea that all relevant decisions can be definitively compared to one another: that is, there is no room for ‘I don’t know.’ There are clear problems with this. Often, it is hard to choose between two options, particularly if one is a bundle of many goods (e.g. two shopping baskets). In fact, as a decision rule this is generally computationally impossible. So people may act based on chance or impulse; they may seek advice or ask someone else to make the decision for them. What’s more, often people find it difficult to evaluate choices even after they’ve made them. Sometimes there is no ‘correct’ choice!
The other axiom, transitivity, implies that people will be consistent in their ordering of preferences. If I prefer A to B, and B to C, I will prefer A to C. It is an important axiom because, even if preferences are complete, a violation of transitivity means that utility functions can basically have any shape and therefore be pretty useless for clear calculations. While I expect numerous behavioural quirks suggest transitivity may be violated under certain circumstances, overall it is a fair axiom – for the individual. However, it has been known for some time that, once we have more than two agents, it becomes impossible to establish a clear, consistent ordering of preferences for the group. This isn’t moving the goalposts: it is highly relevant when we are using representative agents for the entire economy. (This problem also applies to the aforementioned independence axiom).
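The aggregation problem can be seen with a stock example, the Condorcet paradox, using invented preferences: three agents, each with a perfectly transitive ranking, produce a majority ‘group preference’ that cycles.

```python
# Condorcet-style illustration: three transitive individual rankings, but
# pairwise majority voting yields an intransitive 'group preference'.

rankings = [
    ["A", "B", "C"],   # agent 1: A > B > C
    ["B", "C", "A"],   # agent 2: B > C > A
    ["C", "A", "B"],   # agent 3: C > A > B
]

def majority_prefers(x, y):
    """True if a majority of agents rank x above y."""
    votes = sum(1 for r in rankings if r.index(x) < r.index(y))
    return votes > len(rankings) / 2

print(majority_prefers("A", "B"))  # True  (agents 1 and 3)
print(majority_prefers("B", "C"))  # True  (agents 1 and 2)
print(majority_prefers("C", "A"))  # True  (agents 2 and 3): the group ordering cycles
```

No single transitive ‘group utility function’ can represent these three agents, which is precisely the problem with treating a whole economy as one representative agent.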
My most important point, though, is that even if preferences do follow all the axioms, utility is still highly flawed. This is because, like so many neoclassical models, all utility functions give us is a static snapshot of the economy (or individual) at a particular point in time, and there is no room for change. The simple fact that preferences are highly volatile and will be different in the morning and the evening, or in summer and winter, is enough to render utility useless for practical questions about the economy, which must surely incorporate time. Similarly, preference reversal has shown that the way options are presented has a large impact on the choice made by somebody, suggesting again that underlying ‘preferences’ are highly subject to change, and not really useful for the practical purpose of predicting behaviour. One can only wonder how utility might deal with a theory such as multiple selves, which would surely create the aforementioned aggregation problems for preferences, but for one person!
Now, I can almost hear the cries of “ah, but what is your alternative?” Actually, that doesn’t matter for the immediate critique. If I have a map of the London Underground and I’m in New York, I’m not going to use it (even less so if I have a map of a fantasy land that exists only in the minds of economists). To push the analogy a little further, it is worth asking what I would do in this situation. I can think of two possibilities: either ask for help, or follow some simple rules of thumb based on what knowledge I have. This is the strategy economists should adopt.
In the case of ‘asking for help,’ what I mean is that economists should turn to other social sciences; namely, psychology, which has a far more empirically driven methodology than economics and has numerous explanations of behaviour. Economists truly interested in understanding human behaviour – rather than preserving their favoured assumptions – should collaborate with psychologists to create sound behavioural foundations.
Until then, economists should be content with simple empirically observed rules of thumb and intuitive aggregate relationships (they already do this with the marginal propensity to consume). Objections of ‘but Lucas Critique’ are special pleading, since preferences are also liable to change with political decisions. In fact, I’d shout ‘Lucas Critique’ right back at economists, and suggest that they spend less time on the impossible task of making their models ‘immune’ to the Lucas Critique, and more time evaluating the ever-changing relationship between policy and observation. It is better for economists to be vaguely right than precisely wrong.
Out of all the concepts in neoclassical economics, none is more imaginary, absurd and empirically falsified than utility. Economists supposedly follow a methodology of strict positivism, and based on the experimental evidence against utility, there is surely no reason to keep it. Yet for some reason, it doesn’t seem to attract the same level of criticism as other areas of neoclassical economics. Personally, I am puzzled as to why.
Neoclassical (and Austrian) economics as a whole tends to emphasise market forces as the dominant determinant of employment, distribution and output in the economy. In the neoclassical theory of the firm, firms are something of a ‘black box,’ inside which uniform inputs are uniformly processed into uniform outputs. The firm can be thought of as an agent – or collection of agents – maximising some goal subject to resource constraints. The most commonly used version of this is the perfectly competitive firm, which treats prices as a given, has the single aim of maximising profits and makes ‘normal’ profits. However, even more elaborate theories of the firm – such as ones where managers have objectives that conflict with those of shareholders – retain the standard assumption that the exhibited behaviour of the firm can be deduced from the behaviour of optimising agents inside it. Firms are rarely assumed to have internal differences in production techniques. Instead, they simply serve as channels, coordinated by supply and demand, through which resources are allocated.
But there is good reason to believe producers, rather than the impartial ‘laws’ of demand and supply, are the dominant force in an economy. It is a stretch to suggest that products are merely the expression of consumer preferences; after all, consumers rarely have direct input into the production process. Products are created by a firm, and the consumer’s role is passive in that they can only choose whether or not to reject what is offered. There also exists a power asymmetry between producers and consumers (and workers): since producers are the ones who own the products, they can ‘hold out’ for longer than those without. A capitalist alone can subsist; one who is merely a worker and/or consumer relies on the capitalist(s) for employment, goods and services.
In neoclassical theory, any deviations from perfect competition, and even the mere existence of firms, are thought to be either a source of inefficiencies, a result of them, or both. Hence, an ‘ideal,’ Pareto Efficient economy is thought to be one of perfectly competitive, tiny firms, which have no individual impact on the market in which they are situated. Any questions asked about firms proceed by asking which ‘frictions’ we can blame for the observed real-world deviation from this ideal.
Several questions about how firms work and their role in the economy are never asked in neoclassical economics. First, what really goes on inside the firm: how are organisation and management used to impact the ability of the firm to convert inputs into outputs? Second, what is the nature of ‘market power?’ Could it be that some industries are so characterised by ‘market power’ that it no longer makes sense to talk about ‘the market’ as a meaningful concept? Third, to what extent do ‘imperfections’ such as these – organisation, market power, scale – actually create beneficial effects that we would not observe in the world of perfect competition?
I believe that, under contemporary capitalism, firms have such an impact that it makes more sense to use ‘the firm’ as an epistemological starting point than ‘the market.’ I also believe that, at least from a material point of view, large firms are probably a superior system to one resembling the perfectly competitive ‘ideal.’
The competitive ideal seems illogical when applied to the real world. Market forces can be inherently uncertain and costly to adjust to. Any firm which is wholly subservient to market forces, and hence has no control over its future, is simply a terrible firm, and a poor prospect for any potential investor, shareholder or worker. Even consumers prefer an established brand they can trust, at least in the absence of regulation. Hence, no firm would go to an investor, shareholder or bank and say “I have a product, let’s see whether the market likes it or not;” what is expected is a clear strategy.
It would, of course, be wrong to suggest that firms are not under threat from the development of new technologies and from the demands of consumers. Even well established companies go bankrupt from time to time. Nevertheless, many firms persist for a long time, either because their position is that strong or because they insure themselves against market forces. Research and development can ensure a firm always has something new to offer and can adapt, should demand for a product fall or a new release flop. Horizontal integration – selling different products, perhaps in entirely different markets – can broaden a firm’s consumer base (Google’s massive diversification over the last decade is an example of this). Brand proliferation – the same firm creating multiple brands – can serve a similar purpose (think of the different cereals produced by Kellogg).
The challenge for a firm is to establish a degree of control over its respective market; the degree to which it manages this will be a determinant of its success; its ‘competitive advantage.’ Hence, many of a firm’s actions have the purpose of cementing that firm’s position in the marketplace, rather than simply responding passively to outside market forces. Numerous behaviours exhibited by firms support this idea:
- Some firms seek market share as opposed to profits: they want to make sure they have sufficient control over their industry.
- Prices don’t change constantly depending on the state of the economy; firms keep them the same for long periods of time, both to avoid the cost of constant recalculation and to be able to make projections.
- Branding, advertising, marketing and various offers are used to gain and retain customers so that the firm has at least a minimum flow of demand it can rely on. The mantra that it is far more costly to acquire new customers than retain old ones is well known; hence, firms try to make sure they are as unaffected by the whims of consumers as possible.
- Firms control supply through deals, perhaps exclusive, with suppliers; better yet, they can establish control of the supply chain themselves (vertical integration).
Firms also need to establish control over the labour market, as they often rely on the commitment of workers to a specific position in their organisation. The fact that knowledge and skills are often organisation-specific makes the cost of leaving – for both employer and employee – higher, and this effect becomes more amplified as one moves up the hierarchy of the organisation. The result is that the cost of even one worker leaving is often estimated to be well above their salary. It is no use starting a project if you know that, halfway through, your manager – with his unique knowledge of what is going on – will just leave and work elsewhere. So firms retain workers with promises of career progression and rewards, as well as establishing a psychological commitment to their organisation. The most extreme example of this is the Japanese ‘employment for life’ approach, which has proved to be remarkably competitive; more so than many of its western counterparts.
The existence of long-lived companies with significant influence over forces supposedly determined by ‘the market’ creates another problem for the state-market dichotomy. Many companies are economically bigger than countries or state and local governments, and hence their decisions have considerable political implications. A large company setting up shop in a small town, or even a small country, can significantly alter the landscape there.
Unfortunately, many supporters of “free markets” are driven to defending the actions of large, centralised entities as apolitical. This perspective is based on the false premise that such entities are simple conduits for scarcity and consumer preferences, rather than actively determining and influencing these things. It is clear this influence has largely driven us away from what economists typically mean when they speak of a ‘market.’ The implication is that, whatever you want to call the current system, it is vital that the entities which characterise it, with their significant impact on production, distribution and exchange, be put into the political spotlight.
I recently stumbled upon a reddit post called ‘A collection of links every critic of economics should read.‘ One of the weaker links is a defence of economists post-crisis by Gilles Saint-Paul. It doesn’t argue that economists actually did a good job foreseeing the crisis; nor does it argue they have made substantial changes since the crisis. It argues that the crisis is irrelevant. It is, frankly, an exercise in confirmation bias and special pleading, and must be fisked in the name of all that is good and holy.
Saint-Paul starts by exploring the purpose of economists:
If they are academics, they are supposed to move the frontier of research by providing new theories, methodologies, and empirical findings.
Yes, all in the name of explaining what is happening in the real world! If economists claim their discipline is anything more than collective mathematical navel gazing, then their models must have real world corroboration. If this is not yet the case, then progress should be in that direction. Saint-Paul is apparently happy with a situation where economists devise new theories and all nod and stroke their beards, in complete isolation from the real world.
If [economists] work for a public administration, they will quite often evaluate policies.
Hopefully ones that prevent or cushion financial crises, surely? Wait – apparently this is not a major consideration:
One might think that since economists did not forecast the crisis, they are useless. It would be equally ridiculous to say that doctors were useless since they did not forecast AIDS or mad cow disease.
AIDS and mad cow disease were random mutations of existing diseases and so could not have been foreseen. Financial crises are repeated and have occurred throughout history. They demonstrate clear, repeated patterns: debt build-ups; asset inflation; slow recoveries. Yet despite this, doctors have made more progress on AIDS and mad cow disease in a few decades than economists have on financial crises in a few centuries. It was worrying enough that DSGE models were unable to model the Great Depression, but given that ‘it’ has now happened again, under very similar circumstances, you’d think that alarm bells might be going off inside the discipline.
Saint-Paul now starts to defend economics at its most absurd:
One example of a consistent theory is the Black-Scholes option pricing model. Upon its introduction, the theory was adopted by market participants to price options, and thus became a correct model of pricing precisely because people knew it.
Similarly, any macroeconomic theory that, in the midst of the housing bubble, would have predicted a financial crisis two years ahead with certainty would have triggered, by virtue of speculation, an immediate stock market crash and a spiral of de-leveraging and de-intermediation which would have depressed investment and consumption. In other words, the crisis would have happened immediately, not in two years, thus invalidating the theory.
‘A crisis will happen if these steps are not taken to prevent it’ is not the same as ‘Lehman Brothers will collapse for certain on September 15th, 2008.’ Saint-Paul confuses different levels and types of prediction. Nobody is suggesting economists should give us a precise date. What people are suggesting is that, by now, economists should know the key causal factors of financial crises and give advice on how to prevent them.
Saint-Paul charges critics with:
…[ignoring] that economics is a science that interacts with the object it is studying.
How he thinks critics ignore this is beyond me, seeing as the whole criticism is that policies designed by economists had a hand in causing the crash. Predictably, he goes on to state a ‘hard’ version of the Lucas Critique, the go-to argument for economists defending their microfoundations:
Economic knowledge is diffused throughout society and eventually affects the behaviour of economic agents. This in turn alters the working of the economy. Therefore, a model can only be correct if it is consistent with its own feedback effect on how the economy works. An economic theory that does not pass this test may work for a while, but it will turn out to be incorrect as soon as it is widely believed and implemented in the actual plans of firms and consumers. Paradoxically, the only chance for such a theory to be correct is for most people to ignore it.
It is reasonable to suggest policy will have some impact on the behaviour of economic agents. It is absurd to suggest this will always have the effect of rendering the policy (model) useless (irrelevant). It is even more absurd to suggest that we can ever design a model that sidesteps this problem completely. What we have is a continually changing relationship between policy and economic behaviour, and this must be taken into account when designing policy. This doesn’t imply we should fall back on economists’ preferred methods despite their clear empirical failure.
Saint-Paul moves on – now, apparently, the problem is not that economists’ theories don’t behave like reality, but that reality doesn’t behave like economists’ theories:
In other words, if market participants had been more literate in, or more trustful of economics, the asset bubbles and the crisis might have been avoided.
If only everyone believed, then everything would be fine! Obviously, the simple counterpart to this is that many investors and banks did believe in the EMH or some variant of it, yet, as always, reality had the final say, as happened with the aforementioned Black-Scholes equation.
Saint-Paul now attempts to play the ‘get out of reality completely’ card:
While it is valuable to understand how the economy actually works, it is also valuable to understand how it would behave in an equilibrium situation where the agents’ knowledge of the right model of the economy is consistent with that model, which is what we call a “rational expectations equilibrium”. Just because such equilibria do not describe past data well does not mean they are useless abstractions. Their descriptive failure tells us something about the economy being in an unstable regime, and their predictions tell us something about what a stable regime looks like.
Basically, Saint-Paul is arguing that economic models should be unfalsifiable. Since we can hazard a guess that he isn’t too bothered about unrealistic assumptions, given the models he is defending, and since he clearly doesn’t care about predictions either, he has successfully jumped the shark. Economists want to be left alone to build their models which posit conditions which are never fulfilled in the real world, and that’s final!
As if this wasn’t enough, he proceeds to castigate the idea that economists should even attempt to expand their horizons:
The problem with the “broad picture” approach, regardless of the intellectual quality of those contributions, is that it mostly rests on unproven claims and mechanisms. And in many cases, one is merely speculating that this or that could happen, without even offering a detailed causal chain of events that would rigorously convince the reader that this is an actual possibility.
Note what Saint-Paul means by “detailed causal chain of events.” He means microfoundations. But he is not concerned about whether these microfoundations actually resemble real-world mechanics, only whether they are a “possibility.” To him, the mere validity of an economic argument means that it has been ‘proven,’ regardless of its soundness. In other words: economists shouldn’t be approximately right, but precisely wrong.
Saint-Paul concludes by rejecting the idea that financial crises can be modeled and foreseen:
This presumption may be proven wrong, but to my knowledge proponents of alternative approaches have not yet succeeded in offering us an operational framework with a stronger predictive power.
I hope – and actually believe – that most economists don’t believe that the crisis is irrelevant for their discipline. I’m sure few would endorse the caricature of a view presented here by Saint-Paul. Nevertheless, it is common for economists to suggest that the crisis was unforeseeable: a rare event that cannot be modeled because the economy is too ‘complex.’ This must be combated. Financial crises are actually (unfortunately) relatively frequent occurrences with clear, discernible patterns drawing them together. To paraphrase Hyman Minsky: a macroeconomic model must necessarily be able to find itself in financial crisis, otherwise it is not a model of the real world.
I have previously discussed Milton Friedman’s infamous 1953 essay, ‘The Methodology of Positive Economics.’ The basic argument of Friedman’s essay is that the unrealism of a theory’s assumptions should not matter; what matters are the predictions made by the theory. A truly realistic economic theory would have to incorporate so many aspects of humanity that it would be impractical or computationally impossible to do so. Hence, we must make simplifications, and cross-check the models against the evidence to see if we are close enough to the truth. The internal details of the models, as long as they are consistent, are of little importance.
The essay, or some variant of it, is a fallback for economists when questioned about the assumptions of their models. Even though most economists would not endorse a strong interpretation of Friedman’s essay, I often come across the defence ’it’s just an abstraction, all models are wrong’ if I question, say, perfect competition, utility, or equilibrium. I summarise the arguments against Friedman’s position below.
The first problem with Friedman’s stance is that it requires a rigorous, empirically driven methodology that is willing to abandon theories as soon as they are shown to be inaccurate enough. Is this really possible in economics? I recall that, during an engineering class, my lecturer introduced us to the ‘perfect gas.’ He said it was unrealistic but showed us that it gave results accurate to 3 or 4 decimal places. Is anyone aware of econometrics papers which offer this degree of certainty and accuracy? In my opinion, the fundamental lack of accuracy inherent in social science means that economists should be more concerned about what is actually going on inside their theories, since they are less able to spot mistakes through prediction alone. Even if we are willing to tolerate a higher margin of error in economics, results are always contested, and you can find papers arguing either side of almost any issue.
The second problem with a ‘pure prediction’ approach to modelling is that, at any time, different theories or systems might exhibit the same behaviour, despite different underlying mechanics. That is: two different models might make the same predictions, and Friedman’s methodology has no way of dealing with this.
There are two obvious examples of this in economics. The first is the DSGE models used by central banks and economists during the ‘Great Moderation,’ which predicted the stable behaviour exhibited by the economy. However, Steve Keen’s Minsky Model also exhibits relative stability for a period, before being followed by a crash. Before the crash took place, there would have been no way of knowing which model was correct, except by looking at internal mechanics.
Another example is the Efficient Market Hypothesis. This predicts that it is hard to ‘beat the market’ – a prediction that, due to its obvious truth, partially explains the theory’s staying power. However, other theories also predict that the market will be hard to beat, either for different reasons or a combination of reasons, including some similar to those in the EMH. Again, we must do something that is anathema to Friedman: look at what is going on under the bonnet to understand which theory is correct.
The third problem is the one I initially homed in on: the vagueness of Friedman’s definition of ‘assumptions,’ and how this compares to those used in science. This found its best elucidation with the philosopher Alan Musgrave. Musgrave argued that assumptions have clear – if unspoken – definitions within science. There are negligibility assumptions, which eliminate a known but negligible variable or variables (a closed economy is a good example, because it eliminates imports/exports and capital flows). There are domain assumptions, for which the theory is only true as long as the assumption holds (oligopoly theory is only true for oligopolies).
There are then heuristic assumptions, which can be something of a ‘fudge’: a counterfactual model of the system (firms equating MC to MR is a good example of this). However, these are often used for pedagogical purposes and dropped before too long. Insofar as they remain, they require rigorous empirical testing, which I have not seen for the MC=MR explanation of firms. Furthermore, heuristic assumptions are only used if internal mechanics cannot be identified or modeled. In the case of firms, we do know how most firms price, and it is easy to model.
The fourth problem is related to the above: Friedman misunderstands the purpose of science. The task of science is not merely to create a ‘black box’ that gives rise to a set of predictions, but to explain phenomena: how they arise; what role each component of a system fills; how these components interact with each other. The system is always under ongoing investigation, because we always want to know what is going on under the bonnet. Whatever the efficacy of their predictions, theories are only as good as their assumptions, and relaxing an assumption is always a positive step.
Consider the following theory’s superb record for prediction about when water will freeze or boil. The theory postulates that water behaves as if there were a water devil who gets angry at 32 degrees and 212 degrees Fahrenheit and alters the chemical state accordingly to ice or to steam. In a superficial sense, the water-devil theory is successful for the immediate problem at hand. But the molecular insight that water is comprised of two molecules of hydrogen and one molecule of oxygen not only led to predictive success, but also led to “better problems” (i.e., the growth of modern chemistry).
If economists want to offer lucid explanations of the economy, they are heading down the wrong path (in fact this is something employers have complained about with economics graduates: lost in theory, little to no practical knowledge).
The fifth problem is one that is specific to social sciences, one that I touched on recently: different institutional contexts can mean economies behave differently. Without an understanding of this context, and whether it matches up with the mechanics of our models, we cannot know if the model applies or not. Just because a model has proven useful in one situation or location, it doesn’t guarantee that it will be useful elsewhere, as institutional differences might render it obsolete.
The final problem, less general but important, is that certain assumptions can preclude the study of certain areas. If I suggested a model of planetary collision that had one planet, you would rightly reject the model outright. Similarly, in a world with perfect information, the function of many services that rely on knowledge – data entry, lawyers and financial advisors, for example – is nullified. There is actually good reason to believe a frictionless world such as the one at the core of neoclassicism renders the role of many firms and entrepreneurs obsolete. Hence, we must be careful about the possibility of certain assumptions invalidating the area we are studying.
In my opinion, Friedman’s essay is incoherent even on its own terms. He does not define the word ‘assumption,’ nor does he define the word ‘prediction.’ The incoherence of the essay can be seen in Friedman’s own examples of marginalist theories of the firm. Friedman uses his newfound, supposedly evidence-driven methodology as grounds for rejecting early evidence against these theories. He is able to do this because he has not defined ‘prediction,’ and so can use it in whatever way suits his preordained conclusions. But Friedman does not even offer any testable predictions for marginalist theories of the firm. In fact, he doesn’t offer any testable predictions at all.
Friedman’s essay has economists occupying a strange methodological purgatory, where they seem unreceptive to both internal critiques of their theories, and their testable predictions. This follows directly from Friedman’s ambiguous position. My position, on the other hand, is that the use and abuse of assumptions is always something of a judgment call. Part of learning how to develop, inform and reject theories is having an eye for when your model, or another’s, has done the scientific equivalent of jumping the shark. Obviously, I believe this is the case with large areas of economics, but discussing that is beyond the scope of this post. Ultimately, economists have to change their stance on assumptions if heterodox schools have any chance of persuading them.
Recently, I’ve been reading a lot from the school of institutional economics. Consequently, I have noticed another problem with the way economists approach theory and evidence: the lack of institutional considerations. This can blind economists to the fact that they may be studying entirely different phenomena due to differences between countries, periods of history, companies, genders, cultures and much more.
The standard procedure of economists is to derive a model rigorously, based on a set of assumptions or axioms. Economists, unlike physicists, cannot perform controlled experiments in order to verify these models. Instead, empirical corroboration entails the use of econometrics to verify predictions. Economists must rely on collections of data, sometimes from disparate sources, and try to ‘correct’ these collections of data for said disparities. Economists then perform regressions in an attempt to isolate the relationship between two variables, and cautiously interpret the results. As explained more fully in the paragraphs below, the problem with this approach is that institutional differences could mean that some of the data collections are simply irrelevant, whether or not they disagree with the predictions of the theory in question.
Problems with this Methodology
It appears that underlying this methodology used by economists to evaluate and analyse collections of data is a search for unifying principles that can be applied to all economies across space and time. The economic models of both neoclassical and heterodox schools reflect a discipline aiming to isolate the true mechanics of the economy and build a model around them. The mentality often seems to be that, if only we could isolate the true mechanics of the economy, we’d be able to understand the economy and make informed policy decisions based on our ideal framework.
I expect many economists would probably agree that the institutional, legal, and cultural contexts are not the same for all economies. However, many economic models and the economist’s rhetoric reflect a discipline looking to uncover an equivalent of physical laws. Indeed, Larry Summers went so far as to claim that “the laws of economics are like the laws of engineering. One set of laws works everywhere.”
Even though most rational minds would disagree with Larry Summers, I find there is a tendency among economists to view the institutional, legal, and cultural contexts as ‘constraints’ against which the ‘underlying mechanics’ of the economy are continually pushing. However, there is good reason to believe that the ‘real’ mechanics of the economy are determined by the context in which the economy operates, rather than said context merely influencing the economy exogenously. Here are some historic and contemporary examples to illustrate my point.
Industrialisation: the US versus England
English firms were fairly small during the industrial revolution. For reasons beyond the scope of this blog post, firms typically took it upon themselves to educate and train new employees on the job. Such a system diminishes the need for state education, at least from a labour market standpoint, and it wasn’t until the late 19th century that public education was finally established, by which time England was industrialised and the old system was becoming obsolete. In contrast, the USA followed a different path. During the growth period of the US, firms generally emphasised large production lines, and had a more ‘flexible’ approach to employment. Such an approach required that firms could rely on the competence of the average worker, and over the course of the US industrial revolution state education increased substantially, reaching something approximating a fully public system at around the same time as England, even though England was by then at a much later stage of its development. Both strategies successfully industrialised their countries; both presented different needs from a policy perspective. But using a single model to inform policy in these two countries would clearly be a mistake.
A similar contrast can be seen with Denmark and Japan. Historically, Japan has had a policy of lifelong employment, which means a majority of workers are, well, employed for life (the model may be waning due to the effects of the lost decade, but it was robust during Japan’s impressive industrialisation period). What would be the effect of restrictions on hiring and firing with such a model? It’s highly unlikely there would be much effect; in fact, the model itself is partly based on such regulations. But what if similar restrictions were applied to Denmark’s dynamic ‘flexicurity‘ model, in which hiring and firing is incredibly easy but there are strong social safety nets? I expect it would cause a lot of problems for employers and employees alike, as Danish firm’s strategies are built around being able to gain and shed workers quickly. On top of that, the safety net makes workers more willing to accept such treatment, as well as having obvious humanitarian attractions.
Again, though these two models are different – almost diametrically opposed, in fact – both have coped with recessions relatively well (in terms of unemployment). The countries simply have different institutions that operate under different mechanics, and no model could capture both (feel free to read that as a challenge). Despite this, Japan has recently enacted some ‘neoliberal’ reforms, perhaps based on the mistaken belief that they need to ‘free up’ the ‘underlying’ mechanics of the economy. Time will tell whether or not this was a smart move.
The Scandinavian Ideal
Apart from labour markets, there is another good example of interdependent institutions, laws and culture: the oft-cited Sweden. Both free marketeers and leftists like to hold Sweden up as an example of their ideas in action. “Look at the vast redistribution, unions and public goods!” is the cry of the leftists. The rightists, meanwhile, assert that beneath such institutions lies a relatively light-touch, ‘neoliberal’ regulatory structure. In many ways both are right; in many more ways both are wrong. Each side takes the Swedish economy and suggests that policy X, Y or Z is what makes it work, but neither appreciates how the institutions they each identify fit together.
Sweden is historically a high-trust society, and as such its regulation is relatively simple. Even contract law is far less complex than what you will find in the UK or the States. Many businesses do something akin to ‘self-regulation,’ reporting their own data to government agencies. Similarly, while it is questionable whether the generous welfare state is a cause of the trust, it is not unreasonable to suggest that the two are complementary. And, as in the case of Denmark, generous safety nets go well with light regulation in terms of dynamism. The approach has serious attractions, but only if the two institutions are combined; indeed, it may well be that trust is a necessary condition for both of them in the first place. Once more it is clear that particular historical circumstances have given rise to a specific set of ‘optimal’ policies that could not simply be applied elsewhere.
So if we take data points from such disparate countries, is it really meaningful to try to ‘adjust’ them for this type of difference? What we are studying are economies with very different underlying mechanics. To aggregate over them and take the average result is to reduce the data to meaninglessness. What is needed is a historical, institutional perspective that understands how different aspects of the economy fit together, and how the economy fits into the background of politics, history and culture (not to mention the environment – on an island, for example, even a corner shop can be a monopoly).
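To make the aggregation worry concrete, here is a minimal, purely illustrative sketch; the two ‘economies,’ the policy variable and every number are invented for the example, not drawn from real data. Each economy has a clear within-country relationship between the policy and unemployment, but the relationships run in opposite directions, so the pooled regression produces a slope that describes neither.

```python
# Toy illustration (all numbers invented): two economies whose labour market
# mechanics differ, and what pooling their data does to a regression.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 200)  # some 'employment protection' index, say

# Economy A (think lifelong employment): more protection, lower unemployment.
y_a = 6 - 2 * x[:100] + rng.normal(0, 0.5, 100)
# Economy B (think flexicurity): more protection, higher unemployment.
y_b = 4 + 3 * x[100:] + rng.normal(0, 0.5, 100)

# Per-economy regressions recover the two different mechanics...
slope_a = np.polyfit(x[:100], y_a, 1)[0]   # roughly -2
slope_b = np.polyfit(x[100:], y_b, 1)[0]   # roughly +3

# ...but pooling the 'corrected' observations yields an average slope
# (roughly +0.5 here) that is true of neither economy.
y_pooled = np.concatenate([y_a, y_b])
slope_pooled = np.polyfit(x, y_pooled, 1)[0]

print(round(slope_a, 2), round(slope_b, 2), round(slope_pooled, 2))
```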
What is best for an economy will depend on its initial conditions and current institutions. These institutions are not ‘artificial’ impositions on some underlying economy; they are unavoidable political decisions born out of a specific historical context, and hopefully ones that fit the culture of the nation in question. It would be at best costly and destructive, and at worst simply impossible, to uproot them in search of some ideal. Any discussion of economic policy must therefore start from an acknowledgement of the mechanics created by different institutions.
Much of what I’m saying isn’t new at all. In fairness, most empirical papers in economics are careful not to announce that they have found surefire causal links, and there may well be econometric techniques designed to deal with the problems I have outlined. Nor am I suggesting that economists are unconcerned with institutions or history: development and Industrial Organisation economists speak of them frequently. Nevertheless, I believe the institutional considerations described above create a clear methodological problem for a large amount of economic theory, particularly macro.
This is because institutional considerations give social scientists even more reason than physical scientists to worry about assumptions and real-world mechanics, and hence give economists reason to pay close attention to the historical, institutional and legal context of the economies they study. They are another nail in the coffin of Milton Friedman’s methodology, which holds that abstract models based on “unrealistic” assumptions are the appropriate approach to economic theory. Such an approach cannot even begin to comprehend institutional differences, and applying any one theory – or group of theories – to every economy is therefore bound to cause problems.
Naturally, mainstream economists have been critical of Steve Keen’s Debunking Economics. I will do a brief series within a series to try and respond to some of these criticisms. In this part, I will respond to some of the main critiques of neoclassical theory that have generated controversy: demand curves, supply curves and the Cambridge Capital Controversies. In the next post, I will respond to criticisms of Keen’s own models and his take on the LTV, as well as anything else that has attracted criticism.
Note that this post will assume prior knowledge of Keen’s arguments, so if you haven’t yet read my summaries above (or better still, Keen’s book), then do it now.
Demand Curves
It seems there are some problems in this chapter. Keen mixes up some concepts and misquotes Mas-Colell. Having said that, he is broadly right. This is frustrating for someone on his ‘side,’ because it means mainstream economists can dismiss him when they shouldn’t.
Keen presents a quote in which Mas-Colell assumes a benevolent dictator redistributes income prior to trade, and asserts that this assumption serves to ensure market demand curves have the same properties as individual ones. In fact, Mas-Colell uses the assumption to ensure that a welfare function, not a price relationship, will be satisfied. It remains true that a PhD textbook assumes a benevolent dictator redistributes resources prior to trade, and that subsequent economists have used the same assumption, which is not a great indicator of the state of economics. But it was not an assumption used to overcome the Sonnenschein-Mantel-Debreu conditions.
More importantly (wonkish paragraph), it seems Keen lost some nuance in translating his critique into layman’s terms. He spends a lot of time talking about the Gorman polar form. This concerns the existence of a representative consumer for a set of indirect utility functions (‘indirect’ because utility is expressed as a function of prices and wealth rather than of the quantities of goods consumed), but Keen makes out that it is about the aggregation of preferences required for demand curves. Gorman is related to, but not the same as, the aggregation of demand curves. Keen also argues that consumers having identical preferences is the same as them being one consumer, but this needn’t be the case: just because you and I have the same preferences doesn’t make us the same person.
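For reference, the Gorman polar form – in the standard graduate-textbook notation, which is not how Keen presents it – requires each consumer’s indirect utility to be linear in wealth with a common slope:

$$v_i(p, w_i) = a_i(p) + b(p)\,w_i.$$

Because the wealth term enters with the same $b(p)$ for everyone, Engel curves are parallel straight lines and aggregate demand depends only on total wealth $\sum_i w_i$, so a representative consumer exists. It is a condition about representing aggregate behaviour, not a proof about the slope of market demand curves.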
Despite this, the competing wealth and substitution effects do create the conditions described by Keen. However, they only apply under general equilibrium – under which wealth effects are present – and not partial equilibrium – under which they are assumed away. Keen does not distinguish between the two.
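To see where those wealth effects live, it may help to write down the Slutsky decomposition (standard notation, included as background rather than as Keen’s own algebra):

$$\frac{\partial x_i(p,w)}{\partial p_j} \;=\; \underbrace{\frac{\partial h_i(p,u)}{\partial p_j}}_{\text{substitution effect}} \;-\; \underbrace{x_j(p,w)\,\frac{\partial x_i(p,w)}{\partial w}}_{\text{wealth effect}}.$$

The substitution term is well behaved; the wealth term is not, and it is this term that partial equilibrium treatments typically assume away (small budget shares, quasilinear utility) and that general equilibrium cannot ignore, especially once price changes also revalue endowments. The SMD possibilities arise from heterogeneous wealth effects of exactly this kind.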
In summary, Keen is correct that neoclassical economists could not rigorously ‘prove’ the existence of downward-sloping market demand curves. Keen himself says it is reasonable to assume that demand falls as price rises, and classical economists were content to treat this as an observed empirical regularity. Neoclassical economists ended up having to defer to empirical reality when faced with the SMD conundrum, and thus gained no insight beyond the classical economists, except to demonstrate that their preferred technique – reductionism – does not work. For this reason, I interpret the SMD conditions primarily as a demonstration of the limits of reductionism (though some fellow heterodox economists might disagree).
Supply Curves
The proposition here is pretty simple: an individual participant in perfect competition has only a tiny effect on price. That effect is small enough to ignore at the level of the individual firm, which is the neoclassical economists’ main defence. However, they ignore that, as Keen says, the difference is both “subtle and the size of an elephant.” Once you aggregate a group of infinitesimally small firms each making an incredibly small deviation from profit maximisation, you get a result that is far from the one given by the neoclassical formula. Result? We must know the nature of the MC, MR and demand curves to know both price and quantity, just as with a monopoly. Neoclassical theory – at this level – gives no reason to prefer perfect competition to monopoly, and a supply curve cannot be derived. From what I’ve seen, the critics ignore the effect of adding up the tiny mistakes, focusing instead on how tiny each one is at the individual level.
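One stylised way to see the order-of-magnitude point (a sketch under a linear-demand assumption, not Keen’s own derivation): with market demand $P(Q) = a - bQ$ and output $Q = \sum_{i=1}^{n} q_i$ spread across $n$ similar firms, the term each price-taking firm drops from its first-order condition is its own price effect,

$$q_i \frac{dP}{dQ} = -b\,q_i,$$

which is of order $1/n$ and so looks negligible. Summed across the industry, however, the dropped terms come to

$$\sum_{i=1}^{n} q_i \frac{dP}{dQ} = -b\,Q,$$

which does not shrink as $n$ grows: it is exactly the wedge between price and industry-level marginal revenue, $P + Q\,dP/dQ$. Individually trivial, collectively anything but.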
Economists have some other defences, but I interpret them as own goals. For example, there is the argument that, under perfect competition, firms are price takers by assumption. They cannot have any effect on price, by assumption. But this basically amounts to assuming the price is set by an exogenous central authority, which is odd for a model of perfect competition.
Another argument is that setting MC=MR is itself just an assumption. This is a strange path to take for a theory that prides itself on internal consistency and profit maximisation: it acknowledges that MC=MR will not quite maximise profits, and so amounts to assuming that firms are not profit maximisers. There is also the similar argument that firms don’t take Keen’s problems into account in real life, so they don’t matter. This is a huge own goal, given that most textbooks argue it doesn’t matter what firms actually do in real life. I’m quite happy to accept that how firms actually price does matter – but that would involve abandoning the marginalist theory of the firm in favour of cost-plus pricing.
So, now that we have all finished discussing how many angels can dance on a pinhead (turns out it was slightly fewer than economists thought), let’s just start using more realistic theories of the firm and forget the mess that is marginalism.
Cambridge Capital Controversies
There are swathes of literature on this and I cannot hope to explore them all. The main thing I have noticed, and want to discuss, is that economists only seem to focus on capital reswitching when discussing this, and defer to empirical evidence to suggest it is negligible. I have a few problems with this:
(1) The empirical evidence is mixed, and some of it suggests reswitching is more common than economists would like to think. Furthermore, reswitching is incredibly hard to observe, and so cannot be dismissed so easily.
(2) Most importantly, the Capital Controversies were not just, or even primarily, about reswitching. Sraffa showed a number of things: that demand and supply are not an adequate explanation of static resource allocation; that the distribution of income between wages, profits and other returns must be known before prices can be calculated; and that factors of production cannot be said to be rewarded according to their ‘marginal product.’ For me these results are more important, and they apply to many models used today, such as Cobb-Douglas and other production functions, and the Solow growth model.
With all three of the examples I have discussed, economists have tried to defer to empirical evidence to dismiss problems with their causal mechanics. Yet economists do not generally regard empirical evidence about causal mechanics as important (the theory of the firm being the primary example), insisting instead on rigorous logical consistency. Surely, to be completely logically consistent, economists should at least be willing to experiment with the potential effects of SMD and reswitching in general equilibrium models and see what happens? Robert Vienneau has various discussions of this.
The common thread is that economists seem incredibly adept at assuming their conclusions. Of course, you can get around any critique with an appropriate assumption, but as I’ve discussed, theories are only as good as their assumptions, and assumptions should not be used simply to protect core beliefs and reach palatable conclusions. Having said that, Keen’s book isn’t perfect (which is to be expected if you try to take on every aspect of economics in one book), and there are worthwhile criticisms out there. Nevertheless, Keen’s critique as a whole remains intact, and leaves very little of what is taught on economics courses standing.
P.S. Feel free to use the comments space to discuss any critiques of areas I have not covered/said I will cover.
I have never thought of the macroeconomic production function as a rigorously justifiable concept. … It is either an illuminating parable, or else a mere device for handling data, to be used so long as it gives good empirical results, and to be abandoned as soon as it doesn’t, or as soon as something else better comes along.
- Robert Solow
When speaking about production and output, economists generally refer to ‘factors of production’: things that are put into the production process to produce something else. Most of the time they use two factors, ‘capital’ and ‘labour.’ These are the presumed inputs in theories of the firm and supply curves, where the firm takes their values and, after some mathematical manipulation, produces a certain amount of output. They are also used in a macroeconomic construct known as a ‘production function,’ which does something similar for the entire economy. There are various production functions using different maths and including other variables, such as technology or productivity – the most famous is the Cobb-Douglas.
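For readers who haven’t seen it, the Cobb-Douglas function is usually written in something like the following textbook form:

$$Y = A\,K^{\alpha}L^{\beta},$$

where $Y$ is output, $K$ the capital input, $L$ the labour input, $A$ a ‘technology’ or productivity parameter, and $\alpha$ and $\beta$ the output elasticities of capital and labour (often with $\alpha + \beta = 1$ imposed for constant returns to scale).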
The problem with this framework is that it has long been known to be logically questionable. Anyone who has taken a science class beyond a basic level will know that checking your units – that they are consistent and balance on both sides of an equation – is emphasised repeatedly. But this seems to be thrown out of the window in the basic analysis of production functions and firm behaviour.
The analysis of production takes two physical inputs – most likely capital and labour. Generally, the inputs are also assumed to be clay-like: divisible into infinitely small quantities. The inputs are combined (as far as I can see, this means flung together inside a black box) to produce a physical output of some other good, which is of course also infinitely divisible and clay-like. Labour is measured in hours of work; capital in terms of money. This is where the problems start.
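To make the units complaint explicit, here is a back-of-the-envelope dimensional check (my own illustration, with arbitrarily chosen units): suppose output is measured in widgets per year, capital in pounds and labour in hours per year. For the Cobb-Douglas form above to balance, the ‘technology’ parameter has to carry whatever units are left over:

$$[A] \;=\; \frac{\text{widgets}\cdot\text{year}^{-1}}{\text{£}^{\alpha}\,\left(\text{hours}\cdot\text{year}^{-1}\right)^{\beta}}.$$

Those units change whenever the estimated exponents change, which makes it hard to read $A$ as a measure of anything physical, and harder still to compare estimates of it across studies or countries.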
The Cambridge Capital Controversies revealed many problems with using a monetary value to measure capital equipment, certainly within a theory of distribution. However, there is another, far simpler and perhaps more fundamental objection: by definition, we are supposed to be measuring physical units of input, so it is simply not coherent to measure one of them in terms of cost. If measuring by cost were the rule, what would be the justification for not lumping labour in with capital and having a single input, perhaps labelled ‘stuff’? Whatever answer you give to that question is also a justification for not measuring capital in money.
If we decide to use physical inputs, it seems there are ways around the problem. Instead of labelling one input ‘capital,’ we could consider a certain type of capital good – say, shovels with which to equip some ditch-digging labourers. It is fair to assume these are roughly the same and so we can add them up. However, this method lays bare problems that the blanket term ‘capital’ previously obscured.
First, we clearly need more than just people and shovels to dig a ditch. We might need wheelbarrows, land, a skip, sustenance for the labourers, transport for labourers, perhaps a supervisor – in fact, there is potentially an incredibly large number of factors of production, something I’ve noted before. It becomes computationally difficult or even impossible to include everything that contributes to production, and some factors will simply be immeasurable.
Second, it is clear that these objects are not perfectly divisible. With ‘capital’ and ‘labour,’ we could divide both money and labour time into infinitely small units. But once we allow for production being ‘lumpy,’ the functions are no longer smooth and differentiable, and as such marginal productivities simply do not make sense.* Furthermore, lumpiness undermines the idea of an elasticity of substitution – the rate at which you can substitute one input for the other – since taking away a ‘lump’ can simply make output fall to zero (this is also something I’ve touched on before).
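As a toy illustration (an entirely invented production process, not drawn from any model in the literature): suppose digging a ditch needs labourers, shovels and at least one skip for the spoil. Output is then a step function of integer inputs, so ‘marginal products’ jump between zero and a whole lump, and removing an essential lump sends output to zero rather than shading it down smoothly.

```python
# Toy 'lumpy' production function (invented): metres of ditch dug per day,
# given integer counts of labourers, shovels and skips.
def ditch_output(labourers: int, shovels: int, skips: int) -> int:
    if skips < 1:                         # no skip for the spoil: nothing gets dug
        return 0
    return 5 * min(labourers, shovels)    # each equipped labourer digs 5 metres

base = ditch_output(labourers=4, shovels=4, skips=1)   # 20 metres

print(ditch_output(5, 4, 1) - base)   # 'marginal product' of a 5th labourer: 0
print(ditch_output(4, 5, 1) - base)   # 'marginal product' of a 5th shovel: 0
print(base - ditch_output(3, 4, 1))   # losing one labourer: output drops by a whole 5
print(ditch_output(4, 4, 0))          # losing the only skip: output falls to 0
```

Nothing here is differentiable, so the marginal conditions of the textbook theory have nothing to attach to.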
Economists will likely have various rebuttals to this style of thinking. The most common will be that Cobb-Douglas and the various theories of the firm make good, testable predictions. But in fact their predictions leave a lot to be desired: firms do not behave the way economists predict, and the Cobb-Douglas production function has a poor empirical record (economists generally refer to the initial estimates made by the creators of the model, but things have changed since then).
The other defence will be similar but not quite the same: that it is just a simplification, used to illuminate a particular aspect of a problem. The fact is that making counterfactual assumptions about the nature of a system does not illuminate anything; it simply tells us about a different universe. Furthermore, these simplifications are not even internally consistent: even within the logic of ‘labour’ and ‘capital,’ it has been shown repeatedly that the conditions under which either can be aggregated are incredibly stringent. Similar arguments apply to other aggregate parameters used by economists, such as aggregate measures of technology or productivity.
Simple macroeconomic production functions smack of trying to turn macro into ‘applied microeconomics.’ But it has repeatedly been shown that aggregation problems will always be present, and that it is better to study emergent phenomena than to extrapolate microeconomic parameters until they have no real meaning. At the other end, microeconomic production theory is just an attempt to reduce everything to ‘rigorously’ derived, smoothly differentiable intersecting lines, rather than simply accepting empirical realities about firms and micro behaviour, and opening up the firm to see what happens inside instead of treating it as a black box.
Overall, it seems the whole idea of production functions and factors of production as anything other than vague, qualitative concepts is something of a dead end.
*I similarly expect that, once we allow that preferences may be lumpy, utility functions are no longer smooth. But lumpy preferences is something for another time.