I rarely (never) post based solely on a quick thought or quote, but this just struck me as too good not to highlight. It’s from a book called ‘Capital as Power’ by Jonathan Nitzan and Shimshon Bichler, which challenges both the neoclassical and Marxian conceptions of capital, and is freely available online. The passage in question pertains to the way neoclassical economics has dealt with the problems highlighted during the well-documented Cambridge Capital Controversies:
The first and most common solution has been to gloss the problem over – or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists – teachers and students – blissfully unaware of the whole debacle.
A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.
The second point is something I independently noticed: appealing to practicality when it suits the modeller, but insisting it doesn’t matter elsewhere. If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational, etc. etc. If we do that, pretty soon the assumptions all fall away and not much is left.
However, it’s the authors’ third point that really hits home:
The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. The latter models try to describe – conceptually, that is – every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.
General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention. Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.
In essence, neoclassical economics dealt with its inability to model capital by… eschewing any analysis of capital. However, the theoretical importance of capital for understanding capitalism (duh) means that this has turned neoclassical ‘theory’ into a highly inadequate tool for doing what theory is supposed to do, which is to further our understanding.
Apparently, if you keep evading logical, methodological and empirical problems, it catches up with you! Who knew?
How economics is taught has been the subject of a lot of debate recently. Although there have been a lot of good points made, in my opinion Andrew Lainton’s recent blog post hits the nail on the head: we need to begin economics education with a discussion of key, contested ideas.
Starting with contested ideas has a few major benefits. First, it immediately shows students what economics is: a subject where there is a lot of disagreement, and where key ideas are often not well understood, even by the best. Second, it allows students to grapple with the kinds of critical questions that, in my experience, people generally have in mind when they think of ‘economics’: where do growth and profits come from? How do things ‘work’? Third, it allows us to intertwine the teaching of these concepts with economic history and the history of thought.
Lainton’s key contested idea is savings: how naive national accounting might make you believe that saving instantly creates investment; how Kalecki and Keynes showed that it’s closer to the other way around; and on to modern debates that add nuances to these simplified expositions. Naturally, this would also tie in with debates about the banking system, loanable funds and endogenous versus exogenous money. On top of ‘savings’, I can think of quite a few other important economic ideas that are not agreed upon, but are central to the discipline:
Decision making and expectations
How do people make decisions? This question is clearly central to economics, as any economic model that explicitly includes agents must make some assumption about what drives these agents’ decisions. In modern economics, an agent’s decision rule generally rests on seeking some form of ‘gain’, whether subjective satisfaction or simply units of money. Economists themselves have also, to their credit, pushed behavioural and even neurological investigations into decision making. However, much of this has yet to filter down to the main models/courses, even though it should really be at the forefront of economic modelling.
All too often, the most mathematically tractable models such as utility maximisation and rational expectations are simply assumed, perhaps with caveats, but not with any real discussion of whether they represent human behaviour. Well established psychological characteristics and behavioural heuristics/biases are ignored, even though they may alter the analysis of choice in fundamental ways. Public officials are often assumed to follow behaviour that creates their personally preferred outcome, despite important evidence to the contrary. It is assumed the public understands the fundamentals of the economy, even though a lot of evidence suggests this is way, way off. Decisions in the workplace that concern morale, hierarchy and norms are often disregarded, despite evidence that they are of utmost importance.
However, my point isn’t necessarily about which models are right or wrong. It’s that these debates about how people act, and based on which motives and expectations, are not only incredibly interesting but are incredibly important. Such debates could also tie in with a comprehensive discussion of the Lucas Critique – not as a binary phenomenon that can be solved with microfoundations, but as an ongoing problem that requires us to evaluate the way the parameters of the economy change over time and with policy, culture and so forth. This would allow students to see how the economy evolves, and how its behaviour depends on fundamental questions about human behaviour.
Theories of value
Theories of value underlie economic theories, whether economists like it or not – in fact, it’s pretty difficult (impossible?) to judge the “performance” of the economy without a theory of value. Classical economics was built on the Labour Theory of Value (LTV), and distinguished between the price of an object (exchange-value) and its value to whoever used it (use-value). Marginalist economics is built on the Subjective Theory of Value (STV), which tends to combine use and exchange value into mathematically ordered preferences. GDP calculations simply measure ‘value added’ as a monetary quantity. There are also other, albeit less popular, theories of value, such as those based on agriculture and energy.
A crucial point here is that the concept of ‘value’ is not necessarily well-defined, and each theory of value generally has something slightly different in mind when it uses the term. For the (Marxist) LTV, value refers to an objective quality: the total productive ‘value’ in the economy, which is expressed as an exchange relationship between commodities, and originates solely from labour. For the STV, value refers to the subjective ‘surplus’ gained from transactions, which neoclassical theory seeks to optimise to maximise social welfare. For theories of value based on the natural sciences, value refers to more physical qualities, such as how energy is transformed in production and the limits to this process. However, the common ground between theories is the question of how we create more than we had – and what to do about it.
I expect a lot of economists would regard the STV as largely obvious and not up for debate, but if it’s so obvious and important, that’s even more reason to study it explicitly – after all, Newton’s Laws are not tucked away underneath classical physics: they are explicit, and their empirical relevance is frequently demonstrated to students. Clearly, we can’t demonstrate the empirical relevance of a theory of value (hey, it’s almost as if economics is not a science!), but we can discuss in depth how it forms a relevant and necessary backdrop to theories about utility, surplus and profit.
What is economics?
It’s a testament to how contested the field of economics is that even the definition is not agreed upon. Open a ‘pop’ economics book and you’ll find a definition such as “the study of how people respond to incentives”. Another popular mainstream definition is “the allocation of scarce resources” or even “satisfying unlimited wants with scarce resources”. Classical economics – and more recently, Sraffians – considers economics the study of how society reproduces itself. Austrians might give you a definition that says something about human action and the market system. The definition given by Wikipedia is “the study of production, distribution and consumption”. I’m sure there are many more out there.
Agreeing on a definition of economics would put the discipline on surer footing. Right now it occupies a space where it is simultaneously used as an all-encompassing worldview, and as a very narrow toolkit that only investigates one or two things at a time (I expect many economists would basically consider themselves applied statisticians or econometricians). I sometimes even find that economists fall back on defining economics by “what economists do”, which is a rather weak (and circular) definition. Given that we are not even sure which problems economic theories are designed to understand and solve, is it any wonder people can’t agree on which ones to use?
This post is by no means exhaustive. Off the top of my head, some other relevant contested ideas might be: capital; money; how to measure the economy; different economic systems; institutions; policy and economists’ relationship with it. This kind of approach is surely better for furthering students’ understanding than simply teaching a set of abstract theories which are labelled ‘economics’, often with little critical engagement. It would open students’ minds to the kinds of difficult and relevant questions that are currently either shied away from, or only open to those who have completed an economics PhD. I expect many would also leave with an understanding of economics closer to what students currently expect (and do not really get) from an economics education.
I have a new article in Pieria, arguing that the image of mainstream economists as rabid free-marketeers is not entirely without foundation:
There is quite a disconnect between mainstream economics as seen in the public eye and as seen by economists themselves. A lot of media criticism of economics – and the Guardian seems to be going mad on this recently – paints mainstream economic theory as supporting a ‘free market’ or ‘neoliberal’ worldview, possibly in cahoots with the elites, and largely unconcerned with human welfare. Economists tend to switch off in the face of such criticisms, arguing that the majority of them, along with their theories, do not support such policies…
…Yet I think there is a good argument to be made, not that mainstream economics necessarily implies particular policies, but that it is easily utilised to push a certain worldview, based on which questions it asks and how the answers are modeled and presented. This worldview is what the public and journalists all too frequently encounter as ‘economics’, which is why they often conflate neoclassical with neoliberal ideas.
An interesting question – which I do not explore in the article, but have written about before, as has Peter Dorman – is the disparity between ‘econ101’ rhetoric and what economics actually implies. ‘Economics’ in the public image is generally used to justify counterintuitive or unpalatable positions on issues like the minimum wage and austerity, even though arguing unambiguously for such positions – particularly austerity – is actually quite ignorant of ‘economics’ as a field.
Do I blame economists for this? Partly: I think economists should be more worried about their public image, whereas you often get the impression they are more concerned with being enlightened technocrats than anything else. However, politicisation isn’t unique to economics (consider climate change denial or evolution/religion), so it’s a bit unfair to single out economists in that sense. Having said that, 99% of scientists in those fields are united against the pseudo-scientific caricatures of them in the media, whereas economists are far less able to convey a clear message to the public. In short, perhaps economists should figure things out amongst themselves before they rattle off lists of policy proposals based on their models.
Anyway, enjoy the piece!
Something has been bothering me about the way evidence is (sometimes) used in economics and econometrics: theories are assumed throughout interpretation of the data. The result is that it’s hard to end up questioning the model being used.
Let me give some examples. The delightful fellas at econjobrumours once disputed my argument that supply curves are flat or slope downward by noting that, yes, Virginia, in conditions where firms have market power (high demand, drought pricing) prices tend to go up. Apparently this “simple, empirical point” suffices to refute the idea that supply curves do anything but slope upward. But this is not true. After all, “supply curves slope downward/upward/wiggle around all over the place” is not an empirical statement. It is an interpretation of empirical evidence that also hinges on the relevance of the theoretical concept of the supply curve itself. In fact, the evidence, taken as a whole, actually suggests that the demand-supply framework is at best incomplete.
This is because we have two major pieces of evidence on this matter: higher demand/more market power increases price, and firms face constant or increasing returns to scale. These are contradictory when interpreted within the demand-supply framework, as they imply that the supply curve slopes in different directions. However, if we used a different model – say, one with a third term for ‘market power’, or a Kaleckian cost-plus model, where the mark-up is a function of the “degree of monopoly” – that would no longer be the case. The rising supply curve rests on the idea that increasing prices reflect increasing costs, and therefore cannot incorporate these possibilities.
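To make the contrast concrete, here is a minimal sketch of the cost-plus alternative (the function name and numbers are my own illustration, not taken from any particular Kaleckian text): price is a mark-up over roughly constant unit costs, with the mark-up driven by market power rather than by rising marginal cost.

```python
def kalecki_price(unit_cost, degree_of_monopoly):
    """Cost-plus price: a mark-up over (roughly constant) unit cost.

    As an illustrative simplification, the 'degree of monopoly' is
    treated directly as the mark-up rate: more market power means a
    higher mark-up, and hence a higher price, even if unit costs are
    flat or falling as output expands.
    """
    markup = degree_of_monopoly  # e.g. 0.2 => price is 20% over cost
    return unit_cost * (1 + markup)

# Same (constant) unit cost, different market conditions:
competitive = kalecki_price(10.0, 0.05)   # weak market power
monopolistic = kalecki_price(10.0, 0.50)  # strong market power
```

Under this framing the two pieces of evidence are no longer in tension: prices rise with market power (high demand, drought pricing) even though unit costs do not rise with the scale of output.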
Similarly, many empirical econometric papers (a recent one here) use the neoclassical production function, which states that output is derived from labour and capital, plus a few parameters attached to the variables, as a way to interpret the data. However, this again requires that we assume capital and labour, and the parameters attached to them, are meaningful, and that the data reflect their properties rather than something else. For example, the volume of labour employed moving a certain way only implies something about the ‘elasticity of substitution’ (the rate at which firms substitute between labour and capital) if you assume that there is an elasticity of substitution. However, the real-world ‘lumpiness’ of production may mean this is not the case, at least not in the smooth, differentiable way assumed by neoclassical theory.
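For reference, the parameter in question is standardly defined as follows (this is the textbook definition, not anything specific to the paper mentioned):

```latex
% Elasticity of substitution between capital K and labour L,
% where w/r is the ratio of the wage to the return on capital:
\sigma = \frac{d\ln(K/L)}{d\ln(w/r)}
% Reading movements in employment as evidence about \sigma
% presupposes a smooth, differentiable production function in
% which K/L can adjust continuously -- precisely what 'lumpy'
% production calls into question.
```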
Assuming such concepts when looking at data means that economics can become a game of ‘label the residual’, despite the various problems associated with the variables, concepts and parameters used. Indeed, Anwar Shaikh once pointed out that the seeming consistency between the Cobb-Douglas production function and the data was essentially tautological, and so using the function to interpret any data, even the word “humbug” on a graph, would seem to confirm the propositions of the theory, simply because they follow directly from the way it is set up.
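Shaikh’s argument can be sketched roughly like this (a standard reconstruction of the ‘humbug’ point, not his exact notation):

```latex
% National income accounting identity: output = wages + profits
Y \equiv wL + rK
% If the wage share \alpha = wL/Y is roughly constant over time --
% as it broadly is in the data -- the identity can be rearranged,
% up to a residual term B(t), as
Y = B(t)\, L^{\alpha} K^{1-\alpha}
% which has the Cobb-Douglas form regardless of the underlying
% technology. A good statistical 'fit' is then close to
% tautological rather than evidence for the theory.
```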
Joan Robinson made this basic point, albeit more strongly, concerning utility functions: we assume people are optimising utility, then fit whatever behaviour we observe into said utility function. In other words, we risk making the entire exercise “impregnably circular” (unless we extract some falsifiable propositions from it, that is). Frances Woolley’s admittedly self-indulgent playing around with utility functions and the concept of paternalism seems to demonstrate this point nicely.
Now, this problem is, to a certain extent, observed in all sciences – we must assume ‘mass’ is a meaningful concept to use Newton’s Laws, and so forth. However, in economics, properties are much harder to pin down, and so it seems to me that we must be more careful when making statements about them. Plus, in the murky world of statistics, we can lose sight of the fact that we are merely making tautological statements or running into problems of causality.
The economist might now ask how we would even begin to interpret the medley of data at our disposal without theory. Well, to make another tired science analogy, the advancement of science has often resulted not from superior ‘predictions’, but from identifying a closer representation of how the world works: the go-to example is Ptolemaic astronomy, which made superior predictions to its heliocentric rival but was still wrong. My answer is therefore the same as it has always been: economists need to make better use of case studies and experiments. If we find out what’s actually going on underneath the data, we can use this to establish causal connections before interpreting it. This way, we can avoid problems of circularity, tautologies, and of trapping ourselves within a particular model.
My newest article at Pieria provides an overview of the post-Keynesian theories of consumers, producers, money/banking and trade:
A common charge directed at heterodox economics is that it is defined as a negative and has little to offer in the way of an alternative to mainstream economics (at least, if we ignore the ‘extremes’ of Austrianism and Marxism). It’s true that heterodox economists, including myself, often spend more time criticising mainstream economics than we do offering alternative theories. Yet there is in fact a large amount of work on alternative theories of pricing, distribution, finance and trade. Below I will sketch out what is known as the ‘Post-Keynesian’ (PK) approach to economic theory….
The summary echoes what I’ve said before about the difference between mainstream and heterodox economics:
First, post-Keynesians tend to emphasise that key variables (wages, the rate of interest) are monetary, not real phenomena. This doesn’t mean the notion of the real is unimportant – far from it – but it does mean that it is often a poor starting point for analysis. Second, there is generally no special status accorded to particular variables. Consumers and producers are not ‘optimising’; trade between countries can be imbalanced for long periods of time; the economy can remain in a state of depressed demand and no adjustment of prices will save it. Third, there is a lot of emphasis on institutional considerations. Since prices, demand and trade depend somewhat on social norms and agreements, and since agents tend to fix their decisions for long periods of time to maintain a degree of certainty, different economic trends can persist based on historical path dependence, and there is no ‘one size fits all’ model.
I have to say that I’m not sure why post-Keynesians don’t spend more time on this stuff. I find the theories pretty comprehensive and quite obviously more grounded in reality than the neoclassical approach.
A short while ago, I tweeted that a show entitled “Economists Say the Funniest Things” – where economists opine on issues outside their domain – would make for good watching by those of us who inhabit the real world. Unfortunately, I can’t afford a show, so I’ll settle for a blog post.
I have previously posted on ‘economic imperialism’, but the examples in this post are, to put it bluntly, less serious, often crossing the line into simply silly. Economists sometimes like to transpose their incentive-driven, utility-maximising agents onto complex social problems, and claim that they have discovered the elegant mechanics underneath all the noise that the other social sciences study. They will also argue that those who object to their framework do it simply because they don’t like or understand maths, or because they can’t stomach the often unpalatable conclusions of the model. In fact, it is these economists who are seemingly unable to comprehend the phenomena they purport to study, preferring instead to solve equations, which they label ‘models’, but which do not actually ‘model’ the world at all, and which often seem to lead the economist to ridiculous conclusions.
I will put a standard disclaimer out there: I’m not so much attacking the entire economics profession as the ‘pop’ economics that you find in books, on the internet and, sadly, sometimes in policy circles. I hope many economists – those who are able to comprehend history, the complexity of human behaviour and above all the difference between models and reality – will find these examples equally absurd.
Economists do psychology
Naturally, the sometimes infuriating Freakonomics craze could warrant an entire post, as their ‘antics’ have angered many, including other economists. However, I am going to focus here on one of their less covered arguments: a story about incentives, which is at the beginning of their first book. It provides a nice introduction to wrongheaded economic imperialism, as this wooden insistence that, underneath everything, people are essentially driven by clear incentives underlies many of economists’ attempts to try their hand at other disciplines.
The story goes like this: an Israeli day care centre found that parents were picking up their children too late, so it introduced a small charge of $3 to try and disincentivise lateness. However, instead of discouraging this behaviour, the payment served to legitimise it and buy the parents peace of mind. The result was that lateness actually increased. Bizarrely, the Freakonomics duo decided that this story is consistent with economists’ way of thinking, and used it as an introduction to the idea that “incentives matter”. They argue that people actually face three different types of incentives: economic, moral and social. The idea is that the charges “substituted an economic incentive for a moral incentive (the guilt)”, with the implication that the daycare centre simply didn’t get the amount right. However, if this were true, treating guilt would be as simple as paying somebody that you had wronged.
The way people respond to incentives is in fact highly complex and unpredictable. Incentives that are too big or too small can have perverse effects. What’s more, how people will respond to any incentive depends on the perceived motives of the person offering it, and the implied motives of the person receiving it. Studies show that incentives can easily backfire if these motives are questionable, something that has had an impact on the field of organ donation: when people were offered money for donating, donations decreased. People simply no longer felt that they were helping people, only that they were making a bit of money. The Freakonomics guys do not engage with any of these well established psychological tendencies; they simply select three arbitrary and incommensurable concepts and proceed as if their analysis were obviously true. But it’s clear that, contrary to their mantra, claiming to be able to predict people’s response to incentives with certainty is simply a fool’s game.
Economists do history
Historians – at least, those who aren’t Niall Ferguson – try to emphasise context, combat euro-centric (and therefore usually capitalism-centric) narratives, and endlessly struggle against ‘Whig’ history, which suggests that history has naturally culminated in contemporary societies. History is therefore a prime stumbling ground for economists, whose models generally take place in a theoretical ‘vacuum’, take capitalist institutions and social relations as a given, and often model the economy as a deterministic time path or as in equilibrium. It seems that economists tend to see the ‘people respond to incentives’ behaviour outlined above as underlying history, and therefore believe that events naturally culminated in capitalistic behaviour; of course, the corollary is that deviations from this were caused by bad policy, externally imposed by governments.
This type of thinking is clear in Evsey Domar’s serfdom model, which attempted to explain the end of serfdom through notions of its profitability to the landowner. The model argues that if land is too plentiful relative to labour, this results in competition among landlords for workers, which drives wages up, and subsequently it becomes more profitable for landlords to use the institutions of serfdom and slavery to ‘put down’ labourers, rather than employ them for wages. Conversely, if land is scarce relative to labour, wages will remain low enough for wage labour to be profitable, and serfdom and slavery will disappear. Domar suggested that this explained the end of serfdom in Russia in the late 19th century.
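In schematic form (my own notation, not Domar’s), the model’s logic amounts to a simple cost comparison:

```latex
% w = market wage; c = cost of keeping a coerced labourer
% (subsistence plus enforcement). The landlord:
\text{coerces if } w > c, \qquad \text{hires if } w \le c
% With land abundant relative to labour, competition for scarce
% workers pushes w up, tipping the choice towards serfdom and
% slavery; with labour abundant, low wages make hiring cheaper.
```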
To be fair to Domar, he was more than ready to acknowledge the limitations of this model. One person who was less so, however, was Paul Krugman, who has used it as an illustration of why he considers economics the superior framework for social science. According to Krugman, models like Domar’s are an indication of how economics is “rigorous” and makes “generally correct predictions”. This latter characterisation is especially bizarre, because Krugman goes on to acknowledge that there are large areas of history the Domar model doesn’t explain, such as why serfdom was not reinstituted in Europe after the Black Death wiped out a large amount of the labour force, pushing up wages. According to Krugman, events such as these are “puzzles”. Surely they are just an indication that economists’ framework isn’t so great?
In fact, the Domar model actually doesn’t do a great job of explaining its prime example of 19th century Russia. The serf agreement was not simply forced onto peasants, but was a three way deal between the state, landlords and peasants: peasants had rights and were in many ways ‘free’, as long as they produced enough for the gentry, who were subsequently available for the military. What’s more, the 1861 ‘emancipation’ from serfdom was not instituted by the landlords based on considerations of profitability; the move was centrally directed by the state, based mostly on imperialist/defensive considerations after the Russian defeat in the Crimean War. Many landlords were resistant to the change, and though the legislation was passed, a large number of restrictions remained, some effectively extending serfdom.1 Overall, the incentive-based behaviour outlined by Domar is irrelevant to the broader story of social and political change.
The root of the issue is an assumption that is not atypical in economics: the idea that the capitalist institution – in this case wage labour – is the ‘natural’, underlying tendency, upon which artificial institutions like slavery and serfdom are ‘forced’. Indeed, Domar repeatedly refers to the wage labourer as the “free man”. But history shows us there are no natural, underlying institutions: capitalist, feudalist and slave(ist?) institutions are all complex, and their introduction is fragmented. Therefore, at worst, the Domar model is trivial: it suggests that if wage labour, serfdom and slavery are all easily available to landlords, they will choose the one most beneficial to them (in fairness, Domar acknowledges in one place that we might “question the need for [his model]”). However, you don’t need an economist to tell you this, and neither would they be able to tell you how such a situation arose in the first place. A historian would.
The next example continues our journey through Russian history, though perhaps that is stretching the definition of the word ‘history’. This one reminds me of a story – probably an urban myth – about a student at the University of Chicago, who fell asleep in one of Milton Friedman’s lectures. Friedman was furious, and demanded the student answer whichever question he had just asked. The student responded “I don’t know the question Professor Friedman, but the answer is a change in the money supply”. It’s a funny joke, until you realise that economists (in this case Irving Fisher*) actually write things like this:
There you have it, folks: belief in socialism is a monetary phenomenon. This is despite the fact that Russia, the centre of Bolshevism, wasn’t really capitalist at the time of its revolution but mostly feudalist, making Fisher’s discussion of workers and employers bargaining largely irrelevant. It was actually the increasing scarcity of land and food – not the instability of money – which robbed peasants of their lot. Fisher’s account also ignores the undeniable role of World War 1 (elsewhere as well as in Russia), which devastated large areas of the country and created an armed, disenchanted underclass accustomed to conflict. Contrary to what Fisher implies, I’m pretty sure that an oppressive regime drafting you for a largely pointless war, or taking away what little you have, does not only “appear to be social injustice”, but is social injustice; “changes in the buying power of money” are peripheral to it, and are themselves usually symptomatic of broader instability – economic or otherwise.
An economist does sociology (and more)
Perhaps nobody better characterises the term ‘Economic Imperialism’ than University of Chicago economist Gary Becker. Becker has used the economist’s toolkit to craft theories for everything from crime to addiction to the family, and in fact he won the Sveriges Riksbank Prize for his efforts (yes, it’s a fake Nobel yada yada). Naturally, Becker’s models were praised because they were rigorous and mathematical (a quick google search will reveal multiple people fawning over him for god knows what reason). While Becker himself is quite modest and seemingly well intentioned, his theories about human behaviour are so far from the truth it’s a wonder they have garnered any respect at all.
The first of these, Becker’s theory of ‘Rational Addiction’ (amusingly parodied in this video), suggests that those who are addicted to drugs are just following a rational long term utility-maximisation plan. This is the sort of thing that a normal person looks at and goes “erm, no”, as it is completely at odds with the internal and external struggles that addicts commonly face. “I’m just maximising my satisfaction” sounds like something an addict will tell you, but analysis of addiction generally has to go beyond that to be of any use.
It almost goes without saying that addicts do not plan their addiction because they think it will maximise their future satisfaction, and it is well established that people in general do not behave this way. Some economists have tried to use vague data points – such as the evidence that smokers adjust their habits due to expected tax increases – to ‘show’ people are rational and forward looking. However, it is obviously a leap from this highly stylised behaviour to suggest that smokers are perfectly rational and informed forward looking utility maximisers. In fact, the observed behaviour of addicts suggests that addiction is generally involuntary, and people become addicted because they are unaware of, or underestimate, the risks of addiction. Often it is not clear why people are addicted, even (or especially) to themselves.2
On top of this, the actual mechanics of addiction used in the theory are questionable. ‘Rational addiction’ occurs because past consumption of something builds up a ‘stock’ (with typically undefined units), increasing the pleasure you get from consuming it now. However, in the real world addiction is far more complex than this, and is associated with numerous, sometimes conflicting effects. For example, the theory of rational addiction cannot explain the ‘empty compulsion’ addicts feel once the brain has adapted or become satiated, resulting in a disappearance of the ‘high’, but not of the desire to continue, even if the addict’s conscious brain conflicts with this desire. What’s more, different drugs create different reactions inside the brain (not to mention psychological reactions): opiates like heroin tend to mimic certain neurotransmitters, whereas alcohol inhibits the brain’s ability to release (and coordinate the release of) neurotransmitters. These are disparate processes that cannot be captured by the economist’s utility function. Conversely, neurologists, psychologists and social workers have models that can explain such nuances, which are certainly the ones I’d turn to if I wanted to understand and deal with addiction.**
Becker’s second major theory of human behaviour is New Home Economics, or the theory of the family, which started with Becker’s 1965 paper on the allocation of time and culminated in his 1981 Treatise on the Family. As would be expected, the theory models families as a collection of rational agents optimising various preferences and operating according to their respective specialisations, and so it can easily be criticised along previously mentioned lines. However, I will not go over these arguments again.
Instead, the critiques I find of interest here are those by feminist economists, who generally take issue with Becker’s almost hilariously stereotypical depiction of the family. The head of the household – implied to be a man – is modelled as an ‘altruistic’, breadwinning agent who coordinates everything and makes sure it is OK, while the rest of the family accept his judgment as in their best interests (in other words, he is a benevolent dictator). Housework is done by the woman (as women have a ‘comparative advantage’ in housework), and is not counted as a contribution to the family pot, implying that said work is not similarly ‘altruistic’. One is forced to wonder whether the theory would be more suited to the 18th or 19th centuries – clearly, it precludes the study of non-traditional families. A real household that looked like this would probably be classed as abusive.
The theory has many other conceptual and explanatory problems. It could be viewed as an attempt to deal with the troublesome existence of the family unit by arguing it can be represented by a single optimising agent, similar to the way some perfectly competitive models deal with the firm. Economist Barbara Bergmann noted that the theory seems to lead to the “conclusion that the institutions depicted are benign, and that government intervention would be useless at best and probably harmful.” Yet this depiction is completely at odds with the obvious fact that families often exhibit conflicting or self-destructive behaviour. Bergmann goes further, arguing that Becker’s theory more generally leads to “preposterous conclusions”, among which is the ‘economic argument’ that women should embrace polygamy, and the idea that the decision to have children is only a function of parents’ ‘altruism’ and of the rate of interest. While the theory may be vaguely consistent with a few stylised facts about how income affects families, these are largely trivial and do not need Becker’s theory to explain them.
The third and final theory is Becker’s theory of crime, which unsurprisingly argues that criminals simply commit crimes because the benefits outweigh the costs. Criminals were said to calculate the ‘expected utility’ of a crime, multiplying the probability of being caught by the severity of the punishment. Becker’s cost-saving solution was to increase penalties but reduce enforcement, and also to increase enforcement of more costly crimes (which, in practice, means increasing enforcement in wealthy areas and decreasing it in poor areas).
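The arithmetic behind Becker’s prescription is simple enough to sketch in a few lines of Python – the numbers below are entirely hypothetical, purely for illustration. Under risk-neutrality, halving enforcement while scaling up the penalty leaves the expected deterrent unchanged on paper, which is exactly why a rare-but-harsh regime looks like a cost-saving bargain in the model:

```python
# A Becker-style offender compares the gain from a crime with its
# expected punishment: probability of capture times severity.
# All numbers are hypothetical, for illustration only.
def expected_net_gain(gain, p_caught, punishment):
    """Net payoff the model says an offender computes before a crime."""
    return gain - p_caught * punishment

# A frequent-but-mild regime and a rare-but-harsh regime deter
# identically on paper, but the second needs a tenth of the policing:
frequent_mild = expected_net_gain(100, 0.5, 300)   # p = 0.5, penalty = 300
rare_harsh = expected_net_gain(100, 0.05, 3000)    # p = 0.05, penalty = 3000
print(frequent_mild, rare_harsh)  # -50.0 -50.0: identical deterrents
```

Of course, real offenders respond far more to the probability term than to the severity term, which is precisely where this arithmetic breaks down.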
To be fair, Becker always warned against implementing an extreme version of his view, but as is often the case the caveats were not taken on board, and ideas like his seem to have had a substantial (negative) impact on law enforcement (the fact that Becker has a blog with notable judge Richard Posner should be a clue that he has an influence on the legal profession). Over the 1970s and 80s, law enforcement seemed to follow the Chicago-style prescriptions: punishments were increased, with mandatory sentencing introduced and incarceration rates rising. Meanwhile, particularly in cities, the number of police officers was reduced, as was general enforcement and surveillance. The well-documented wave of crime that followed/coincided with this, culminating in the late 1980s, led to the realisation that this approach was flawed, at which point different approaches to law enforcement were taken and crime started to go down. (I’m not going to go over the Freakonomics abortion explanation for this, though this paper has been acknowledged to show that at the very least the effect was smaller than they thought.)
Criminologists generally find that combating crime requires the opposite approach to the one Becker had in mind: frequent enforcement, modest penalties (note: commenter ‘TheHobbesian’ helpfully provides a link to ‘situational crime prevention‘, which is apparently gaining followers). It turns out that real criminals are not so bothered by the punishment for a crime, within reason, but by the likelihood of being caught. Most criminals do not even consider the punishment at all when committing a crime, particularly because many of them are under the influence of drugs when they do it. What’s more, punishments that are too severe can backfire, either because they end up being impossible to enforce or because, if a punishment is severe enough, a criminal may as well commit a more heinous crime. I expect an economist like Becker might respond that this just shows that criminals have ‘interesting utility functions’. I would respond that they need to get a grip on reality.
Economists are prone to thinking their framework is neat, useful and even universal, but actually it is quite a naive and one-dimensional view of human behaviour. When economists take their toolkit to other social sciences, they’d like to believe that they ‘simplify’ in such a manner that they get to the ‘underlying’ mechanics of issues; but they actually ‘simplify’ in such a manner that they often assume everything relevant away. This may make for compelling mathematics and entertaining books, but when we actually venture out into the real world these theories at best only touch on the surface of the story; at worst, they simply become absurd.
**A part of me says that someone like Becker probably wouldn’t rely on his theory, either. There is a joke about an academic economist who was offered a position at another university, and was conflicted about the choice. One of his students asked him why he didn’t simply choose the rational option. Puzzled, the professor responded “come on, this is serious”.
1. Crime, Cultural Conflict, and Justice in Rural Russia, 1856-1914 by Stephen P. Frank, pp. 6-7
2. The Theory of Addiction by Robert West, pp. 32-36, 75
My latest article, trying to sum up the problems with economists’ approach – in 3 words, “it’s too narrow”:
The question of whether mainstream (neoclassical) economics as a discipline is fit for purpose is well-trodden ground…
….[I think] economic theory is flawed, not necessarily because it is simply ‘wrong’, but because it is based on quite a rigid core framework that can restrict economists and blind them to certain problems. In my opinion, neoclassical economics has useful insights and appropriate applications, but it is not the only worthwhile framework out there, and economists’ toolkit is massively incomplete as long as they shy away from alternative economic theories, as well as relevant political and moral questions.
As Yanis Varoufakis noted, it is strange how remarkably resilient the neoclassical framework is in the presence of many coherent alternatives and a large number of empirical/logical problems. However, I actually think this is quite normal in science – after all, it is done by humans, not robots. Hopefully things will change eventually and economics will become more comprehensive/pluralistic, as I call for in the article.
It’s good to sum up my overall position, but I think I’ll probably lean more (though not entirely) towards positive approaches from now on, some of which I mention in the article. Though I strongly disagree with Jonathan Catalan that heterodox economists are “more often wrong than right”, I agree with his sentiment that it’s probably better to “sell [one's] ideas” than to endlessly repeat oneself about methodology and so forth. So maybe expect a shift from general criticisms of economics to more positive and targeted approaches!
PS Having said that, my next post definitely doesn’t fit this description.
What is the role of ideology in shaping how businesses go about their everyday operations?
Generally, economic theories of the firm – particularly at undergraduate level – imply that businesses have clear aims and a clear way to go about those aims. This might be basic profit maximisation; it could be growth; it could be market share. In some models it’s not quite as clear for the firm as a whole – the objectives of managers and shareholders can conflict, for example – but it is at least the case that each agent has clear objectives, subject to some constraints.
However, the real world is rarely so certain. While it is obvious that capitalist firms throughout history have the overarching aim of making money, the way to achieve this is not always clear, particularly if we are talking about long term strategies. For example, could it be that being a “socially responsible” firm will increase business from sympathetic customers? Or that higher wages, better working conditions and so forth, which seem costly, will actually increase employee productivity? The history of how firms have worked seems to suggest that firms as a whole – or capitalism, if you like – are susceptible to waves of ideology about the ‘right’ way to do business.
Consider the American School of Economics, which was the chief ideology and policy of the USA during its industrialisation period. This was a highly protectionist school, which focused on maintaining domestic competitiveness and employment. High wages, good education and healthcare for the workers were encouraged, both for humanitarian reasons and as a way to increase productivity and make business more profitable. It was not only required that government policies were set up in a certain way – tariffs, public services, employment rights – but also that these policies had popular support. Business generally shared in the idea that well paid employees would be more productive, something epitomised by Henry Ford’s famous doubling of his workers’ wages.
The result of this policy was a large, profitable domestic sector and consistent increases in real wages, allowing the USA to outperform the UK. This isn’t to romanticise the period: I’d have plenty to say about anti-labour violence and US foreign policy at the time (that is, if anyone were interested). However, the American School demonstrates how a certain way of thinking can permeate society and business as a whole, and massively affect how the economy functions. Can you imagine such policies working these days, when the popular mentality is so against them? Surely, firms would lobby against – or find ways around – attempts to reestablish such a system.
Another example is in Japan, where they had different ideas. The Japanese firm is a highly collective organisation, one which is loyal to its employees, and in turn has this loyalty reciprocated. Firms generally offer workers ‘lifetime employment’, coupled with numerous benefits such as insurance, pensions and promises of progression, based mostly on seniority. Achievements are shared collectively, and many companies even require employees to sing a ‘company song’. Getting a job at a major firm requires that one goes through a rigorous army-esque training program, and is a major lifetime achievement, to the extent that it is not uncommon for those who accomplish the feat (or don’t) to be reduced to tears. From a certain perspective, this approach might seem quite rigid and inflexible for both workers and firms, but it has certainly produced results: successful firms like Sony and Nintendo; low unemployment despite macroeconomic weakness; and security for a large share of the population, even with relatively low government spending.
There are numerous – indeed, surely countless – other ways to organise a firm based on a people’s worldview, national identity and so forth. Germany has its stakeholder model, where union leaders sit in on board meetings and have a say in how the company is run; in turn, however, they are willing to go against their immediate interests by holding wages down to maintain national competitiveness. In countries such as India, the nature of the workplace is intertwined with religious ritual, something firms must consider in how they run their businesses. The rise of worker-owned co-ops in Argentina and across the western world, with 48,000 in the US alone, indicates a growing number of people who share their own, democracy-based ideas about the best way to organise business and treat employees.
One implication of the ideology theory is that, contrary to the Reaganite idea that 1980s ‘neoliberal’ reforms simply unleashed business to its true calling, it could be that the decade just instilled in firms a certain mentality, one no more special than any other. This ideology was a more ruthless, ‘profit (shareholders) first’ mantra: firms merged, outsourced and became less tolerant of unions. While it is true that these things were accompanied and enabled by changes in the law and technology, the decade as a whole also seemed to put a lot of things, particularly mergers, in vogue: the evidence is quite consistent with the idea that mergers were mostly driven by hubris. Similarly, outsourcing has come under fire since it emerged that there are many hidden communication, management and transaction costs that were not at first realised, and hence it may not be as profitable as first thought. Is this uncertainty the mark of firms which have a clear aim and know how to go about it, or of firms which seem largely motivated by fads and unaware of the exact results of their actions?
One last example of how people’s perceptions can have a large influence on the economy may come from the UK. Here, the government’s recent policy of austerity has meant that public sector workers have faced massive cuts. Naturally, the government and press have justified this by appealing to the idea that there is a lot of excess waste in the public sector: pointless, lazy bureaucrats and so forth. Meanwhile, the private sector has failed to step up and fill the gap in employment. Interestingly, a survey provided some insight into why – aside from general macroeconomic weakness – this may be the case: 57% of private sector employers said they were not interested in former public sector employees because they were “not equipped” for the job, based simply on the fact that they were employed by the public sector. In other words, the general impression, fostered by the political class, that public sector workers are useless – false though it may be – has backfired by changing businesses’ impression of them, reducing hires.
In sum, it seems how businesses are run is substantially dependent on ideas, and hence can be a political choice. Cries that businesses should be more “socially responsible” may sometimes seem repetitive and empty, but history shows us that it is possible to manoeuvre the way businesses operate as a whole. Business’ ideology is also an interesting area of exploration for economic theory: instead of having businesses driven by maximising some goal, they could be driven by a certain set of principles (I expect there are some papers that deal with this, though perhaps not in the way I’d like). In any case, anyone trying to legitimise whatever way business happens to be behaving right now as ‘natural’ should take another look at the history of the firm.
There was a brief but interesting conversation on my post on the neutrality of money, between me and commenters Blue Aurora and Dinero, centering around the Real Bills Doctrine (RBD). I had not really looked into the RBD in too much depth before, but it seems like a natural ally of endogenous money (and MMT) theory, and it adds a lot of insights that, in my opinion, the Quantity Theory of Money (QToM) lacks.
The RBD comes in different flavours, but my reading of the modern version, popularised by Mike Sproul, is that RBD states the value of money is determined not by the amount of it in circulation (as in the QToM), but by the value of the asset the money is backed by. If a currency is tied to a gold standard, its value is determined by the convertibility rate of said currency to gold. If a currency is fiat, its value is determined by the assets of the bank that issued it. The value of a newly created loan is determined by the future goods and services generated by the borrower using said loan. Furthermore, RBD implies there is no real distinction to be made between various financial assets and money, as they are all claims on some real asset: the value of stocks, for example, changes with the value of a company, not when new stock is issued (though speculation obviously plays a role here).
Discerning which theory is ‘true’ can be difficult, as there is in some senses a large overlap between the RBD and QToM: in both cases, if the money supply expands without a corresponding increase in value/wealth, money loses its value through inflation. Despite this “observational equivalence” between the two theories, Thomas Cunningham has tried to test which one is true by seeing whether it is the ‘backing’ of money or its quantity that has the biggest impact on the price level, and he found that it was the former. Furthermore, the RBD implies central banks should passively provide money based on the economy’s needs, which is consistent with endogenous money theory and the failure of monetarism. Finally, the RBD is consistent with the idea that hyperinflation is not a monetary phenomenon, but is instead determined by loss of confidence in a nation’s assets and economy due to political instability.
Oddly, the RBD is consistent with a fairly mainstream economist’s story of monetary policy: Paul Krugman’s babysitting co-op. The members of the co-op exchanged vouchers worth one night’s babysitting, but found themselves in a quasi-recession as nobody was willing to part with their vouchers. The solution was to increase the money supply, but this didn’t result in any change in the ‘worth’ of the vouchers. Those who object that this story is an exception because the value of a voucher was ‘fixed’ should answer the question: by what, exactly? Social conventions, confidence? Because these things are true of a large amount of wealth in the real economy, too.
The RBD implies that many financial assets are speculative in nature, as their value depends on future flows of goods and services. Hence, if loans are issued for ‘speculation’, but with no expansion of goods or services, it will cause asset inflation. This wouldn’t have the ‘even’ impact of Friedman’s helicopter analogy, but would primarily take place through inflation of whatever was speculated on, such as houses. Hence, the RBD implies that the primary way of regulating inflation is not through monetary policy but through regulation and management of credit and the financial sector.
The Labour Theory of Value (LTV) is one of probably only a handful of economic theories, along with Francois Quesnay’s Tableau Economique, which have actually been completely abandoned over the past couple of centuries. So, in the interests of combating blind Whiggery, allow me to revive it (maybe I’ll do Quesnay another day, though here’s a sort-of modern version). I’m not going to argue the LTV is necessarily correct; I am merely interested in clearing up some common misconceptions.
My initial reaction to the LTV was the same as almost everyone’s: hostility. Why is there even a need for such a thing? Has it not been discredited on many fronts: logically, empirically, ethically? Are there not ways to preserve the important aspects of Marx, while at the same time ditching his dated and irrelevant theory of value? And yet, after a while, the hostility fades as you realise that:
- You should never be hostile to a theory based only on its name and what other people have said about it, and
- The theory, properly understood, is valid and highly illuminating, and explains many real world phenomena.
One common source of confusion with the LTV is the lack of appreciation that it only applies under capitalism, when goods are produced with wage labour, for the purpose of sale (this is what makes them ‘commodities’). For this reason it doesn’t apply to, say, artifacts; this blog post; or house work. This historical specificity can be a problem for economists, even heterodox ones, as they are generally wont to find principles which extend across different times and societies. This, for example, was the chief problem with Arun Bose’s critique of the LTV, which argued that no matter how far back you go, you will always have a commodity residue embedded in the value of a current commodity, and so labour is not the only source of value. Bose failed to consider that if you go back far enough, you will not have ‘commodities’ but simply naturally occurring objects, or objects not produced for sale. Only when labour was applied to these for the purpose of sale was ‘value’ created in the Marxist sense of the word.
In a nutshell, Marx’s theory goes as follows: under capitalism, the value of commodities is determined by the “socially necessary” amount of labour required to produce them (‘variable capital’), plus the current necessary cost of the capital used up in production (‘constant capital’). Fixed capital, such as machines, adds value at the same rate it depreciates, while raw commodities are used up completely and so add all of their value. Labour is generally paid less than the value it adds, and therefore is the sole source of profit.
Here’s a brief mathematical example: say an hour of labour adds £1 of value, and a certain type of chair requires 8 hours of labour (‘labour-time’), uses £2 worth of wood and depreciates a saw worth £10 by 1/10th (i.e. after the saw is used 10 times it will break). It follows that, according to the LTV, the value of this chair is:
(1/10)*£10 + (8*£1) + £2 = £11
The only way the capitalist can make a profit is to pay the labourer less than the value he creates (for the most part, Marx suggested wages were determined by a social subsistence level). So if the wage is, say, £0.50 an hour, the capitalist will make £4 of profit (8 hours at £0.50 of surplus per hour). Contrary to what many think, this does not imply that capital-intensive industries will have lower rates of profit, as the rate of profit will tend to equalise between industries, ‘sharing out’ the total surplus value produced in the economy.
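For what it’s worth, the chair example runs as a quick Python sanity check, using only the figures given above:

```python
# Figures from the chair example: £1 of value per labour-hour, 8 hours
# of labour, £2 of wood, a £10 saw that survives 10 uses, wage £0.50/hour.
VALUE_PER_HOUR = 1.00
HOURS = 8
WOOD = 2.00
SAW_PRICE, SAW_USES = 10.00, 10
WAGE_PER_HOUR = 0.50

constant_capital = SAW_PRICE / SAW_USES + WOOD   # saw depreciation + wood = £3
value_added = HOURS * VALUE_PER_HOUR             # value created by labour = £8
variable_capital = HOURS * WAGE_PER_HOUR         # what the labourer is paid = £4

chair_value = constant_capital + value_added     # £11, as in the text
surplus_value = value_added - variable_capital   # £4 profit for the capitalist

print(chair_value, surplus_value)  # 11.0 4.0
```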
The qualifiers of “necessary” costs and “socially necessary” labour are also important. It’s logically possible that a madman could purchase a saw for £100 for some reason, or that a lazy labourer could take 10 hours to make the chair, but this would do nothing to alter the resultant value of the chair. Marx was concerned with general rules, not specific cases, which could obviously fluctuate wildly as they are based on human behaviour.
The main implication of this theory is that, since capitalists tend to use labour saving technology to increase productivity, over time they use relatively more constant capital – which cannot be a source of surplus – and this drives down the rate of profit: the Tendency of the Rate of Profit to Fall (TRPF). Though the first capitalist who uses the technology will be able to sell at the market price, and thus gain, once the technology is widely adopted, the value of the commodity will decrease – a ‘fallacy of composition’. Again, this may not be true in particular industries at any one time, but it holds true across the economy as a whole. The result will be intermittent crises as capitalists face lower profits and try to increase them by pushing down wages, devaluing their constant capital, or through technological progress. Marx never predicted capitalism would collapse in on itself, though he did suggest that the working class would revolt as their wages were pushed down.
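The TRPF follows mechanically from these definitions, and a toy calculation (hypothetical numbers, purely for illustration) makes the tendency visible: write the rate of profit as r = s / (c + v), hold the rate of surplus value s/v fixed, and let mechanisation raise the organic composition c/v:

```python
# Rate of profit r = s / (c + v): surplus value over constant plus
# variable capital. All numbers are hypothetical, for illustration only.
def rate_of_profit(c, v, surplus_rate=1.0):
    s = surplus_rate * v   # surplus is extracted from living labour only
    return s / (c + v)

v = 100
for c_over_v in [1, 2, 4, 8]:   # organic composition rising over time
    r = rate_of_profit(c_over_v * v, v)
    print(c_over_v, round(r, 3))
# at c/v = 1, r = 0.5; by c/v = 8 it has fallen to roughly 0.111
```

Since only v yields surplus, any growth in c relative to v must drag r down, which is the whole argument in miniature.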
Does the subjective theory of value (STV) ‘refute’ the LTV?
The basic point that ‘value is in the eye of the beholder’ is often thought to be where the debate ends. This is surprising, because it has little to do with Marx’s main theoretical implications, which as I have noted, concerned economy-wide trends. Marx was well aware that price and value may differ wildly based on monopoly, demand and other fluctuations, but considered it irrelevant to his theory of value. He never claimed to be able to predict the day-to-day movements of prices, and purported attempts to use the LTV to do this are erroneous.
So while it may well be true that the tastes of rich people propel the price of diamonds upwards (whether this is due to the fact that they ‘subjectively value’ diamonds more highly than poor people do, or simply to the fact that they’re rich, is up for debate, but I digress), according to the LTV this imbalance between price and value must be offset somewhere else, by some other commodity selling below its value. The important thing for Marx was the aggregate equality of price and value*.
Neither were Marx or the classical economists unaware of the utility of commodities and the array of uses they might be put to: they called this the ‘use-value‘ of a commodity, though they did think this was socially determined rather than subjective. In classical economics, use-value is not quantifiable: a commodity either is a use-value or it isn’t, and there’s no definitive way to gauge the ‘value’ of sitting on a chair. This means it is not possible for use-value to translate into prices, as use-values are incommensurable between commodities.
The inherently intangible nature of use-value caused Marx and other classical economists to ask: what is it that makes commodities expressible by the same yardstick (money) under capitalism? The answer was that commodities had a twin expression of value: exchange-value. A commodity needed a use-value to have an exchange-value (i.e. it needed to be useful to be worth anything), but the two types of value were not equivalent or even on the same plane. The classical economists decided that exchange-value was determined not by utility, but by the labour required to make a commodity. This is because labour was the common element between all commodities: even capital ultimately reduced to labour, making it the prime candidate for the determination of exchange-value**.
In fact, the subjective theory of value is in many ways a fairly crude attempt to combine use-value and exchange-value, and doesn’t really offer anything new compared to the classical theory. The only way the quantitative expression of subjective valuation can be determined is either circular, ‘revealed’ by purchasing decisions ex post, or in the case of neoclassical economics, completely deterministic based on how a model is constructed. Therefore, as far as market prices are concerned, the theory doesn’t have any more predictive power than simply saying ‘use-value cannot be formalised’. Furthermore, though mutually beneficial trade is achieved through exchange of use-values, as far as exchange-values go trade is a zero-sum game: money exchanged must sum to zero. Neoclassical models of utility obscure this fact.
The ‘transformation problem’ and all that
Hopefully, I have cleared up some of the qualitative misconceptions surrounding the LTV. However, the critique which has probably done the most to ‘discredit’ Marx in academic circles is the idea that the maths simply doesn’t add up, usually based on ‘physicalist’ or Sraffian critiques (in fact, Paul Samuelson relied on this to steer his students away from Marx, despite rejecting Sraffian economics elsewhere). I will avoid the maths here and just try to get to the crux of the misconceptions, which is actually quite simple to do: once the basic methodological misinterpretations are highlighted, the purported complications simply disappear.***
In physicalist/Sraffian models, key variables such as prices and the rate of profit are all determined physically (generally by technology & distribution). The result is that the rates of value are superfluous and completely different to prices: when you try to ‘transform’ them, you run into problems. Yet this only really renders Marx logically inconsistent by interpreting him in a manner that…renders him logically inconsistent. For while physicalist models determine output and input prices simultaneously, Marx actually modeled them temporally, so they could easily differ. As the two prices depend on different transactions at different points in time, this is a fairly reasonable modelling tool by anyone’s standards, but it seems to have been lost in the wilderness of economic debate.
The root of this fundamental incompatibility is that the physicalist notion that key variables are determined simultaneously completely contradicts one of Marx’s premises: that value is determined by labour-time. Intuitively, this makes sense: in a simultaneist world that doesn’t really have time, how could the time spent labouring have any relevance? To show the inconsistency more rigorously, determination of value by labour-time implies that value will fall as productivity rises, which implies that the rate of profit will fall relatively in value terms compared to physical terms (more will be produced, but it will be worth less). Hence, the physical rate of profit – determined simultaneously by the parameters of the model (technology, distribution) – and the value rate of profit, determined by labour-time, differ, and physicalism is incompatible with Marxism.
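The claim that value falls as productivity rises can be illustrated with one more hypothetical calculation: if value is determined by labour-time, a fixed quantity of labour creates a fixed quantity of new value, however much physical output it yields, so unit values must fall as output per hour grows (which is why the value rate of profit lags the physical rate):

```python
# Hypothetical numbers: 100 labour-hours always create £100 of new
# value per period, no matter how many chairs they produce.
hours, value_per_hour = 100, 1.0

for output in [100, 200, 400]:            # productivity doubling each period
    total_value = hours * value_per_hour  # pinned at £100 by labour-time
    unit_value = total_value / output     # falls as productivity rises
    print(output, unit_value)
# 100 chairs at £1.00 each; 400 chairs at only £0.25 each
```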
Once we allow value to be determined by labour-time, and therefore allow output and input prices to differ, a myriad of supposed refutations of the LTV fail: the Okishio theorem; Ian Steedman’s Marx after Sraffa; V. K. Dmitriev’s labourless theory of value (unavailable online). The temporal method can also be combined with the ‘single system’ interpretation of the LTV, which suggests that instead of having two separate systems of price and value, as so many critics do, the two are linked: the necessary cost of inputs at the time of purchase is equal to the sum of the value transferred in production.
Thus, the most plausible mathematical interpretation of Marx’s LTV is the Temporal Single-System Interpretation, which I find a valid and illuminating way of modelling production. The basic elucidation of the theory, and the relationship between values and prices, simply becomes a lot less complicated, and the alleged ‘transformation problem‘ loses its venom as prices and values interact and adjust temporally.
There are a myriad of ways one can object to the LTV, but the idea that it is nonsensical and incoherent is simply based on misunderstandings. One may well disagree with the premise that labour is the source of value (I do, simply because I have no positive reason to believe it). One may also endorse alternative theories over the LTV. But, based on a clear understanding, there is no a priori reason not to develop a comprehensive understanding of Marx’s theory, and treat it in the same way one would treat any other theory in economics.
*It was these appeals to aggregates and totals, instead of the immediate behaviour of the system, that led Bohm-Bawerk to term Marx’s theory ‘tautological’. Yet this rests on Bohm-Bawerk’s own premise that money is neutral, which is controversial to say the least. In any case, Marx’s theory has clear, non-tautological implications, such as the TRPF.
**I find this explanation unsatisfying as labour as well as capital is heterogeneous. Marxists reply to this by arguing that labour under capitalism shares a common element: abstract labour, performed specifically to create commodities. This is partially convincing, but it doesn’t alter the main fact that different types of labour are highly disparate in nature.
***In this section I am drawing heavily on Andrew Kliman’s ‘Reclaiming Marx’s Capital: A Refutation of the Myth of Inconsistency’. It’s a great book: clear and concise, and a great example of a ‘remorseless logician’ at work. I am, however, undecided on whether he started with a mistake and ended up in bedlam.