Posts Tagged Cambridge Capital Controversies

Yes, The Cambridge Capital Controversies Matter

I rarely (never) post based solely on a quick thought or quote, but this just struck me as too good not to highlight. It’s from a book called ‘Capital as Power’ by Jonathan Nitzan and Shimshon Bichler, which challenges both the neoclassical and Marxian conceptions of capital, and is freely available online. The passage in question pertains to the way neoclassical economics has dealt with the problems highlighted during the well-documented Cambridge Capital Controversies:

The first and most common solution has been to gloss the problem over – or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists – teachers and students – blissfully unaware of the whole debacle.

A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.

The second point is something I had independently noticed: economists appeal to practicality when it suits the modeller, but insist that realism doesn’t matter elsewhere. If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational, etc. etc. If we do that, pretty soon the assumptions all fall away and not much is left.

However, it’s the authors’ third point that really hits home:

The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. The latter models try to describe – conceptually, that is – every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.

General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention. Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.

In essence, neoclassical economics dealt with its inability to model capital by… eschewing any analysis of capital. However, the theoretical importance of capital for understanding capitalism (duh) means that this has turned neoclassical ‘theory’ into a highly inadequate tool for doing what theory is supposed to do, which is to further our understanding.

Apparently, if you keep evading logical, methodological and empirical problems, it catches up with you! Who knew?


The DSGE Dance

Something about the way economists construct their models doesn’t sit right.

Economic models are often acknowledged to be unrealistic, and Friedmanite ‘assumptions don’t matter’ style arguments are used to justify this approach, so internal mechanics aren’t closely examined. However, when it suits them, economists are prepared to hold internal mechanics up to empirical verification – usually in order to preserve key properties and mathematical tractability. The result is that models are constructed in such a way that, instead of trying to explain how the economy works, they deliberately avoid both difficult empirical and difficult logical questions. This is particularly noticeable with the Dynamic Stochastic General Equilibrium (DSGE) models that are commonly employed in macroeconomics.

Here’s a brief overview of how DSGE models work: the economy is assumed to consist of various optimising agents: firms, households, a central bank and so forth. The behaviour of these agents is specified by a system of equations, which is then solved to give the time path of the economy: inflation, unemployment, growth and so forth. Agents usually have rational expectations, and goods markets tend to clear (supply equals demand), though various ‘frictions’ may get in the way of this. Each DSGE model will usually focus on one or two ‘frictions’ to try and isolate key causal links in the economy.
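
To make that structure concrete, here is a deliberately crude sketch in Python. Everything in it is a toy assumption on my part: the parameter values are invented, and expectations are naively backward-looking, where a real DSGE model would be solved for rational expectations (e.g. via the Blanchard-Kahn method). It is only meant to show the ‘system of equations in, time path out’ logic:

```python
import numpy as np

# Toy three-equation loop in the DSGE spirit. All parameters are made up,
# and expectations are naive (last period's value) purely for illustration;
# an actual DSGE model solves for rational expectations instead.
beta, sigma, kappa = 0.99, 1.0, 0.1   # discounting, IS curve, Phillips curve
phi_pi, phi_y = 1.5, 0.5              # Taylor-rule responses
T = 100
rng = np.random.default_rng(0)
shock = 0.01 * rng.standard_normal(T) # the 'stochastic' part: demand shocks
y, pi, i = np.zeros(T), np.zeros(T), np.zeros(T)

for t in range(1, T):
    e_y, e_pi = y[t-1], pi[t-1]                    # naive expectations
    i[t] = phi_pi * pi[t-1] + phi_y * y[t-1]       # central bank rule
    y[t] = e_y - (i[t] - e_pi) / sigma + shock[t]  # IS curve
    pi[t] = beta * e_pi + kappa * y[t]             # Phillips curve

print("output gap, first periods:", np.round(y[:5], 4))
```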

Let me also say that I am approaching this issue tentatively, as I in no way claim to have an in-depth understanding of the mathematics used in DSGE models. But then, this isn’t really the issue: if somebody objects to utility as a concept, they don’t need to be able to solve a consumer optimisation problem; if someone objects to the idea that technology shocks cause recessions, they don’t need to be able to solve an RBC model. To use a tired analogy, I know nothing of the maths of epicycles, but I know they are an inaccurate description of planetary motion. While there is every possibility I’m wrong about the DSGE approach, that possibility doesn’t rest on the mathematics.

Perverse properties?

DSGE has been around for a while, and along the way several ‘conundrums’ or inconsistencies have been discovered that could potentially undermine the approach. There are two main examples, both with similar implications: the possibility of multiple equilibria, and therefore indeterminacy. I’ll go over them briefly, although I won’t get into the details.

The first example is the Sonnenschein-Mantel-Debreu (SMD) theorem. Broadly speaking, this states that although we can derive strictly downward sloping demand curves from individually optimising agents, once we aggregate up to the whole economy, the interactions between agents and the resultant emergent properties mean that demand curves could have almost any shape. This creates the possibility of multiple equilibria, so logically the system could end up in any number of places. The SMD result is sometimes known as the ‘anything goes’ theorem, as it implies that an economy in general equilibrium could potentially exhibit all sorts of behaviour.
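
The cleanest way I know to see this is the well-known multiple-equilibria example from Mas-Colell, Whinston and Green’s Microeconomic Theory: two entirely well-behaved consumers whose aggregate excess demand nonetheless crosses zero three times. A quick numerical check (their utilities and endowments; my grid search):

```python
import numpy as np

# Two goods, two consumers, u1(x1, y1) = x1 - y1**(-8)/8 and
# u2(x2, y2) = -x2**(-8)/8 + y2, with endowments e1 = (2, w), e2 = (w, 2).
# Each consumer is individually well-behaved, yet aggregate excess demand
# for good 2 has three zeros.
w = 2 ** (8 / 9) - 2 ** (1 / 9)   # endowment chosen to produce 3 equilibria

def excess_demand_good2(p):       # price of good 1 normalised to 1
    y1 = p ** (-1 / 9)            # consumer 1's first-order condition
    x2 = p ** (1 / 9)             # consumer 2's first-order condition
    y2 = (w + 2 * p - x2) / p     # consumer 2's budget constraint
    return y1 + y2 - (w + 2)      # total demand minus total endowment

prices = np.linspace(0.3, 3.0, 2000)
z = np.array([excess_demand_good2(p) for p in prices])
roots = prices[np.nonzero(np.diff(np.sign(z)))]
print("equilibria near p =", np.round(roots, 2))   # -> roughly 0.5, 1 and 2
```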

The second example is capital reswitching, the possibility of which was demonstrated by Piero Sraffa in his magnum opus, Production of Commodities by Means of Commodities. The basic lesson is that the value of capital changes as the distribution (between profits and wages) changes, which means that one method of production can be the most profitable at both low and high rates of interest, while another is more profitable in between. This is in contrast to the neoclassical approach, which suggests that the capital invested will increase (decrease) as the interest rate decreases (increases). The result is a non-linear relationship, and therefore the possibility of multiple equilibria.

That these issues could potentially cause problems is well known, but economists don’t see it as a problem. Here is an anonymous quote on the matter:

We’ve known for a long time one can construct GE models with perverse properties, but the logical possibility speaks nothing about empirical relevance. All these criticisms prove is that we cannot guarantee some properties hold a priori – but that’s not what we claim anyway, since we’re real economists, not austrian charlatans. Chanting that sole logical possibility of counterexamples by itself destroys large portions of economic theory is just idiotic.

As it happens, I agree: based on available evidence, neither reswitching nor the SMD theorem appears to be empirically relevant. For everyday goods, it is reasonable to suppose that demand will rise as price falls, and vice versa. Firms also rarely switch their techniques in the real world (though reswitching isn’t the main takeaway of the capital debates). So the perspective expressed above seems reasonable – that is, until we stop and consider the nature of DSGE models as a whole.

For the fact is that DSGE models themselves are not “empirically relevant”. They assume that agents are optimising, that markets tend to clear, and that the economy follows an equilibrium time path. They use ‘log linearisation’, a method which doesn’t even pretend to do anything other than make the equations easier to solve, by forcibly eliminating the possibility of multiple equilibria. On top of this, they generally display poor empirical corroboration. Overall, the DSGE approach is structured toward preserving the use of microfoundations, while at the same time invoking various – often unrealistic – processes in order to generate something resembling dynamic behaviour.
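
To see what log-linearisation does, take the standard consumption Euler equation as an example (a textbook illustration, not any particular model). In levels it is non-linear; in log deviations from the steady state (hatted variables) it becomes exactly linear:

$$ 1 = \beta\,\mathbb{E}_t\!\left[\left(\frac{C_t}{C_{t+1}}\right)^{\sigma}(1+r_{t+1})\right] \quad\Longrightarrow\quad \hat{c}_t = \mathbb{E}_t\hat{c}_{t+1} - \frac{1}{\sigma}\,\mathbb{E}_t\hat{r}_{t+1}. $$

Whatever curvature might have generated multiple equilibria is discarded by construction: the model is only ever solved in a linear neighbourhood of a single steady state.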

Economists tacitly acknowledge this, as they will usually say that they use this type of model to highlight one or two key mechanics, rather than to attempt to build a comprehensive model of the economy. Ask an economist whether people really maximise utility, whether the economy is in equilibrium, whether markets clear, and they will likely answer “no, but it’s a simplification, designed to highlight problem x”. Yet when questioned about some of the more surreal logical consequences of all of these ‘simplifications’, economists will appeal to the real world. This is not a coherent perspective.

Some methodology

Neoclassical economics uses an ‘axiomatic-deductive‘ approach, attempting to logically deduce theories from basic axioms about individual choice under scarcity. Economists have a stock of reasons for doing this: it is ‘rigorous’; it bases models on policy-invariant parameters; it incorporates the fact that the economy ultimately consists of agents consciously making decisions; and so on. If you were to suggest internal mechanics based on simple empirical observations, conventional macroeconomists would likely reject your approach.

Modern DSGE models are constructed using these types of axioms, in such a way that they avoid logical conundrums like the SMD conditions and reswitching. This allows macroeconomists to draw clear mathematical implications from their models, while the assumptions are justified on the grounds of empiricism: crazily shaped demand curves and technique switching are not often observed, so we’ll leave them out. Yet the model as a whole has very little to do with empiricism, and economists rarely claim otherwise. What we end up with is a clearly unrealistic model, constructed not in the name of empirical relevance or logical consistency, but in the name of preserving key conclusions and mathematical tractability. How exactly can we say this type of modelling informs us about how the economy works? This selective methodology has all the marks of one of Imre Lakatos’s degenerating research programmes.

A consequence of this methodological ‘dance’ is that it can be difficult to draw conclusions about which DSGE models are potentially sound. One example of this came from the blogosphere, via Noah Smith. Though Noah has previously criticised DSGE models, he recently noted – approvingly – that there exists a DSGE model that is quite consistent with the behaviour of key economic variables during the financial crisis. This increased my respect for DSGE somewhat, but my immediate conclusion still wasn’t “great! That model is my new mainstay”. After all, so many DSGE models exist that it’s highly probable that some simplistic curve fitting would make one seem plausible. Instead, I was concerned with what’s going on under the bonnet of the model – is it representative of the actual behaviour of the economy?

Sadly, the answer is no. Said DSGE model includes many unrealistic mechanics: most of the key behaviour appears to be driven by exogenous ‘shocks’ to risk, investment, productivity and so forth, without any explanation of where these come from. This includes the oft-mocked ‘Calvo fairy’, which mimics sticky prices by assigning each firm a fixed probability of being able to change its price in any given period. Presumably, this behaviour is justified on the grounds that all models are unrealistic in one way or another. But if we have constructed the model to avoid key problems – such as SMD and reswitching, or by log-linearising it – on the grounds that those problems are unrealistic, how can we justify using something as blatantly unrealistic as the Calvo fairy? Either we shed a harsh light on all internal mechanics, or on none.
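
For anyone who hasn’t met the fairy, the mechanism is trivially easy to write down. A minimal simulation (all parameter values invented for illustration):

```python
import numpy as np

# Minimal Calvo-pricing simulation. Each period every firm is allowed to
# reset its price with probability 1 - theta, regardless of how far its
# current price is from the optimum. theta and the drifting target price
# are invented for illustration.
rng = np.random.default_rng(1)
theta, n_firms, T = 0.75, 10_000, 40
target = 0.01 * np.arange(1, T + 1)   # steadily rising optimal price
prices = np.zeros(n_firms)

for t in range(T):
    chosen = rng.random(n_firms) < 1 - theta  # the 'fairy' picks these firms
    prices[chosen] = target[t]                # they jump straight to target

print(f"mean spell between resets: {1 / (1 - theta):.0f} periods")
print(f"average price gap at T: {target[-1] - prices.mean():.4f}")
```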

Hence, even though the shoe superficially fits this DSGE model, I know that I’d be incredibly reluctant to use it if I were working at a central bank. This is one of the reasons why I think Steve Keen’s model – which Noah Smith has chastised – is superior: it may not exhibit behaviour that closely mirrors the path of the global economy from 2008-12, but it exhibits similar volatility, and the internal mechanics match up far better than those of many (every?) neoclassical model. It seems to me that understanding key indicators and causal mechanisms is a far more modest, and credible, claim than being able to predict the quarter-by-quarter movement of GDP. Again, if I were ‘in charge’, I’d take the basic Keensian lesson – that private debt is key to understanding crises – over DSGE any day.

I am aware that DSGE and macro are only a small part of economics, and many economists agree that DSGE – at least in its current form – is yielding no fruit (although these same economists may still be hostile to outside criticism). Nevertheless, I wonder whether this problem extends to other areas of economics, as economists can sometimes seem less concerned with explaining economic phenomena than with utilising their preferred approach. I believe internal mechanics are important, and if economists agree, they should expose every aspect of their theories to empirical verification, rather than merely those areas which protect their core conclusions.


Debunking Economics, Part XVII: Response to Criticisms (1/2)

Naturally, mainstream economists have been critical of Steve Keen’s Debunking Economics. I will do a brief series within a series to try and respond to some of these criticisms. In this part, I will respond to some of the main critiques of neoclassical theory that have generated controversy: demand curves, supply curves and the Cambridge Capital Controversies. In the next post, I will respond to criticisms of Keen’s own models and his take on the LTV, as well as anything else that has attracted criticism.

Note that this post will assume prior knowledge of Keen’s arguments, so if you haven’t yet read my summaries above (or better still, Keen’s book), then do it now.

Demand Curves

It seems there are some problems in this chapter: Keen mixes up some concepts and misquotes Mas-Colell. Having said that, he is broadly right. This is frustrating for someone on his ‘side’, because it means mainstream economists can dismiss him when they shouldn’t.

Keen presents a quote from Mas-Colell, who assumes a benevolent dictator redistributes income prior to trade, and asserts that this assumption serves to ensure market demand curves have the same properties as individual ones. In fact, Mas-Colell is using this assumption to ensure that a welfare function, not a price relationship, will be satisfied. It remains true that a PhD textbook still assumes a benevolent dictator redistributes resources prior to trade, and subsequent economists have also used this assumption, which is not a great indicator of the state of economics. However, it was not an assumption used to overcome the Sonnenschein-Mantel-Debreu conditions.

More importantly (wonkish paragraph), it seems Keen lost some nuance in the translation of his critique into layman’s terms. He spends a lot of time talking about the Gorman polar form. This concerns the existence of a representative consumer for a set of indirect utility functions (‘indirect’ because utility is calculated from prices and wealth rather than from the quantities of goods consumed), but Keen makes out that it is about the aggregation of preferences required for demand curves. Gorman is in many ways similar, but not relevant, to the discussion of the aggregation of demand curves. Keen also argues that consumers having identical preferences is the same as them being one consumer, but this needn’t be the case: just because you and I have the same preferences doesn’t make us the same person.
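
For reference, the Gorman result Keen is gesturing at: aggregate demand behaves as if it came from a single consumer only when every individual’s indirect utility function takes the Gorman polar form

$$ v_i(p, w_i) = a_i(p) + b(p)\,w_i, $$

with the same $b(p)$ for everyone, so that all wealth expansion paths are parallel straight lines and redistributing wealth leaves aggregate demand unchanged. This is the standard statement, and it concerns the existence of a representative consumer – which is exactly why it is adjacent to, but not the same as, the SMD question about the shape of market demand curves.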

Despite this, the competing wealth and substitution effects do create the conditions described by Keen. However, they only apply under general equilibrium – under which wealth effects are present – and not partial equilibrium – under which they are assumed away. Keen does not distinguish between the two.

In summary, Keen is correct that neoclassical economists could not rigorously ‘prove’ the existence of downward sloping market demand curves. Keen himself says that it is reasonable to assume that demand will fall as price rises, and classical economists were also content with this as an observed empirical reality. Neoclassical economists themselves ended up having to defer to empirical reality when faced with the SMD conundrum, and thus they gained no insight beyond the classical economists, except to prove that their preferred technique – reductionism – does not work. For this reason, I interpret the SMD conditions primarily as a demonstration of the limits of reductionism (though some fellow heterodox economists might disagree).

Supply Curves

The proposition here is pretty simple: a participant in perfect competition will have a tiny effect on price. This is small enough to ignore at the level of the individual firm, which is the neoclassical economists’ main defence. However, they ignore that, as Keen says, the difference is both “subtle and the size of an elephant.” Once you aggregate a group of infinitesimally small firms, each making an incredibly small deviation from true profit maximisation, you get a result that is far away from the one given by the neoclassical formula. Result? We must know the nature of the MC, MR and demand curves to know both price and quantity, just as with a monopoly. The neoclassical theory – at this level – gives no reason to prefer perfect competition to a monopoly, and a supply curve cannot be derived. From what I’ve seen, the critics ignore the effect of adding up the tiny mistakes, instead focusing on how tiny they are at the individual level.
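
The aggregation point can be put in one line of algebra (my notation, not Keen’s exact presentation). Writing industry output as $Q = q_1 + \dots + q_n$, firm i’s true marginal profit is

$$ \frac{\partial \pi_i}{\partial q_i} = P(Q) + q_i P'(Q) - MC(q_i). $$

The price-taking assumption drops the middle term, which is of order $1/n$ when $q_i \approx Q/n$ and therefore looks harmless for any one firm. Summed across all $n$ firms, however, the dropped terms total $QP'(Q)$ – which does not shrink as $n$ grows, and is exactly the wedge between the competitive and the monopoly first-order conditions.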

Economists have some other defences, but I interpret them as own goals. For example, there is the argument that, under perfect competition, firms are price takers by assumption. They cannot have any effect on price, by assumption. But this basically amounts to assuming the price is set by an exogenous central authority, which is odd for a model of perfect competition.

Another argument is that setting MC=MR is itself an assumption. This is a strange path to take for a theory that prides itself on internal consistency and profit maximisation: it acknowledges that MC=MR will not quite maximise profits, so it amounts to assuming that firms are not profit maximisers. There is also the similar argument that firms don’t take Keen’s problems into consideration in real life, so they don’t matter. This is a huge own goal, given that most textbooks argue that it doesn’t matter what firms do in real life. I’m quite happy to acknowledge that it does matter how they actually price – but that would involve abandoning the marginalist theory of the firm and using cost-plus pricing.

So, now that we have all finished discussing how many angels can dance on a pinhead (turns out it was slightly fewer than economists thought), let’s just start using more realistic theories of the firm and forget the mess that is marginalism.

Cambridge Capital Controversies

There are swathes of literature on this and I cannot hope to explore them all. The main thing I have noticed, and want to discuss, is that economists only seem to focus on capital reswitching when discussing this, and defer to empirical evidence to suggest it is negligible. I have a few problems with this:

(1) The empirical evidence is mixed, and some of it suggests reswitching is more common than economists would like to think. Furthermore, reswitching is incredibly hard to observe, and therefore cannot be dismissed so easily.

(2) Most importantly, the Capital Controversies were not just, or primarily, about reswitching. Sraffa showed a number of things: demand and supply are not an adequate explanation for static resource allocation; the distribution between wages, profits and other returns must be known before prices can be calculated; factors of production cannot be said to be rewarded according to ‘marginal product’. For me these are more important, and are applicable to many models used today, such as Cobb-Douglas and other production functions, and the Solow Growth model.

In all three of the examples I have discussed, economists have tried to defer to empirical evidence to dismiss the problems with their causal mechanics. But generally economists do not regard empirical evidence about causal mechanics as important (the primary example being the theory of the firm), instead insisting on rigorous logical consistency. Surely, in order to be completely logically consistent, economists should at least be willing to experiment with the potential effects of SMD and reswitching in general equilibrium models and see what happens? Robert Vienneau has various discussions of this.

The common thread here is that economists seem incredibly adept at assuming their conclusions. Of course, you can get around any critique with an appropriate assumption, but as I’ve discussed, theories are only as good as their assumptions, and assumptions should not be used simply to protect core beliefs and arrive at palatable conclusions. Having said that, Keen’s book isn’t perfect (which is to be expected if you try to take on every aspect of economics in one book), and there are worthwhile criticisms out there. Nevertheless, Keen’s critique as a whole remains intact, and leaves very little of what is taught on economics courses standing.

P.S. Feel free to use the comments space to discuss any critiques of areas I have not covered/said I will cover.


On Production, Capital and Aggregation

I have never thought of the macroeconomic production function as a rigorously justifiable concept. … It is either an illuminating parable, or else a mere device for handling data, to be used so long as it gives good empirical results, and to be abandoned as soon as it doesn’t, or as soon as something else better comes along.

– Robert Solow

When speaking about production and output, economists generally refer to ‘factors of production’: things that are put into the production process to produce something else. Most of the time, they use the two factors ‘capital’ and ‘labour’. These are the firm’s presumed inputs in theories of the firm and supply curves, where the firm takes their values and, after some mathematical manipulation, produces a certain amount of output. They are also used in a macroeconomic model known as a ‘production function’, which does something similar for the entire economy. There are various production functions that use different maths and include other variables such as technology or productivity – the most famous is known as Cobb-Douglas.
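
For reference, the Cobb-Douglas function takes the form

$$ Y = A K^{\alpha} L^{1-\alpha}, \qquad \frac{\partial Y}{\partial K} = \alpha \frac{Y}{K}, \qquad \frac{\partial Y}{\partial L} = (1-\alpha)\frac{Y}{L}, $$

where $A$ is a ‘technology’ term and the two marginal products are what supposedly pin down the profit rate and the wage. Keep an eye on the units of $K$ – they are about to become the problem.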

The problem with this form of estimation is that it has long been known to be logically questionable. Anyone who has taken a science class past a basic level will know that checking your units – that they are consistent and balance out on both sides of the equation – is emphasised repeatedly. But this seems to be thrown out of the window in the basic analysis of production functions and firm behaviour.

The analysis of production takes two physical inputs – most likely capital and labour. Generally, the inputs are assumed to be clay-like: available in infinitely small quantities. The inputs are combined (as far as I can tell, this means flung together inside a black box) to produce a physical output of some other good, which is of course also infinitely divisible and clay-like. Labour is measured in hours of work; capital in terms of money. This is where the problems start.

The Cambridge Capital Controversies revealed many problems with using a monetary value to measure capital equipment, certainly within a theory of distribution. However, there is another, far simpler and perhaps more fundamental objection: by definition, we are supposed to be measuring physical units of input, so it is simply not coherent to measure them in terms of cost. If we were to opt for measuring in terms of cost as a rule, then what would be the justification for not lumping labour in with capital, and just having a single input, perhaps labelled ‘stuff’? Whatever answer you give is also a justification for not measuring capital in terms of cost.

If we decide to use physical inputs, it seems there are ways around the problem. Instead of labelling one input ‘capital,’ we could consider a certain type of capital good – say, shovels with which to equip some ditch-digging labourers. It is fair to assume these are roughly the same and so we can add them up. However, this method lays bare problems that the blanket term ‘capital’ previously obscured.

First, we clearly need more than just people and shovels to dig a ditch. We might need wheelbarrows, land, a skip, sustenance for the labourers, transport for labourers, perhaps a supervisor – in fact, there is potentially an incredibly large amount of factors of production, something I’ve noted before. It becomes computationally difficult or even impossible to include everything that contributes to production, and some factors will simply be immeasurable.

Second, it is clear that these objects are not perfectly divisible. In the examples of ‘capital’ and ‘labour’, we could divide both money and labour time into infinitely small units. But once we allow for production being ‘lumpy’, functions are no longer smooth and differentiable, and as such marginal productivities simply do not make sense.* Furthermore, this undermines the idea of an elasticity of substitution – the rate at which you can substitute one input for the other – since taking away a ‘lump’ will simply make output fall to zero (this is also something I’ve touched on before).
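
To put the lumpiness point in symbols: suppose ditch-digging genuinely requires matched pairs of shovels and diggers, available only in whole units, so that

$$ Y = \min(K, L), \qquad K, L \in \{0, 1, 2, \dots\}. $$

Output is then a step function of either input: removing one shovel from a matched team destroys a whole unit of output rather than an infinitesimal sliver, so $\partial Y / \partial K$ is undefined and there is no smooth margin along which labour can be substituted for shovels.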

Economists will likely have various rebuttals to this style of thinking. The most common will be that Cobb-Douglas and various theories of the firm make good, testable predictions. But actually their predictions leave a lot to be desired – firms do not behave how economists predict, and the Cobb-Douglas production function has poor empirical results (economists generally refer to the initial estimations made by the creators of the model, but things have changed since then).

The other defence will be similar but not quite the same: it is just a simplification, used to illuminate a particular aspect of a problem. Well, the fact is that making counterfactual assumptions about the nature of a system does not illuminate anything; it simply tells us about a different universe. Furthermore, these simplifications are not even internally consistent: even within the logic of ‘labour’ and ‘capital’, it has been shown repeatedly that the conditions under which either of them can be aggregated are incredibly stringent. Similar arguments apply to other aggregate parameters used by economists, such as aggregate measures of technology or productivity.

Simple macroeconomic production functions smack of trying to turn macro into ‘applied microeconomics.‘ But it has repeatedly been shown that aggregation problems will always be present, and that it is better to study emergent phenomena than to try to extrapolate microeconomic parameters until they have no real meaning. At the other end, microeconomic production theory is just an attempt to reduce everything to ‘rigorously’ derived, smoothly differentiable intersecting lines, rather than accepting empirical realities about firms and micro behaviour – opening up the firm to see what happens inside, instead of treating it as a black box.

Overall, it seems the whole idea of production functions and factors of production as anything other than vague, qualitative concepts is something of a dead end.

*I similarly expect that, once we allow that preferences may be lumpy, utility functions are no longer smooth. But lumpy preferences are a topic for another time.


Why Prefer Preferences?

Nick Rowe offers a summary of the Cambridge Capital Controversies that, though it is tongue in cheek and should not be taken too seriously, substantively leaves a lot to be desired. He states that the debate started because “some economists in Cambridge UK wanted to explain prices without talking about preferences.” This is false – the debate started because Joan Robinson and Piero Sraffa took issue with a production function that used an aggregate capital stock k, measured in £, with a marginal productivity. However, despite the faulty summary of the controversies, and to Rowe’s credit, some good discussion followed in the comments.

Sraffa built up an entire model just to critique neoclassical theory. It followed neoclassical logic, but replaced the popular measure of capital with a more consistent one: summing up the labour required to produce it, and the profit made from it. His model of capitalism started with simplistic assumptions, but increased in complexity. Within the confines of his own model, he showed several things: the distribution between wages and profits must be known before prices can be calculated; demand and supply are not an adequate explanation of prices, and the rate of interest can have non-linear effects on the nature of production. I cover this in more detail here.

Rowe’s primary criticism of Sraffa is that his model did not use preferences, a criticism also made by others. But eliminating preferences is a negligibility assumption: we ignore some element of the system we are studying, in the hope either that we can add it in later, or that it is empirically negligible. As Matias Vernengo notes in the comments, Sraffa was deliberately trying to escape the subjective utility base of neoclassical economics in favour of the classical tradition of social and institutional norms, so he took preferences as given. This is just a ceteris paribus assumption, which economists usually love! In any case, it turns out that preferences can be added to a Sraffian model with many of the key insights still remaining. Indeed, Vienneau’s model (and, apparently, the work of Ian Steedman, with which I am unfamiliar) invokes utility maximisation and comes to many of the same Sraffian conclusions about demand-supply explanations being unjustified.

Rowe also criticises Sraffa’s approach because it puts production first, ahead of the consumer sovereignty upon which neoclassical economics is built. But should preferences provide the explanation of decisions? It appears Rowe does not take seriously the ‘chicken and egg’ problem with neoclassical models – surely production must occur first, yet models such as Arrow-Debreu take prices as given for firms, before anything is made.

In a modern capitalist economy, it seems illogical to say that the demand for a particular good comes first, and that supply then follows as firms passively try to accommodate it. If this were true, advertising wouldn’t exist, or would be incredibly limited. It is fair to say that, independently, people have a ‘preference’ (though I’d say instinct) for food, shelter, clothing, security and other creature comforts. However, demand for most goods and services beyond this is certainly generated by advertising, marketing and other exogenous factors – indeed, advertising and marketing constitute one of the two primary expansion constraints experienced by real-world firms (the other is financing, which, incidentally, neoclassical models often assume away too, but I digress).

An alternative way to model human behaviour would be an institutional/social norm perspective: while people instinctively want to subsist, what exactly they choose to subsist on is in large part dependent on their surroundings. There is the example of tea consumption in Britain, which started as a luxury and took decades to filter down to the lower classes. Similarly, if I had been born in India, I would probably have more of a taste for spicy foods. It’s hard to deny these things are largely dependent on social surroundings, rather than individualistic consumer preferences. Similarly, Rowe’s focus on the time-preference explanation of the interest rate seems to ignore that this will be largely dependent on institutional factors such as the state of the economy.

From an individual perspective, perhaps Maslow’s hierarchy of needs is a useful way of understanding purchasing decisions: after people have obtained basic needs such as food and security, the things they buy are to do with identity and emotion. Don’t believe me? These concepts are exactly what firms use to try to expand their market base (for a longer treatment, see Adam Curtis’ documentary). If people don’t buy products because firms associate them with ‘self-actualisation’, then firms are systemically irrational.

Overall, I don’t think there are any cases in which we can evaluate individuals’ preferences outside a social and institutional context. Sraffa considers the economy as a whole, and leaves subsequent questions about consumers to be answered later – which they have been. Conversely, putting preferences first and having firms passively accommodate demand runs into several logical problems, and does not fit with what we know about both firms and people in the real world.


Debunking Economics, Part V: The Holy War Over Capital

There are probably few criticisms of neoclassical economics that have been both so universally acknowledged to be valid, and yet so completely ignored, as the Cambridge Capital Controversy (CCC). Chapter 7 of Steve Keen’s Debunking Economics provides an overview of this debate about the nature of capital.

Basic economic analysis teaches that capital, like other factors of production, is paid in proportion to its productivity – the so-called ‘Marginal Product of Capital’ (MPC), which is presumed to be equal to the rate of profit. Keen gives two good criticisms before he delves fully into the CCC:

First, the MPC assumes that other factor inputs are fixed when capital is employed, which leads to our first problem: since capital is (rightly) assumed to be the least variable input, any time period in which you can employ more capital is surely one in which you can employ more labour, too? Once again we are forced to face the reality that firms tend to vary all inputs employed at once.

Second, in an industry as broadly defined as ‘the capital market’ we run into familiar ceteris paribus problems, where varying inputs will have effects on wages and the existing capital stock that alter the rate of profit. For small and medium-sized firms these effects will be negligible, but when analysing the biggest firms and entire industries, the feedback between them will create collateral effects that undermine partial equilibrium methodology.

However, even ignoring these criticisms, there are serious issues with the neoclassical treatment of capital.

Capital is often measured in ‘units’ of some sort. There are obvious problems with this: capital includes brooms, blast furnaces, buckets, string and potentially any commodity you care to think of, so a single unit of measurement is difficult to justify. Generally, economists either leave capital in undefined units or measure it by price. The former treatment does not deserve to be criticised formally – something that poorly defined is, like utility, Not Even Wrong. As for the latter, Keen notes that there is an “obvious circularity” to the definition: the value of capital is based on the expected profit from it, which is partly based on the price of capital. Thus the use of price as a unit of measurement is not particularly enlightening.
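
The circularity can be stated in one line. The value of a capital good is the discounted stream of profits expected from it,

$$ V = \sum_{t=1}^{\infty} \frac{\mathbb{E}[\pi_t]}{(1+r)^t}, $$

yet the profit rate $r$ is itself supposed to be determined by the marginal product of a capital stock whose size is measured by $V$. You need $r$ to value capital, and the value of capital to derive $r$.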

Piero Sraffa’s Devastating Critique of the Neoclassical Treatment of Capital

As always, Piero Sraffa offered the most fully fleshed out and devastating critique of the neoclassical theory.

Sraffa proposed that, instead of treating one factor of production as a mysterious substance called ‘capital,’ we instead supposed that goods produce other goods, when combined with labour (hence the title of his Magnum Opus, Production of Commodities by Means of Commodities). He rigorously derived an internally consistent model with the sole aim of invalidating neoclassical economics on its own terms. There is some debate about the empirical applicability of his conclusions, but logic is sufficient to invalidate the neoclassical theories, which are based on the same premises.

Sraffa builds up a complex model step by step, starting simple. In the first statement of the model, there are a few firms whose only inputs are the goods produced by other firms and themselves. So firm A needs certain amounts of commodities x, y and z to produce commodity x, whilst firm B needs a different combination to produce commodity y, and firm C a different combination again to produce commodity z. Each firm produces just enough of its respective commodity for economy-wide production to continue at the same level in the next period. Sraffa’s next step is to alter the model so that each firm produces more than it needs to continue production – a surplus, or profit.

The first conclusion he comes to is that the relative prices of the commodities, and the rate of profit, are based not on supply and demand but on ‘the conditions of production’ – the amounts of inputs required to keep a firm or industry going.
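
This first step is easy to verify computationally. Below is a toy version in Python – the input coefficients are invented, and labour is left aside, as in Sraffa’s opening chapters. With a uniform profit rate, prices must satisfy p = (1 + r)Ap, an eigenvalue problem, so the profit rate and relative prices fall out of the technology alone:

```python
import numpy as np

# Toy Sraffa-style price system, no labour yet: A[i, j] is the amount of
# commodity j used up to produce one unit of commodity i (coefficients
# invented). With a uniform profit rate r, prices satisfy p = (1+r) A p,
# so 1 + r = 1 / (dominant eigenvalue of A) and p is the Perron eigenvector:
# relative prices come from the conditions of production, not from demand.
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.1, 0.3],
              [0.2, 0.2, 0.2]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)          # Perron root of a positive matrix
r = 1 / eigvals[k].real - 1
p = np.abs(eigvecs[:, k].real)       # prices, defined up to a numeraire
print(f"uniform profit rate: r = {r:.3f}")
print("relative prices:", np.round(p / p[0], 3))
```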

Sraffa then explicitly incorporates labour into his model. He notes that wages are obviously an inverse function of profits: the higher wages are, the lower profits will have to be, and vice versa. He then proposes a new method of measuring capital: treat it as the dated value of the labour required to produce it (wages), plus the profit made on it since it was produced, plus the value of the commodity that was combined with the labour to produce it. This ‘residual commodity’ can then itself be further reduced to dated labour times profit, plus another commodity, and so forth:

commodity a = ((labour input at time x)*((1+rate of profit)^(time periods since time x))) + commodity b

As Sraffa himself points out, there will always be a residual commodity left over if you break down a commodity into the labour and commodity required to create it. However, as you do this again and again, the residual term becomes smaller and smaller until it can be neglected. This type of reasoning is far more scientific than the neoclassical approach, and actually closely resembles the perturbation methods used by mathematicians and engineers, where a function is split into an infinite number of terms of decreasing size, of which only the first few are used in calculations.
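
Written out, the reduction expresses the price of commodity a as a series of dated labour terms:

$$ p_a = w l_0 + w l_1 (1+r) + w l_2 (1+r)^2 + \dots = w \sum_{t=0}^{\infty} l_t (1+r)^t, $$

where $l_t$ is the labour applied $t$ periods before the commodity emerges, $w$ is the wage and $r$ the rate of profit. The residual commodity corresponds to the neglected tail of the series, which shrinks towards zero as the reduction is pushed further back (so long as $r$ is below its maximum value).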

In the equation above, there are two competing effects: profits and wages. As one rises, the other must fall. It is easy to see from this equation that there is a peak value for capital somewhere in the middle; either side of this, the fall in one term outweighs the rise in the other, and the measured value of capital decreases.

This creates an interesting phenomenon known as capital reswitching. Consider two production techniques, A and B, which involve inputting different amounts of labour at different times – a common example is creating wine through ageing it (A) or through a chemical process (B). A relies on fewer labour inputs, applied further in the past; B relies on larger labour inputs applied more recently. At low rates of profit, technique A is cheaper and therefore more viable. But as the rate of profit rises, its effect compounds over the time delay on A’s distant inputs, so A becomes more and more expensive and technique B takes over; at higher rates still, the ranking can reverse yet again, so that A returns – hence ‘reswitching’.*
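
The footnoted numerical example is easy to reproduce. Using the figures often attributed to Samuelson’s 1966 ‘A Summing Up’ – technique A applies 7 units of labour two periods before the output appears; technique B applies 2 units three periods before plus 6 units one period before – the costs compound as follows:

```python
# Cost of each technique when labour applied t periods ago compounds at
# the profit rate r (Samuelson's 1966 numbers, as in the linked example).
def cost_A(r):
    return 7 * (1 + r) ** 2                 # 7 units, 2 periods back

def cost_B(r):
    return 2 * (1 + r) ** 3 + 6 * (1 + r)   # 2 units 3 back, 6 units 1 back

for r in (0.0, 0.25, 0.5, 0.75, 1.0, 1.25):
    a, b = cost_A(r), cost_B(r)
    verdict = "A" if a < b else "B" if b < a else "switch point"
    print(f"r = {r:4.2f}:  A = {a:6.2f}  B = {b:6.2f}  cheaper: {verdict}")
# A is cheaper below r = 0.5, B between 0.5 and 1.0, then A again above 1.0:
# the economy 'reswitches' back to the first technique at high profit rates.
```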

The point of this approach is to show a few things:

(1) The value of capital varies depending on the rate of profit, as the rate of profit is a variable in the equation for measuring capital. Since the measured amount of capital depends on the rate of profit, profit cannot simply be said to be the ‘Marginal Product of Capital.’

(2) There is no easily discernible relationship between profitability and the amount of capital employed. Generally, neoclassical economics teaches that output is simply a concave but increasing function of the amount of capital employed, much like any other demand/utility curve. Capital reswitching destroys this idea.

(3) We cannot calculate prices without first knowing the distribution between wages and profits. The measured price of inputs depends on income distribution, not the other way round.

Many might be struck by the sheer level of abstraction in Sraffa’s approach. It’s worth noting that in Production of Commodities he adds many more layers of realism beyond those that Keen explores. But, as I said before, the basic point was to take on neoclassicism with its own logic, rather than to present an alternative. By the end of the debate, Samuelson and Solow had both conceded that the criticisms were valid, and that their models were wrong or incomplete.

Discussions of the CCC since then have tended to follow the standard neoclassical tactic of asserting that the objections have been incorporated. But this stuff was 50 years ago. Why do undergraduate and postgraduate programmes still teach concepts like the MPC? Or the Solow-Swan growth model, which depends on an aggregated capital stock K, subject to diminishing returns? As Robert Vienneau says, if neoclassicism were really revising itself to the extent that’s needed, we’d expect some of the modifications to filter down over time. But the fact is that they haven’t.

In fact, what seems to have happened is that economists have done a fairly typical dance – weaving between ‘that is unimportant’ and ‘that has been incorporated’:

Aggregative models were deployed for the purposes of teaching and policymaking, while the Arrow-Debreu model became the retreat of neoclassical authors when questioned about the logical consistency of their models. In this response, a harsh tradeoff between logical consistency and relevance was cultivated in the very core of mainstream economics.

This sort of evasiveness is common – there will always be some recently written paper that attempts to shoehorn any objection one cares to think of into the neoclassical paradigm. But these objections are incorporated one at a time, rarely find their way into the core teachings, and never involve questioning the foundations of neoclassicism on any substantive level. The reality is that, when the problems are as deep as the ones highlighted in the CCC, we need a meaningful overhaul rather than ad-hoc modifications.

*For those interested, the linked Wikipedia article has a fairly simple numerical example where the most effective method goes from A to B and back to A again.
