On Production, Capital and Aggregation

I have never thought of the macroeconomic production function as a rigorously justifiable concept. … It is either an illuminating parable, or else a mere device for handling data, to be used so long as it gives good empirical results, and to be abandoned as soon as it doesn’t, or as soon as something else better comes along.

– Robert Solow

When speaking about production and output, economists generally refer to ‘factors of production’: things that are put into the production process to produce something else. Most of the time, they use the two factors ‘capital’ and ‘labour.’ These are a firm’s presumed inputs in theories of the firm and supply curves, where the firm takes their values as inputs and, after some mathematical manipulation, produces a certain amount of output. They are also used in a macroeconomic model known as a ‘production function,’ which does something similar for the entire economy. There are various production functions that use different maths and include other variables such as technology or productivity – the most famous is the Cobb-Douglas function.
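For reference, the textbook Cobb-Douglas form (stated generically here – these are the standard symbols, not taken from any particular study) is:

```latex
Y = A\,K^{\alpha} L^{\beta}
```

where $Y$ is output, $K$ capital, $L$ labour, $A$ a technology or productivity parameter, and $\alpha$, $\beta$ the output elasticities, with $\alpha + \beta = 1$ often imposed for constant returns to scale.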

The problem with this way of modelling production is that it has long been known to be logically questionable. Anyone who has taken a science class past a basic level will know that checking your units – that they are consistent and balance on both sides of the equation – is emphasised repeatedly. But this discipline seems to be thrown out of the window in the basic analysis of production functions and firm behaviour.
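To sketch what a unit check looks like here (with illustrative units of my own choosing, not anything from a particular dataset): suppose output $Y$ is measured in widgets per year, capital $K$ in pounds and labour $L$ in hours. Rearranging the Cobb-Douglas function to isolate the technology parameter $A$ gives:

```latex
A = \frac{Y}{K^{\alpha} L^{\beta}}
\quad\Longrightarrow\quad
[A] = \text{widgets} \cdot \text{years}^{-1} \cdot \text{pounds}^{-\alpha} \cdot \text{hours}^{-\beta}
```

So the ‘units’ of total factor productivity depend on the estimated exponents themselves – a strange object for something often reported as a measure of technology.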

The analysis of production takes two physical inputs – most likely capital and labour. Generally, the inputs are assumed to be clay-like: available in infinitely small quantities. The inputs are combined (as far as I can see, this means flung together inside a black box) to produce a physical output of some other good, which is of course also infinitely divisible and clay-like. Labour is measured in hours of work; capital in terms of money. This is where the problems start.

The Cambridge Capital Controversies revealed many problems with using a monetary value to measure capital equipment, certainly within a theory of distribution. However, there is another, far simpler and perhaps more fundamental objection: by definition, we are supposed to be measuring physical units of input. This means it is simply not coherent to measure in terms of cost. If we were to opt for measuring in terms of cost as a rule, then what would be the justification for not lumping labour in with capital, and just having a single input, perhaps labelled ‘stuff’? Whatever answer you give to that question is also a justification for not measuring capital by its cost.

If we decide to use physical inputs, it seems there are ways around the problem. Instead of labelling one input ‘capital,’ we could consider a certain type of capital good – say, shovels with which to equip some ditch-digging labourers. It is fair to assume these are roughly the same and so we can add them up. However, this method lays bare problems that the blanket term ‘capital’ previously obscured.

First, we clearly need more than just people and shovels to dig a ditch. We might need wheelbarrows, land, a skip, sustenance for the labourers, transport for labourers, perhaps a supervisor – in fact, there is potentially an incredibly large number of factors of production, something I’ve noted before. It becomes computationally difficult or even impossible to include everything that contributes to production, and some factors will simply be immeasurable.

Second, it is clear that these objects are not perfectly divisible. In the case of ‘capital’ and ‘labour,’ we could divide both money and labour time into infinitely small units. But once we allow for production being ‘lumpy,’ production functions are no longer smooth and differentiable, and as such marginal productivities simply do not make sense.* Furthermore, this undermines the idea of an elasticity of substitution – the rate at which you can substitute one input for the other – since taking away a ‘lump’ will simply make output fall to zero (this is also something I’ve touched on before).
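One way to make this concrete (an illustrative example of my own, not a claim about any actual industry): suppose ditch-digging requires exactly one shovel per labourer – a fixed-proportions (Leontief) technology:

```latex
Y = \min(L, S)
```

where $L$ is labourers and $S$ shovels. The isoquants here are right angles rather than smooth curves: adding a labourer without a shovel adds nothing, the elasticity of substitution is zero, and at $L = S$ the function has a kink, so ‘the’ marginal product of either input is undefined. Add the further condition that shovels come only in whole units, and output becomes a step function whose derivative is zero between the lumps and undefined at them.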

Economists will likely have various rebuttals to this style of thinking. The most common will be that Cobb-Douglas and various theories of the firm make good, testable predictions. But their predictions actually leave a lot to be desired – firms do not behave as economists predict, and the Cobb-Douglas production function has poor empirical results (economists generally point to the initial estimations made by the creators of the model, but things have changed since then).

The other defence will be similar but not quite the same: it is just a simplification, used to illuminate a particular aspect of a problem. Well, the fact is that making counterfactual assumptions about the nature of a system does not illuminate anything; it simply tells us about a different universe. Furthermore, these simplifications are not even internally consistent. Even within the logic of ‘labour’ and ‘capital,’ it has been shown repeatedly that the conditions under which either can be aggregated are incredibly stringent. Similar arguments apply to other aggregate parameters used by economists, such as aggregate measures of technology or productivity.

Simple macroeconomic production functions smack of trying to turn macro into ‘applied microeconomics.’ But it has repeatedly been shown that aggregation problems will always be present, and that it is better to study emergent phenomena than to extrapolate microeconomic parameters until they have no real meaning. At the other end, microeconomic production theory is just an attempt to reduce everything to ‘rigorously’ derived, smoothly differentiable intersecting lines, rather than simply accepting empirical realities about firms and micro behaviour, and opening up the firm to see what happens inside instead of treating it as a black box.

Overall, it seems the whole idea of production functions and factors of production as anything other than vague, qualitative concepts is something of a dead end.

*I similarly expect that, once we allow that preferences may be lumpy, utility functions are no longer smooth. But lumpy preferences are a topic for another time.


  1. #1 by tuigen on November 3, 2012 - 10:10 am

    Robert Solow’s comment on the macroeconomic production function – quoted at the header of your essay – as “a mere device for handling data, to be used so long as it gives good empirical results, and to be abandoned as soon as it doesn’t, or as soon as something else better comes along” sounds like Sir Karl Popper’s attitude to every theory, no matter what the topic studied:

    “So long as theory withstands detailed and severe tests and is not superseded by another theory in the course of scientific progress, we may say that it has ‘proved its mettle’ or that it is corroborated.”

    – from his book The Logic of Scientific Discovery, page 10 of the Routledge Classics edition.

    • #2 by Unlearningecon on November 4, 2012 - 3:32 pm

      Good point. I own TLSD but have not yet read it. Must get around to it.

  2. #3 by SR819 on November 3, 2012 - 5:27 pm

    That’s something I’ve never considered before when critiquing Economics, but now that you mention it it’s quite a glaring error. Especially if for the Cobb Douglas Production Function the exponents on Capital and Labour are different. How are you going to conduct any dimensional analysis then? Moreover, if you rearrange the equation and make A the subject of the formula (TFP), you end up with some strange units which highlights the incoherency of TFP.

    • #4 by Unlearningecon on November 4, 2012 - 3:31 pm

      Yeah so true. I’d have no idea what to call the units for TFP.

  3. #5 by Mick Brown on November 3, 2012 - 9:22 pm

    I’m baffled by the two comments so far (not because they are wrong, I just can’t understand what they are saying). I think they may be bricks in the wall of macroeconomics.
    I thought that the production function means that the cost of a good produced depends on the added-up costs of the inputs that go into producing it.
    From what I have observed (sorry, no data) the immediate costs are easy to calculate, but gradually calculations develop into guesswork. That is why the people employed in small firms to give prices are called ‘estimators’ rather than ‘calculators’. The most successful rely on experience, intelligence and a bit of luck.
    When it gets to whole countries using the same principle what happens, does a government employ a percentage of statisticians who can calculate (as a form of division of labour) plus a number of people who are good at guessing? If it employs estimators, as in the small firms, where do they get their experience from? Or what?

    • #6 by Unlearningecon on November 4, 2012 - 3:24 pm

      Well that’s a possible interpretation, but two things:

      (1) I have never had it explained to me as such. It has always been about the ‘amount’ of labour and capital you put in to get a certain ‘amount’ of an output.

      (2) As I said, if we are going to measure in terms of cost and then revenue, why not lump labour in with capital?

      I can’t say for certain about your answer, but my guess would be yes, central banks and the like employ statisticians.

  4. #7 by Mick Brown on November 5, 2012 - 9:33 am

    Thanks for putting me straight on this.
    I was mistakenly trying to relate an academic theory to my own practical experience of having been asked to devise a quality control system in a company I worked for that was losing money. As well as improving quality, we also found out that the main cause of losses was that sales management were not charging customers for an important input (digital services in print).
    The result was that they hated my system, bought an expensive MIS package – from what looked like gangsters to me – which was too complicated for them to understand and I was first out of the door when the company went into administration.

  5. #8 by Gavin Jackson (@GavinJackson7) on November 5, 2012 - 1:42 pm

    You raise some good points. This reminds me of one of my favourite papers; Growth Theory Through the Lens of Development Economics by Abhijit Bannerjee and Esther Duflo. They show in depth how the traditional neoclassical assumption of an aggregate production function fail

    It’s a seminal text written in 2004 when they were at MIT as professors of economics and has been read by everyone in economics who thinks in depth about macroeconomic growth so I’m not sure this has much relevance as a critique of economics.

    • #9 by Gavin Jackson (@GavinJackson7) on November 5, 2012 - 2:00 pm

      In the third sentence it should read ‘assumptions’ and there’s a missing full stop. Apologies.

    • #10 by Unlearningecon on November 5, 2012 - 9:17 pm

      Thanks for the hat tip, but why don’t you think it’s relevant? Seems so to me.

      • #11 by Gavin Jackson (@GavinJackson7) on November 6, 2012 - 1:07 pm

        Because there’s a difference between critiquing economic theory and critiquing economics. Economic theory can be continually critiqued and reevaluated while ‘economics’ as a discipline remains intact. Debating and critiquing economic theory is what economics is about.

        (Also just as a note, in more advanced models labour and capital are vectors of inputs. So you can have anything from 1 to n types of capital and labour. In some models of long run growth they explicitly model the firms whose job it is to produce intermediate inputs.)

      • #12 by Unlearningecon on November 7, 2012 - 12:57 pm

        Well, yes. This site could be better named ‘unlearning marginalism’ (though I do have other objections), but that wouldn’t be quite as catchy!

        Yeah, Varian uses that method in Microeconomic Analysis (though he then goes on to use Cobb-Douglas and other production functions with capital k and a marginal productivity). But as I say some ‘inputs’ are difficult or impossible to measure, even though they have a clear effect on production – management, knowledge. I guess you can take those as a given but you have to explore them properly somewhere.