Yes, The Cambridge Capital Controversies Matter

I rarely (never) post based solely on a quick thought or quote, but this just struck me as too good not to highlight. It’s from a book called ‘Capital as Power’ by Jonathan Nitzan and Shimshon Bichler, which challenges both the neoclassical and Marxian conceptions of capital, and is freely available online. The passage in question pertains to the way neoclassical economics has dealt with the problems highlighted during the well-documented Cambridge Capital Controversies:

The first and most common solution has been to gloss the problem over – or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists – teachers and students – blissfully unaware of the whole debacle.

A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.

The second point is something I independently noticed: appealing to practicality when it suits the modeller, but insisting it doesn’t matter elsewhere. If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational, etc. etc. If we do that, pretty soon the assumptions all fall away and not much is left.
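For anyone who hasn’t seen reswitching spelled out, here is a minimal numerical sketch in the spirit of Samuelson’s well-known example (the numbers are purely illustrative, not taken from the book or the debates): two techniques produce the same output, and one of them is the cheaper choice at both low and high interest rates, with the other cheaper in between.

```python
# Illustrative reswitching sketch (toy numbers, not from the post or the book).
# Two techniques each produce one unit of the same output using dated labour:
#   A: 7 units of labour applied 2 periods before the output appears
#   B: 2 units of labour 3 periods before, plus 6 units 1 period before
# With the wage normalised to 1, the cost of a technique at interest rate r is
# the compounded value of its labour inputs.

def cost_a(r):
    return 7 * (1 + r) ** 2

def cost_b(r):
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

for r in [0.0, 0.25, 0.5, 0.75, 1.0, 1.25]:
    a, b = cost_a(r), cost_b(r)
    cheaper = "A" if a < b else ("B" if b < a else "tie")
    print(f"r = {r:.2f}: cost A = {a:6.2f}, cost B = {b:6.2f} -> {cheaper}")

# Technique A is cheapest at low AND high interest rates, with B cheapest in
# between (the switch points are at r = 0.5 and r = 1.0). The same technique
# 'comes back' as r rises, so there is no monotonic relationship between the
# interest rate and any scalar measure of capital intensity.
```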

However, it’s the authors’ third point that really hits home:

The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. The latter models try to describe – conceptually, that is – every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.

General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention. Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.

In essence, neoclassical economics dealt with its inability to model capital by…eschewing any analysis of capital. However, the theoretical importance of capital for understanding capitalism (duh) means that this has turned neoclassical ‘theory’ into a highly inadequate tool for doing what theory is supposed to do, which is to further our understanding.

Apparently, if you keep evading logical, methodological and empirical problems, it catches up with you! Who knew?


  1. #1 by Tokyo Torquemada on February 4, 2014 - 12:30 am

    I think this is dangerous (but I do agree with their conclusion). A social construction of capital is interesting, and probably true. But it does not address the central problem of the non-commensurability of ‘capital’, whether as a matter of vocabulary, or as a matter of aggregation. The reality is that no one – or very few people – really know what they mean when they mouth the word ‘capital’.

    I made some remarks in this connection, arguing that it invalidates the TFP approach to Japan’s slump, at a recent seminar in Oxford, but, in the main, I fear that the ball went straight through to the ’keeper except amongst those who were parti pris.

    • #2 by Unlearningecon on February 4, 2014 - 12:33 am

      I should add that the authors go on to build up their own theory of capital. However, I am not far enough through the book to critically evaluate it.

  2. #3 by Tokyo Torquemada on February 4, 2014 - 12:53 am

    I look forward to the further development of their perspective… I do hope that they will understand the social position of the capitalist, most neatly encapsulated in Joan Robinson’s remark that: “It is precisely the pursuit of profit which destroys the prestige of the business man. While wealth can buy all forms of respect, it never finds them freely given.” (EP, p. 25). Social power has multiple dimensions…

  3. #4 by rabidaltruism on February 4, 2014 - 1:55 am

    My knowledge of the CCC is somewhat paper thin, but didn’t the Cambridge victory basically amount to constructing a counterexample, illustrating that it is mathematically possible for reswitching to occur? If I’m not misunderstanding, I would think that that is worrisome and worth exploration, but not devastating. It would seem, naturally, to motivate a second level of inquiry, looking for theoretical results about how commonly such systems occur in all possible systems and concerning what additional axioms outlaw those systems, as well as of course complementary empirical work to investigate where real systems seem to lie. Whether that kind of research program is important for mainstream macroeconomists to investigate seems to be a matter of priorities — if they’re pretty convinced reswitching isn’t empirically relevant, then it would seem a waste of resources for them to spend a lot of effort on it, but maybe it’s also good to have a small band of heterodox folks who believe reswitching matters trying to establish that.

    I do agree that there’s a little bit of hypocrisy in the appeal to realism, but even that I think is somewhat overblown; I am an advocate for various behavioral models as candidates to replace mainstream assumptions like rational expectations, expected utility, but I do not think it’s surprising that they haven’t become dominant in the literature yet. The gap between, for example, “prospect theory seems to better describe real preferences in single decision-maker settings in the lab” and “we have a working, new version of the CAPM and competitive markets that incorporates prospect preferences” is unfortunately large, and requires much work. But that work’s going on, in fits and starts. I think there will come a time when various behavioral models have been developed in a way that can be tractably fitted to addressing mainstream theoretical issues, but I don’t think we’re there yet with most work. Even foundational issues like non-existence of equilibria become problematic quite quickly with the present state of knowledge, if we just haphazardly throw behavioral models into their natural places in preexisting machinery.

    I also find the last point rather lazy. The social and individual levels of description both have their advantages and disadvantages in working economic science; to proclaim that we should just directly study one and deny the advantages of studying them simultaneously comes off as more rhetoric than science.

    • #5 by Unlearningecon on February 4, 2014 - 3:39 pm

      You are perhaps right about reswitching, but it’s worth noting that it is a logical problem: within a given model that includes capital, we cannot say a unique equilibrium exists. In any case, in my opinion reswitching is not really the main takeaway of the capital debates. The main takeaway is that there is no measure of capital apart from price, and that the price of capital cannot be separated from the profit it generates, rendering marginal productivity theory hopelessly circular.

      I understand that even when it’s clear there is work to be done, formulating new theories takes a lot of time – I mean, despite the fact that I hate production functions, I’d be pretty unable to articulate an alternative. Having said that, I think there are alternatives, such as Stock-Flow Consistent models, that economists often seem to ignore.

      And your final point is also well taken. The authors could be accused of overstating their case – it’s not that a reductionist approach is never useful, but that it is not useful for the task of understanding capitalism as a whole. Their approach is too sweeping and seems to imply the former.

      • #6 by rabidaltruism on February 5, 2014 - 4:00 pm

        Fair point on the aggregation problem. Disaggregation seems like an OK reaction to that, to me; I talk a bit at the end of this reply about why I think a combination of aggregate-disaggregate modeling and the consistency conditions between them is ideal.

        Regarding reswitching and non-uniqueness: you wouldn’t have much of economic theory left if you jettisoned everything with a ‘logical problem’ as common as non-uniqueness of equilibrium in some subset of models, haha. Game theory’s struggled with that one since its inception, the root difficulty (in my view) being that dynamic (as opposed to equilibrium/static) explanations are very difficult to develop in a setting where individual actors learn. Game theory’s particular program of equilibrium refinement has had mixed success, and was for the longest time pretty much devoid of empirics, but I think it’s the ideal sort of reaction to an example of non-uniqueness — starting up a research program to investigate whether, when, how, and why the non-uniqueness appears or doesn’t, and to see how relevant it is to the real world.

        I don’t really know much of anything about stock-flow consistent models; I’ve heard the term before, I think as one of the myriad theories plugged as having predicted the ’09 recession, but given my ignorance I’ll just keep quiet on them! All I can really speak to with any authority is the behavioral literature, since that’s what I’m most invested in, and there at least I don’t think the rate of model-adoption given model success has been unreasonable.

        I’m as skeptical of your restatement of the final point as I was on my read of the original, I think; I think the most common case is that studying a system at multiple levels of granularity and (successfully) enforcing accurate consistency conditions between those levels helps to clarify understanding of every level of description. The only reasons I can think of not to do this are a kind of belief in economic “strong emergence” or a belief that the costs of modeling the connections between lower and higher levels of description outweigh the gains from doing so. Strong emergence I find philosophically a bit fascinating but doubt has much relevance to any of the real-world systems we generally study, in econ or anywhere else, conversations about qualia and ‘mind’ being the one possible exception. The latter contention—that individual modeling is just more hassle than it’s worth, given how orderly system-level behavior is—seems more plausible, but I think some combination of: A) my thinking most economic models I’ve encountered that build from individuals to systems are not worthless, B) some form of the Lucas critique, and C) the desire in econ to be able to connect measures of individual welfare and freedom of choice to system-level outcomes … together leave me leaning strongly towards a combination of system and individual-level modeling in econ.

        That’s not to say that I think the present mix in macro is ideal; there seems to be an almost obsessive focus on the Lucas critique, to the point that all reduced-form models are shrugged off as ‘non-rigorous,’ and I think that’s a pretty bold mistake. I lean towards something more like Simon Wren-Lewis’s position (here, for example: http://mainlymacro.blogspot.com/search?q=lucas+critique ) in thinking that a description of both layers of granularity (individual / system) and their interconnections is ideal, and should generally be the aim, but there’s nothing wrong with intermediate projects that chase after reduced forms, and for the foreseeable future reduced forms may be all we’re capable of in many areas.

        Sorry for the length of reply—had a lot of words to use, I guess.

      • #7 by Unlearningecon on February 8, 2014 - 6:02 pm

        To me your comment reads as a prime example of how economists have – often eloquently and somewhat reasonably – completely boxed themselves off with a particular type of reductionist, equilibrium-centred methodology. Sometimes I sit in classes and watch an economist set up a problem that just screams for the use of (fairly basic) differential equations or some such, then instead opt to derive the ‘optimal time path’ of whatever system we are talking about, working from the perspective of some maximising individual agent. It confuses me that every problem apparently has to be answered in this way.

        Clearly, neoclassical economics tends to make some restrictive assumptions to ensure there are single/stable equilibria. But to be honest, are potential multiple equilibria really a problem when the ‘equilibrium’ of a system is largely irrelevant (note I am speaking mostly about macro/finance, not making a complete blanket statement)? Similarly, while the Lucas Critique is right, I see it as an ongoing problem concerning the evolving relationship between models, policy and the economy – and I don’t believe we can come close to ‘solving’ it with microfoundations. Finally, we see a similar problem with the concern for individual welfare: you cannot measure it cardinally, and so you cannot aggregate it, so the ‘social welfare’ measures in economic models are frankly rather meaningless.

        Essentially, it would be nice to have models that included all the decisions of agents – for precisely the reasons you list – in the same way it would be nice to have a unified theory of physics. But we run into the problem that this simply doesn’t work, so we have to opt for other methodologies, at least in places.

      • #8 by rabidaltruism on February 9, 2014 - 3:34 pm

        I’m actually perfectly alright with macroeconomic modeling using ODEs/PDEs/difference maps/chaotic attractors at a higher level than that of the individual agent, and I’m OK with models that try to focus on periodic, chaotic, or transient behavior rather than equilibrium behavior, too. I just think there needs to be a clear, at least reasonably strong argument made that the model being used A) accounts for the relevant body of macroeconomic evidence well and with novel insight; B) is plausibly defensible against Lucas-style critiques; and C) that there’s a need in the area in question for deviating from explicitly representing the individual-system connections, and/or from assuming equilibrium or steady-state behavior. My presenting the case for equilibrium/disaggregated modeling is because I think they together form the natural, comprehensible default, but I don’t think that should mean alternative approaches are outlawed, and in fact I’d love to see, for example, a coherent, accurate out-of-equilibrium approach developed, with or without modeling of individual agents. But for all the work I’ve read of, e.g., the Santa Fe Institute (on out-of-equilibrium modeling, zero-rationality ‘dumb’ agents, etc.), I don’t think I’ve seen a compelling case yet for any of those models as serious alternatives to mainstream work. Not yet, anyway.

        I agree that multiplicity of equilibria is not a problem if you can write down a convincing description of system dynamics, whether in an ODE, PDE, DE, or what-have-you. I’m not sure I understand why you’re skeptical about microfoundations addressing the Lucas critique; so long as the microfoundations are plausible, they’d seem to me to do just that by definition. Admittedly accurate microfoundations are a hard thing and it has become increasingly easy to point to examples of the presently dominant microfoundations’ failures, but we have some very good leads on good alternative models. I think it’s just a matter of time before we manage to grind them into the standard foundations, and that standard models’ resistance to Lucas-style critiques will generally be better off for it.

        I think cardinality/ordinality of utility should be treated somewhat separately from interpersonal comparability of utility, and that it depends on the particular preference foundation you’re addressing. In particular, game theory’s von Neumann-Morgenstern utilities are ‘cardinal’ in the sense of allowing for well-defined consideration of the ratios of differences in utilities so long as we stick with just one person’s utility function, but there’s still nothing (as far as I know, anyway) in the VM framework to universally justify some particular method of making interpersonal comparisons of utilities (see the small numerical sketch at the end of this comment). I imagine but am not sure that something similar holds for Savage or Ramsey-style subjective expected utility preferences, since they’re so closely related in formulation to VM preferences.

        Anyway, I think this level of modeling is important for evaluations of welfare, independent of having a non-controversial method of making interpersonal comparisons of utility, because when we make (controversial) statements at a macroeconomic level about what is good for ‘aggregate welfare,’ a key consideration in evaluating whether welfare has improved for a single person is whether they chose the outcome in question for themselves, or whether they would have, if given the chance. Without modeling individual behavior we can’t really make statements about whether our model suggests that ‘most people are getting what they want,’ or anything like it, and so we lose the ability to invoke a kind of commonplace argument in reasoning about aggregate welfare. I don’t want to overstate the importance of this; obviously just because a person chooses something for themselves does not mean it is welfare-improving (e.g. drug addiction comes to mind, certain Nobel-prize winners’ arguments that addiction is welfare-maximizing notwithstanding…), and arguments that utility is literally equivalent to subjective/objective well-being are quite absurd. But I don’t think its importance should be dismissed, either; if we are to reason morally in a sensible way, it is important that we understand whether and when people are experiencing outcomes consistent with what they’d choose for themselves, under whatever restrictions are relevant.
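        To make the ‘cardinality’ point above concrete, here is a small numerical sketch (toy numbers, purely illustrative): a positive affine rescaling of one person’s vNM utilities leaves every expected-utility ranking and every ratio of utility differences unchanged, which is the sense in which they are cardinal for that person, while nothing in the framework licenses comparing utility levels across people.

```python
# Toy illustration of vNM 'cardinality' (hypothetical numbers).
# A positive affine rescaling a*u + b preserves expected-utility rankings and
# ratios of utility differences for ONE person; it says nothing about how to
# compare utility levels across different people.

outcomes = {"x": 0.0, "y": 6.0, "z": 10.0}   # one person's vNM utilities

def affine(u, a=3.0, b=7.0):
    return a * u + b

def expected_utility(lottery, util):
    # lottery maps outcomes to probabilities
    return sum(p * util[o] for o, p in lottery.items())

lottery_1 = {"x": 0.5, "z": 0.5}   # 50/50 gamble over x and z
lottery_2 = {"y": 1.0}             # y for certain

rescaled = {o: affine(u) for o, u in outcomes.items()}

# The ranking of the two lotteries is the same before and after rescaling:
print(expected_utility(lottery_1, outcomes) < expected_utility(lottery_2, outcomes))  # True
print(expected_utility(lottery_1, rescaled) < expected_utility(lottery_2, rescaled))  # True

# Ratios of utility differences are also invariant:
print((outcomes["z"] - outcomes["y"]) / (outcomes["y"] - outcomes["x"]))   # 0.666...
print((rescaled["z"] - rescaled["y"]) / (rescaled["y"] - rescaled["x"]))   # 0.666...
```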

      • #9 by Unlearningecon on February 14, 2014 - 4:43 pm

        Just to clarify a couple of points I made:

        (1) I don’t think that microfoundations as they currently exist ‘solve’ the LC, for two reasons. First, there are many reasons to believe that preferences, technology and all the other “deep parameters” of the economy (as Lucas put it) are perfectly liable to change with policy. To take an example, consider government investment in R & D and the technological improvement this creates, plus the new products created using this new technology, which effectively create new preferences. Second, I’m not convinced that economists’ current models have truly uncovered the characteristics of individual/firm behaviour – I mean, we all know utility is at best a formalism that works in some cases but shouldn’t be interpreted too literally. As Roman P. has pointed out, humans are so complicated that economists just end up falling back on weak tautologies. Even if there are some “deep parameters” of human behaviour, we haven’t yet discovered them.

        (2) With welfare, I’m still not convinced. In general it’s perfectly reasonable for people to debate the efficacy of certain outcomes using various metrics: rights, equality, efficiency, the exploitation of the proletariat, etc. etc. Perhaps economists’ strong utilitarian framework can contribute to this, but I don’t think it’s necessary for us to be able to judge policy/outcomes. I can imagine a model without any explicit ‘welfare’ that still has obvious human implications, such as (say) a Minsky model that generates business cycles.

      • #10 by rabidaltruism on February 9, 2014 - 3:38 pm

        p.s. to “that there’s a need in the area in question for deviating from explicitly representing the individual-system connections, and/or from assuming equilibrium or steady-state behavior.” in my ‘expected argument C,’ I’d add that there should be considerable discussion of the robustness of the given model to detailed assumptions about its dynamic model, and to the qualitative plausibility of / evidence for its model of dynamics. These are I think very difficult things to argue effectively for in economics, particularly in macro, where direct, manipulable and well-controlled evidence of system dynamics is especially lacking, but if we want to see out-of-equilibrium analyses adopted, I don’t see how we can avoid expecting cogent arguments of this kind.

      • #11 by Roman P. on February 14, 2014 - 11:34 am

        rabidaltruism,

        Sorry for butting into your discussion. I think that the quest for finding the ‘best’ microfoundations is ultimately a futile exercise. It’s like trying to build thermodynamics from the basic principles of molecules’ motion, only instead of molecules with well-defined properties that obey a very finite number of precise laws we have humans whose properties and behavior are anything but well-defined and well-behaved. There are thousands of factors affecting even the simplest decisions, and most of them boil down to irrational impulse. Even other economic agents, like firms, are composed of people, and so too are prone to irrational impulse and just plain idiocy. Trying to build an economic model that will explain all economic decisions at once leads to tautological theories – that people chose what they felt was best to choose at the time they chose it – and so on.

      • #12 by rabidaltruism on February 19, 2014 - 10:40 pm

        UE:

        Endogeneity of preferences is definitely an interesting problem from an agent-modeling perspective. I’m not sure I believe it’s as serious a Lucas-style threat as you’re suggesting, UE—while I certainly don’t believe we simply have a set of unchanging preferences independent of the world, neither do I think our preferences change or reformulate with every macro policy change. I don’t have much experience with the endogenous preferences literature, though, and I don’t think it’s particularly large or successful just yet, so this is admittedly mostly a matter of guesswork.

        Regarding welfare: I don’t mean to say that we can’t reason about welfare without a model of the preferences and choice-making of individual agents, just that we lose a useful formal tool for doing so, and one that has no real analogue in the physical sciences, where we don’t really care whether a particle is ‘getting what it wants’ or not. I certainly agree that it is possible to reasonably argue about the welfare implications of policy without modeling the choice-making of agents.

        Roman:

        No worries! Not like it was a private discussion.

        I’m ambivalent about the position you’ve taken; on the one hand, I think it’s a reasonable belief that the thicket of possible human learning mechanisms could make a non-equilibrium, micro-founded theory of learning deeply problematic. I worry that studies like that of, for example, Stahl ( https://webspace.utexas.edu/stahl/www/experimental/gx4lrn.pdf ), on ‘rule learning’ in games may generate highly plausible but badly intractable models of player behavior. On the other hand, human choice-making behavior is also often surprisingly coherent and even simple; the ‘(generalized) Matching Law’ ( http://en.wikipedia.org/wiki/Matching_law ) that emerges in low-information, low-technology, low-cognitive engagement, single decision-maker (and non-human animal) settings in psychology is a particularly nice example of just how orderly potentially quite complicated behavior can be, as is the finding in Camerer’s (http://www.amazon.com/Behavioral-Game-Theory-Experiments-Interaction/dp/0691090394) Ch. 3 work on mixed-strategy equilibrium that MSE actually do a surprisingly good job of explaining average, observed frequencies in experimental data.

        I guess I think that, if we are forced to deal with the full flexibility of something like Stahl’s rule learning in order to say anything of meaning, then we may be in trouble, unless we become a lot better at working with the ‘computational, agent-based’ models that have become one of the hallmarks of complex systems economics. However, I think the successes I mentioned above (and others, e.g. prospect and cumulative prospect theory) suggest that we can go quite a bit farther than we have thus far without worrying about all possible out-of-equilibrium learning rules.

        I also think this reinforces my original argument for equilibrium reasoning as the natural default in economics: a major motivation for doing so is that it is most often much easier to identify what formal conditions might describe a situation in equilibrium than it is to identify what kinds of transient behavior we’re likely to observe on the path to that equilibrium. This is true whether we model individual agents or not, but given the concern about the possibly impenetrable thicket of learning rules, I think there is substantial merit in focusing on what equilibrium models can tell us before investing too much energy in exploring the hinterlands of learning rules.

      • #13 by Roman P. on February 21, 2014 - 1:24 pm

        rabidaltruism,

        Thanks for your reply. I think you bring up some valid points, but you’re still too optimistic about microfounded models of human behavior. It is true that there are regularities in the process of making choices, but even if we compiled all possible results of behavioral economics, we could still only reason about a ‘spherical human in a vacuum’. The problem is that humans aren’t spherical, nor do they exist in a vacuum. In economic life even the simplest economic facts are very complicated.
        For example: you buy a bag of groceries in a store. Why did you buy what you bought? If we had a very thorough understanding of human behavior, we could reason that on average you choose what you chose previously, you choose this and not that because you like diversity, and so on. But that’s on average. What you actually chose depended on a very large number of factors that are impossible to build into any sane model. You were hungry and looked at a colorful Mars bar, your synapses fired and you just had to buy it. You watched an advert the day before. You have an allergy to other brands. And so on.
        The total number of variables that goes into that decision is really ridiculous. A model has to be a simplified copy of the original, but its likeness to the original must be good for it to be useful. I feel that those requirements are in conflict when we are talking about economics.

      • #14 by rabidaltruism on March 5, 2014 - 3:10 pm

        Sorry for the long absence! Was trying to decide whether to reply again. (Obviously I decided to do so!)

        Anyway, I think you’re right about the sheer number of possible learning rules and potential bits of data used in human decision-making, Roman. That’s exactly the concern I had in talking about the ‘rule-learning thicket’ with regards to Stahl’s model of rule-learning. If we try to consider all possible ways in which human beings make out-of-equilibrium decisions, I do think we will run into lots of contexts where the reality is frighteningly messy, and very difficult to approximate with a good model. I *don’t* think even that would be impossible, but it’d be a heck of a challenge. On the other hand, I think your reply really reinforces my focus on equilibrium behavior; the idea is that we dramatically reduce the relevant varieties of data and decision-making rules if we only look for situations where choice-making is in some sense self-consistent. There are drawbacks to this approach, of course—we end up not really sure how we got into equilibrium, and have to return to the tough questions in some fashion if we want to argue for one equilibrium over another—but on the whole I think it can give you coherent, useful theory that deals with a dramatic but not implausible simplification of the very complex system you’re trying to model.

        I think there’s also another reason not to worry as much as you are about the variety of possible data in human decision-making: we can make theories based on notions of utility (as in Savage/von Neumann-Morgenstern/Ramsey-style decision theories) or value (as in cumulative/original prospect theory) that are largely agnostic about the details of our choices. That is, we don’t really particularly care what the determinants of every single individual person’s utility or value functions are; we just care that they behave as if they have *some* such reliable approximation to a utility or value function. This is a much less needy hypothesis; it is consistent with all kinds of decision-making rules and usages of data and doesn’t require that we know the precise form of any or every given person’s preference functions in order to say something meaningful.

        Also, as an aside! I argued to UnlearningEcon above that the modelling of individual agents was an additional advantage in economics because it lets us directly model a major component of human welfare, i.e. ‘whether we get what we want.’ Later, I found myself reading the opening of David Romer’s Advanced Macroeconomics, and wandered onto this quote to similar effect:

        “Relaxing the Solow model’s assumption of a constant saving rate has three advantages… Second, it allows us to consider welfare issues. A model that directly specifies relations among aggregate variables does not provide a way to judge whether some outcomes are better or worse than others: without individuals in the model, we cannot say whether different outcomes make individuals better or worse off. The infinite-horizon and finite-horizon models are built up from the behavior of individuals, and can therefore be used to discuss welfare issues.” (p. 6)

        Nothing new there I hadn’t argued already above, and I suspect a standard advanced macro textbook doesn’t hold a lot of intrinsic weight on this particular blog, but — as I am *not* an economist by training, whether micro or macro or heterodox — I found it reassuring to read that I was thinking along the same lines as a leading macroeconomist. Also, just a strange coincidence that I’d happen to read that passage after making the same argument here!

      • #15 by Roman P. on March 5, 2014 - 8:26 pm

        rabidaltruism,

        Thanks for answering! Well, I’ll go point by point:
        1) Certainly, we could save ourselves major trouble by only explicitly modelling things that can be modelled. There are some economic interactions that are equilibrium behavior and so are well described by mainstream micro theory. I like to think of them as small thought experiments, and they are not bad in that role: I like Menger’s model of the trading of horses, which Varian remade as a demand-and-supply analysis of the housing market.
        But only picking the ‘easy’ interactions already defeats the sole purpose of having microfoundations, that is, building macro on them. How do we aggregate unknown entities? It’s like trying to arrive at the laws of thermodynamics by studying the behavior of individual molecules, but deciding that we’re going to pursue a theory where molecules only ever hit each other at right angles. We won’t get any useful thermodynamics, despite it maybe being really internally consistent.
        2) There are problems with aggregating in general. Even if we completely know all the rules of how an individual agent, or a pair of agents, behaves, we don’t really know a priori how an economy consisting of such agents will behave. Arrow and Debreu proved a result in general equilibrium theory that under some very strict rules there could indeed be a general equilibrium. Later, in the 1970s, Sonnenschein, Mantel and Debreu mathematically proved that even under those stringent rules, if we have different agents and goods in the general equilibrium model, their very well-behaved individual demand curves need not aggregate into a similarly well-behaved aggregate demand curve. The GE theory result was, apparently, ‘whatever’. That stuff is omitted from even the most advanced micro textbooks, curiously.
        My point is that Arrow and Debreu used the most in-equilibrium model conceivable, in which all agents knew the future and all trades happened right at the start as if guided by God himself. And still, its result was ultimately disappointingly vague – ‘anything goes’. What is going to happen as we get closer and closer to the real world, with irrational agents and interactions happening in a physical environment in historical time? Nothing good for our models, I think. There are just some limits to what we can do: in physics a seemingly simple three-body model is unsolvable analytically. I don’t think economics somehow has it easier.
        3) Sure, we can think in terms of somehow-defined utilities, but it’s not really all that useful. Either we have a good underlying model of how a human operates – and as we discussed, that is practically impossible – or utility-based models become so vague they are useless. Even if we could make meaningful propositions concerning the behavior of agents using theories of utility, no one guarantees they will be true. This is similar to representative agent theories – where theorists propose we can substitute multiple agents with one who in aggregate behaves like them, and so sidestep the SMD problem. Yet just because this assumption makes models tractable, it doesn’t make them good. Models, most of all, must resemble the original; this will almost never be the case if we only pick nice phenomena and go on with ‘as if’ assumptions.

  4. #16 by Boatwright on February 4, 2014 - 4:21 am

    There is a set of assumptions held by the defenders of capitalist political ideology. Ironically, most of them are empirically weak or downright false. Examples: Markets regulate themselves because markets always naturally reach equilibrium. Capital flows are always efficient, with competition inevitably resulting in the best outcome. Sound banking only requires the functional equivalent of the gold standard or a real bills doctrine. Any and all economic planning and business regulation is a priori not only unnecessary but actually destructive. Etc., etc. One hears and reads this sort of nonsense every minute of the day — pronounced as fact by politicians, so-called economic “experts”, and journalists. The inevitable conclusion is that unfettered capitalism is the sure and only path to the best of all possible worlds. This set of beliefs is so ingrained that many who would see a better way to organize ourselves socially find themselves not knowing where to begin. The ideas of Marx and other critics of capital are rejected out of hand.

    Meanwhile, the capitalist cycle of accumulation and oligopoly, with ever-growing exploitation and impoverishment, continues unquestioned.

  5. #17 by Roman P. on February 4, 2014 - 6:48 am

    I don’t know how anyone can continue to aggregate K in good faith sixty years after Leontief. Compared to physics, it’s as if the best academic journals for physics nowadays still had discussions that started with “assume the atoms are like plum puddings inside”.

    I think that the major problem of economics, regardless of the orthodox/heterodox schism, is not giving enough attention to the technological structure of production at the level intermediate between the atomistic economic agent and the whole economy. I’ve seen the term ‘mesoeconomics’ for that level of analysis, I think.

    • #18 by Unlearningecon on February 7, 2014 - 12:43 pm

      Sadly, heterodox economists do this a lot too…

  6. #19 by Jon Cloke on February 4, 2014 - 1:42 pm

    The Nitzan and Bichler book is exceptionally interesting in opening out economic thought to the possibilities inherent in capital-as-power, but it also points out weaknesses in other academic disciplines as they try to grapple with globalizing capitalism; the central flaw in all of them (financialization in geography, critical capitalism in political economy, etc.) is that they take capitalism at face value and try to explain/analyse it using many of its own precepts and much of the language of orthodox economics – as Ian Bruff (2011) puts it, capitalism itself is the ‘gorilla in the room’ which no-one talks about.

    At a conference in Helsinki in 2011, under the banner ‘Dictatorship of Failure’, I floated some work I’ve been doing as a geographer (the paper is available online (free) – http://hdl.handle.net/10138/41990) on the necessity of starting to see capitalism as a chaotic, complex, evolving organism rather than, as Larry Summers put it, the framework for a set of laws that work everywhere, ‘like engineering’. I make particular reference to the ICT revolution that has given capitalism a different dimension through the explosion of possibility inherent in cyber-spaces, which some of you may find interesting… happy to discuss this further with anyone interested!

  7. #20 by Leland LeCuyer on February 4, 2014 - 3:25 pm

    I won’t pretend to understand the Cambridge-MIT Capital Controversies. I don’t. But insofar as this unresolved ideological debate centers on how important capital is within capitalism, I hear echoes of a much larger deprecation of capital, namely an ignoring of the vital importance of natural capital or, as it is more commonly called, natural resources. Without natural resources there is no economy. Nor can there be. This leads to what ought to be the first principle of economics and the teaching of economic theory: that economics is a subset of ecology.

    • #21 by Boatwright on February 4, 2014 - 4:03 pm

      Exactly!

      If economic assumptions, theories, and models do not start with an understanding of more fundamental sciences, such as ecology – which is itself built on the foundations of evolutionary biology, thermodynamics, etc. – they will always be epistemologically flawed.

      One such assumption is the infinite substitutability of resources. The market is always able to find an equilibrium response to variations in the supply of commodities. If we run out of cheap oil, the market will handily produce a replacement. The capitalist marketplace will always find the best answer to problems of over-population, resource depletion, etc. This is ecological and thermodynamic nonsense.

      Economists can build all the castles they like. But no matter how elegant their turrets and crenelations, without consistency with established science, such castles are standing on nothing more than hot air.

      • #22 by notsneaky on February 4, 2014 - 11:01 pm

        Ecology actually looks a lot like economics. It uses the same functional forms, it ignores the same aggregation problems and sneaks in optimization through the back door. It’s got its own problems (for example I’ve never been able to find a coherent and non-trivial definition of terms like “carrying capacity” or “subsistence level”). They make up some crazy stuff too, trust me. And I don’t mean that in any political way. Basically they do it because modelling complex systems is hard, regardless of whether you’re an economist or an ecologist.

        And no, economists do NOT assume “infinite substitutability of resources”. And in fact that is not necessary for most results. For us to “not run out of oil” (not sure what that means, so I’m implicitly guessing here) we just need that there’s no *infinite complementarity*. And “no infinite substitution” does not imply “infinite complementarity”.
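        To illustrate that distinction with a toy CES production function (purely illustrative numbers, not anything asserted in the comment above): when the elasticity of substitution is above one, output stays bounded away from zero even as the resource input shrinks towards zero, so the resource is not ‘essential’ even though substitutability is far from infinite; when the elasticity is below one, output collapses along with the resource.

```python
# Toy CES illustration (hypothetical numbers):
#   Y = (0.5 * K**rho + 0.5 * R**rho) ** (1 / rho),
# with elasticity of substitution sigma = 1 / (1 - rho).
# "Essential" means Y -> 0 as the resource input R -> 0.

def ces(K, R, rho):
    return (0.5 * K ** rho + 0.5 * R ** rho) ** (1.0 / rho)

K = 100.0
cases = [(0.5, "sigma = 2.0 (substitutes)"),
         (-1.0, "sigma = 0.5 (complements)")]

for rho, label in cases:
    for R in [10.0, 1.0, 0.01]:
        print(f"{label}: R = {R:6.2f} -> Y = {ces(K, R, rho):8.3f}")

# With sigma > 1, Y approaches a positive limit (25 here) as R -> 0: the resource
# is not essential, yet nothing like 'infinite substitutability' was assumed.
# With sigma < 1, Y falls towards 0 as R -> 0: the resource is essential.
```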

  8. #23 by Robert on February 4, 2014 - 7:13 pm

    Some literature connects the Cambridge Capital Controversy with the theory of natural resources. I think of Richard W. England’s “Production, Distribution, and Environmental Quality: Mr. Sraffa Reinterpreted as an Ecologist” (Kyklos, 2007) and the last chapter of Bertram Schefold’s Mr. Sraffa on Joint Production and Other Essays (Unwin Hyman, 1989).

    Natural resources are included either as a special case of joint production or by analyzing land as a second unproduced input, like labor. Ian Steedman has many ‘paradoxical’ examples for the latter. I have examples for the former, e.g., http://robertvienneau.blogspot.com/2009/07/now-judge-i-had-debts-no-honest-man.html.

  9. #24 by Ramanan on February 4, 2014 - 8:04 pm

    … Moreover, the production function has been a powerful instrument of miseducation. The student of economic theory is taught to write O = f(L,C) where L is a quantity of labour, C a quantity of capital and O a rate of output of commodities. He is instructed to assume all workers alike, and to measure L in man-hours of labour; he is told something about the index-number problem involved in choosing a unit of output; and then he is hurried on to the next question, in the hope that he will forget to ask in what units C is measured. Before ever he does ask, he has become a professor, and so sloppy habits of thought are handed on from one generation to the next.

    - Joan Robinson

  10. #25 by notsneaky on February 4, 2014 - 10:50 pm

    ” If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational”

    You’re confusing results (which may or may not have real world relevance) with assumptions. The Laffer curve tells us that at very high tax rates tax revenue will fall with further tax rate increases. That is probably of no practical relevance though as the tax rates required to produce such an effect are simply not observed in practice. Throw out all economics!

    (also, if the CCC applies to capital then it also applies to labor (can’t measure quantity of labor either), and to output (can’t measure GDP, and it makes no sense to say “we’re in a recession”). Robinson & Co were obsessed with capital because they were arguing in the shadow of Marx, but as some Neo-Marxians showed (and Sraffa before them too actually, iirc) there’s really nothing special about capital when it comes to the aggregation problem)

    • #26 by Unlearningecon on February 4, 2014 - 11:33 pm

      First, I don’t disagree about labour and GDP. I think understanding them is an important area that needs more attention devoted to it (which is what the authors of this book are doing with capital; they also criticise the Marxian notion of abstract labour based on its undefined units, a criticism I agree with). It’s also worth noting that Ramanan’s quote above shows Robinson et al. were acutely aware of this problem.

      Second, I’m not sure I agree reswitching is a ‘result’. It seems to me to be a mechanic that arises from the assumptions of the model, but not a prediction in itself: the predictions of a model are surely the results for the dependent variables, which reswitching would imply have multiple equilibria. We might therefore compare reswitching to something like the Euler equation (which is derived rather than assumed) and ask how realistic that is, too.

      Basically, although my original comparisons to market clearing etc were a bit crap, I still think the point is valid in that if we’re examining some key mechanics of models, we should also be examining others.

      (By the way, with Laffer I’d suggest the curve itself is in fact a hypothesis that is of little use and so should be thrown out.)

      • #27 by notsneaky on February 5, 2014 - 3:33 am

        A “mechanic that arises from the assumptions of the model” is a result. Just substitute the word “results” for the word “arises” in that sentence and you’re there. It’s a prediction of a possibility.

        (By the way, with reswitching I’d suggest that it itself is in fact a hypothesis of little use and so should be thrown out. We could meet half way here)

      • #28 by Magpie on February 6, 2014 - 7:02 am

        ” If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational”

        I agree with notsneaky here, although the thing is rather subtle.

        Regardless of the empirical considerations, we can put things this way:

        Due to the axiomatic/deductive nature of their theories (as you mentioned somewhere else), economists start from a set P of premises (or assumptions or axioms) and deduce a set C of conclusions or theorems.

        “Reswitching is possible” is a theorem, a conclusion (a result, notsneaky calls it): an element of C, the second set. If we don’t like the result, we need to identify and change the assumptions responsible for it.

        “Agents optimise”, “markets clear” and “expectations are rational” are themselves assumptions: elements of P, the first set. If we don’t like any of these assumptions, we can change them; but we’ll need to accept a different set of conclusions.

      • #29 by Unlearningecon on February 7, 2014 - 12:50 pm

        Let me first disown my original examples, because you are right about them. Although Michael John’s quote about Joan Robinson below puts what I said much better.

        In any case, I think there are more than P and C: there is also a set I of intermediate properties, which are derived from the premises but aren’t actually conclusions. In my opinion reswitching, the Euler equation, and the SMD theorem fit this description. A major tension between heterodox economics and mainstream economics – at least rhetorically – seems to consist in whether or not these elements are held up to scrutiny. From what I know the Euler equation is pretty weak empirically, and at least one economist defends it on the grounds that “all models are wrong!”
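        For anyone who hasn’t met it, the consumption Euler equation being referred to is the first-order condition of a standard intertemporal optimisation problem; in its usual textbook form (notation illustrative, not anything derived in this thread) it reads:

```latex
% Standard consumption Euler equation (textbook form; notation is illustrative):
% the marginal utility lost by consuming a little less today equals the
% discounted, return-adjusted expected marginal utility gained tomorrow.
u'(c_t) = \beta \, (1 + r_{t+1}) \, \mathbb{E}_t\!\left[ u'(c_{t+1}) \right]
```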

    • #30 by srini on February 5, 2014 - 1:48 pm

      With your moniker, I hope you are not sneaking away from the fact that capital aggregation problems go way beyond “reswitching.” Even if reswitching were proved to be empirically irrelevant, the capital aggregation problem and the invalidity of the aggregate production function would still hold theoretically for a number of other reasons.

      • #31 by notsneaky on February 6, 2014 - 12:52 am

        Not sure your conclusion follows from your premise. It could very well be that individual micro-level production processes are funky and the substitution possibilities, to the extent they exist, weird, yet at the macro level stuff evens out nicely so that an aggregate production function provides a perfectly useful description of the economy. Solow and Fisher ran some simulations once which showed that this was a pretty likely case, couldn’t quite figure out why, scratched their heads and went back to work on more important stuff (well, Fisher didn’t, he went back to hating on aggregate production functions).

        (Alan Kirman made the same argument with respect to preferences and the SMD theorem)

        And part of the point is that aggregation poses problems not just for marginalist theory, but for any macro-type theory (and you really can’t do robust micro without doing macro – general equilibrium effects can matter – so throw micro in there too). The capital paradoxes are brought up in the context of marginalism because that’s the theory that is dominant. But that just means that other “heterodox” theories have never been subject to the level of scrutiny that is devoted to criticizing the mainstream. So some folks happily point out problems with mainstream macro, while simultaneously completely ignoring the fact that their arguments apply as much to their own pet theories (I guess one can exclude here those people who’ve devoted their professional lives to being simply “counter-example generators”)

        BTW, this made me re-read the old post and discussion on Nick Rowe’s page:

        http://worthwhile.typepad.com/worthwhile_canadian_initi/2012/08/switches-reswitching-capital-food-and-swallows.html#comment-form

        Nick’s and commentator david’s comments are particularly insightful, along with a few others. (Sorry if someone linked this already)

      • #32 by rabidaltruism on February 6, 2014 - 1:42 am

        Can’t seem to reply directly to notsneaky’s latest post – I guess there are limitations to how many horizontal shifts WordPress will allow in replies? Anyway, I was wondering if you had a reference (or several!) for this, notsneaky:

        “Solow and Fisher ran some simulations once which showed that this was a pretty likely case, couldn’t quite figure out why, scratched their heads and went back to work on more important stuff”

        Not a challenge, mind you—I’d legitimately be interested in reading that.

      • #33 by Unlearningecon on February 7, 2014 - 12:51 pm

        If you click reply on the comment above where the ‘thinnest’ thread starts, it will appear below the last comment on the thin thread.

        Hope that made sense.

      • #34 by rabidaltruism on February 6, 2014 - 2:41 am

        Also—that Nick Rowe blog post was fantastic. Thanks for that!

      • #35 by notsneaky on February 6, 2014 - 4:20 am

        (Not sure how this reply will show up – WordPress is a mess)

        Here’s one version of the paper by Fisher, Solow and Kearl. There are a couple of related ones out there, one of which I think I linked to in one of the previous UE posts.

        http://dspace.mit.edu/bitstream/handle/1721.1/63268/aggregateproduct00fish2.pdf?sequence=1

      • #36 by Luis Enrique on February 6, 2014 - 1:48 pm

        notsneaky would you be so kind as to email me at

        luisenriqueuk at the g mail

    • #37 by srini on February 5, 2014 - 1:56 pm

      Yes, aggregation is a problem, mostly for marginalist theory. All “real” quantities are fiction–only the nominal is real! And there is no issue with aggregate nominal dollar values. Deal with inflation by scaling with a nominal quantity, such as GDP. You have a problem only if you want to have microfoundations.

  11. #38 by allis on February 5, 2014 - 7:17 pm

    Nitzan and Bichler raise interesting questions in Capital As Power.
    The most interesting questions in any culture are the questions that are never asked. During the Middle Ages, Scholastics debated the nature of God, but never asked the question: Is God? Today economists debate the nature of Capital (and Money), but never ask the questions: Is Capital? Is Money?

  12. #39 by Michael John on February 6, 2014 - 4:00 am

    On Ferguson’s “practical importance” argument, Joan Robinson argues that “[n]othing could be more idle than to get up an argument about whether reswitching is ‘likely’ to be found in practice.” Not only does a pseudo production function not exist “in reality” but also it would not be possible to move along it to pass over switch points. “[For] there is no such phenomenon in real life as accumulation taking place in a given state of knowledge.”

    Quoted from G. C. Harcourt and Prue Kerr, “Joan Robinson” p.104

    • #40 by Unlearningecon on February 7, 2014 - 12:53 pm

      As always, Robinson puts it better than I could.

      • #41 by notsneaky on February 10, 2014 - 3:50 am

        What does that last sentence mean?

      • #42 by Unlearningecon on February 10, 2014 - 1:23 pm

        I would say it means that accumulation implies A is changing in some sense.

  13. #43 by Ben B (@likeasecret) on February 6, 2014 - 6:24 am

    Honestly, after studying through MWG I don’t really think that the disaggregation technique is all that bad: you build up a fairly rigorous theory of how production works in the perfectly competitive world, one that can provide support for what motivates profit-maximizing behavior as the optimal choice in this world.

    I think this is a significant and important result that economists (both orthodox and heterodox) maybe undervalue – it says that in a world without cooperation, without issues of wealth distribution, where everything is frictionless, doing what is best for you is actually socially optimal. The significance of using this as a baseline is then to see whether the welfare theorems still hold when you start to relax these assumptions. And oftentimes they don’t.

    • #44 by Unlearningecon on February 7, 2014 - 12:58 pm

      I’ve seen this defence before, and it makes economic theory seem like more of a rhetorical device used for the purposes of political philosophy than really ‘science’. We’ve already made value judgments to be able to call the baseline ‘optimal’. Now we’re judging reality as a deviation from this baseline to try and make ‘positive’ statements about the economy? I’m trying to think of an analogy with engineers using a perfect gas as a baseline and asking what makes gases ‘deviate’ from this – it just seems absurd.

  14. #45 by costata001 (@costata001) on February 6, 2014 - 7:30 am

    Thank you for the link to this book. I have downloaded it. I have been looking for a book that trawls through the different schools of capital theory. Judging by the contents pages of this book it may contain what I’m looking for.

    Cheers!

  15. #46 by srini on February 6, 2014 - 5:37 pm

    I think there is a general obfuscation of the issues here. The first argument is that aggregation is a problem for all macro theories. Perhaps, but it strikes at the heart of the microfoundations revolution. If you cannot get from micro to macro, the whole of the last thirty years of macro is basically illegitimate. There is no way around it. Second, you cannot argue that even if micro behavior creates funky curves, the macro aggregate is well-behaved, because the reason for the well-behaved aggregate is then clearly not the microfoundations. It is something else. Bottom line: there is no way to rescue DSGE. There is no pussyfooting around that.

    Let us take the other argument – aggregation problems in things other than capital. Yes, but capitalism is about the accumulation of capital – other factors are distinctly secondary. So misunderstanding the nature of investment, and what motivates it, is fatal.

    Last, one can build macro theories at the aggregate level by using nominal quantities and scaling them with appropriate nominal quantities. There is no aggregation problem with nominal dollar values!

    • #47 by notsneaky on February 7, 2014 - 1:40 am

      It does have implications for the microfoundations revolution, but it does not follow that the “last thirty years of macro is basically illegitimate”. What it means is that one way or another, whatever macro approach you want to take, you’re gonna have to bite the bullet and make some strong assumptions. You can’t have an “uber-general” model because then you’ll get the result that “anything can happen”. There’s a meta-theorem here: any sufficiently general theory will produce either “anything can happen” results or “impossibility” results. It’s just that mainstream econ has “gone there”, into the forbidden zone (to steal from one of Nick Rowe’s commentators), while lots of the heterodox haven’t.

      I don’t like DSGE, but not really for the microfoundation reasons. Basically, IMO, the average DSGE model is 85% signaling (I can do complicated math!) and 15% insight. If that. Still, 15% > 0%, and often > the alternatives. I’d like to see its uses limited and to see it coexist with other approaches, rather than be eliminated.

      And macro that deals exclusively with nominal quantities is really of no use to anyone.

    • #48 by rabidaltruism on February 7, 2014 - 2:06 am

      I honestly don’t really see how there was anything forceful at all in the original post as a critique of explicitly modeling micro-macro connections. That’s not where the reswitching issues and CCC come from, unless I’m totally confused; rather, those issues arise from theoretically possible and empirically sometimes-maybe-there non-monotonicities in the interest rate’s impact on the relative desirability of two or more technologies.

      But if you explicitly model the micro-macro connection, then as far as I can see you don’t need a unique language of capital intensity, or the ability to write down aggregate capital in some coherent units. You just model individual behaviors at some appropriately disaggregated level and can then study the aggregate outcomes of that, being careful to define ‘aggregate outcomes’ in a way not undermined by the CCC.

      Really the only bit of the original post that seems to forcefully object to all micro-macro modeling is the last bit — the part about ‘modeling the whole system of capital’ — but that is the least compelling piece of the post. It’s more an article of faith than an argument.

    • #49 by Robert on February 7, 2014 - 12:18 pm

      I’m with srini and Magpie. A lot more than reswitching is at issue. Nobody, for example, has ever found a set of interesting assumptions on technology to ensure that Price Wicksell Effects and Real Wicksell Effects always go in the “non-perverse” direction. Competent neoclassical economists (e.g., Edwin Burmeister) know this.

      Sraffa effects can be shown in models with multiple types of labor and multiple types of land. Ian Steedman has produced any number of such examples. Furthermore, when people buy, for example, a corporate bond, they are acquiring an income stream; they do not care about what concrete capital goods are used in generating that stream. So I do not take the point about aggregating labor and land.

      Not only are the microfoundations of (mainstream) macroeconomics a joke, so are many of the supposedly applied microeconomic stories told by mainstream economists.

      • #50 by notsneaky on February 8, 2014 - 5:46 am

        This doesn’t matter. From what I understand (and if I don’t, it also doesn’t matter), UE used the term “reswitching” as a blanket term for all possible capital-related paradoxes – actual reswitching and Wicksell effects (strictly speaking incorrect, but we went with it). The objection is still the same: just because you showed that something CAN happen in a general enough model does not mean that IT WILL happen in the real world. Same as the Laffer Curve. All these arguments commit a basic logical fallacy: from “if not Q, then sometimes not P” and “not Q”, they conclude “always not P”. It doesn’t work that way. Like I said, you can devote your professional life to being a “counterexample generator”, like Steedman, and there’s a positive value to such exercises. But the positive value of such exercises does not mean that other exercises, which assume not P, are value-less (“a joke” in your terminology). All we know is that sometimes not P and sometimes P. We could stop there and say “sometimes not P and sometimes P, it’s probably the Aliens”. Or we could say, ok, let’s assume Q and see what P looks like.

        Alternatively, we could assume not Q, but R, and then get P′. But that’s what heterodox theory is completely lacking in. Worse, they seem completely oblivious to the fact that not Q also implies not always P′ (the conclusion they want).

        It’s the Underpants Gnomes Argument. Capital paradoxes! … … … Socialism! What’s the “… …. …”? All you’ve shown is that in a general enough model “anything goes”. Thanks. We sort of knew that already. Anything else?

  16. #51 by notsneaky on February 8, 2014 - 6:21 am

    If you want to understand the importance and relevance of the Cambridge Capital Controversy, here it is, by analogy:

    The other day I got into an argument with some Random Guy On The Internet about the Thirty Years War.

    The Random Guy On the Internet was saying some crazy things about the Thirty Years War. I disagreed.

    In my discussion with him, in laying out my argument I slipped and said something which wasn’t “strictly true”. As in, it could be true or it could not be true.

    The Random Guy On the Internet, a smart but somewhat obsessive person, picked up on that and harped on it: “What you said isn’t strictly true. I can provide a counterexample!”

    Initially I was (wrongly) defensive. I tried to argue that it was in fact “strictly true”. We argued and argued and argued and it got pretty involved. At the end of the day it turned out that he was right, that particular something I said wasn’t “strictly true”. As in, it could be true or it could not be true. At that point I also realized that the fact that it wasn’t “strictly true” was because if you think about the Thirty Years War enough, pretty much anything could be true. The Aliens came down and won the battle of White Mountain for the Hapsburgs.

    I was pretty exhausted by the discussion at that point. I told him “you’re right, that one particular something I said is not ‘strictly true’”. Then I went away and worked on some stuff which was more important; there’s only so much internet argument you can handle before you run into negative marginal returns.

    The other guy saw it differently. Ever since then he’s been dancing around the internets, doing a little jig, repeating “even notsneaky admitted that what he said was not strictly true! I win! I win! I win!” Strangely, he thought that because I was wrong on some particular thing which turned out not to be “strictly true”, that PROVED he was right about the original crazy things he was saying. Weird, I don’t understand people who think that way, but that’s the world (especially the internets).

    Now. The last part – that’s basically the “Samuelson admitting he was wrong about the CCC” story that gets trotted out every time the CCC gets mentioned on the internets (I know, I’m being quite presumptuous here). Every time I read about the CCC I go back and re-read some of the original papers. And I’m pretty sure I got it right. Samuelson was wrong, and he basically said he was wrong, mostly because he was bored with a debate that involved arguing about stuff which didn’t really matter anyway. I.e., he realized that he was being trolled by the British Cambridge folks (of course they were right on that particular point, and yes, they themselves believed that what they were arguing about was super important (but that’s on them)). The economics profession has been right in basically following Samuelson in not worrying about it anymore. There are better things to do.

    • #52 by srini on February 8, 2014 - 2:42 pm

      Yup, basically call the other guys trolls and expect that to settle the debate. Way to go. BTW, who is being sneaky here? You go on and on about the CCC when you know full well it is about more than that. I think Joan Robinson, despite her fatuous fascination with Marxist economics, actually had more insight in one small book than Lucas and company combined. And she did not need to “signal” with math, because her reasoning was sound. On the other hand, we have Nobel laureates arguing the Great Depression was caused by people taking an extended vacation. Yes, the economics profession has moved on – moved on to become more stupid.

      • #53 by Unlearningecon on February 8, 2014 - 5:33 pm

        I’m just going to quote Dean Baker, as I think he sums up this debate best:

        Unfortunately, these debates were sidetracked into a narrow and largely irrelevant discussion of the possibility and likelihood of “re-switching,” a story where a production technique flips from being less capital intensive to more capital intensive as the interest rate rises or falls.

        From my perspective the main takeaway from this debate is that there is no measure of capital that is independent of its price. How do we compare a steel mill, the latest supercomputer from IBM, the software produced by Google and the method for producing a lifesaving cancer drug whose patent is owned by Pfizer? Are we going to weigh each one, takes its volume? There is no measure of capital apart from its price.

        It’s hard to argue that neoclassical theory has illuminated our understanding of capital. The attempt to measure capital by its price is logically and conceptually flawed – I’d add ‘empirically’, but to be honest the production function is closer to Not Even Wrong than false. Yet truly understanding capital is important, and that is what the authors of this book are trying to do.
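
        For readers who haven’t seen reswitching spelled out, here is a minimal numerical sketch of the standard two-technique example (the dated labor inputs are the usual textbook numbers, in the spirit of Samuelson’s 1966 “A Summing Up”, not anything taken from this thread): each technique is just a stream of dated labor inputs, and the cheaper technique flips from A to B and back to A as the interest rate rises.

```python
# Standard two-technique reswitching illustration (textbook numbers, not drawn
# from the post): technique A uses 7 units of labor two periods before output;
# technique B uses 2 units three periods before and 6 units one period before.

def cost(labor_by_lag, r):
    """Cost of one unit of output at interest rate r, with labor inputs
    given as {periods before output: labor units}."""
    return sum(l * (1 + r) ** t for t, l in labor_by_lag.items())

technique_a = {2: 7}
technique_b = {3: 2, 1: 6}

for r in [0.25, 0.50, 0.75, 1.00, 1.25]:
    a, b = cost(technique_a, r), cost(technique_b, r)
    cheaper = "A" if a < b else ("B" if b < a else "tie")
    print(f"r = {r:.2f}: cost A = {a:6.2f}, cost B = {b:6.2f} -> {cheaper}")

# With these numbers A is cheapest below r = 0.5, B between 0.5 and 1.0, and A
# again above 1.0; the switch back to A at high r is the 'reswitching'.
```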

      • #54 by notsneaky on February 9, 2014 - 12:27 am

        Not all of the debate, but a significant portion of it definitely seems to have had that aspect. Again, at the end of the day – what are the practical implications of the CCC? Basically, none. That’s how I see the debate. You’re of course free to hold a different opinion.

        And you can trot out Prescott if you want, but that’s an n=1 observation: he did not get the Nobel prize for his views on the Great Depression but for methodological innovations, and pretty much the vast majority of the profession disagrees with him on the Great Depression and many other things. In other words, nice strawman, but sorry, no cake.

      • #55 by notsneaky on February 9, 2014 - 12:31 am

        And I don’t have a problem with Dean Baker (in fact I like most of his stuff).

        If the production function is “not even wrong”, then how come it can be used to predict output and income shares pretty well? Why does it fit the data? Note that neither the income shares nor the data are constructed from production functions; they are measured independently. Something’s there, even if we can’t quite figure out what.

  17. #56 by srini on February 9, 2014 - 4:01 pm

    Notsneaky,

    Either you are not aware or… The production function fits the data because… see Shaikh, Fisher and Felipe.

    BTW, taking a vacation was not just Prescott but Lucas as well. And it is a pretty big deal. The Great Depression was the watershed event – the one that gave the impetus for macro as a separate field – and that is the explanation they offer for it. It is not n=1, it is n = everything.

    Now let us take the methodological innovations: DSGE, calibration – BS. Time inconsistency – not original; again, do some reading. Hodrick-Prescott filter – BS. Sum total: either wrong or not original.

    Let us not try being sneaky here…

    • #57 by rabidaltruism on February 9, 2014 - 4:29 pm

      Solow’s 1974 reply to Shaikh’s original HUMBUG article demonstrates, using Shaikh’s data, that with an appropriate test the hypothetical HUMBUG data set would disconfirm the aggregate production function. Is there a rejoinder (to Solow’s rejoinder) of which I’m unaware? (ref: http://digamo.free.fr/solow74.pdf )

      Or do Fisher and Felipe maybe make a different critique from that of Shaikh? I’ve never seen or read any of their stuff.

      • #58 by srini on February 9, 2014 - 9:31 pm

        Yes, please read Felipe and McCombie; they demolish Solow. Actually, there is nothing to demolish, because Solow’s refutation was humbug. Typical of the mainstream: they went by authority and decided that Solow had brushed away some pesky arguments.

    • #59 by rabidaltruism on February 9, 2014 - 9:57 pm

      I might just be misunderstanding, but Solow’s reply didn’t look like an appeal to authority to me; he worked out a specific example, using the same data as Shaikh, showing that the aggregate production function could be falsified if you used the testing strategy he advocated, as opposed to the one used in the Shaikh paper. I don’t mind reading extra material for my own sake, but some explanation of what exactly you find lacking in Solow’s counterargument, or of what counterarguments have been raised to his rejoinder, would be nice.

      • #60 by Unlearningecon on February 9, 2014 - 11:11 pm

        I too thought Solow got the better of Shaikh, but apparently I missed something of a debate and need to do more reading. I searched and found this paper surveying debates about the production function, which contains the following passage in a footnote:

        Throughout Shaikh’s articles on the HUMBUG function has been the subtext of a debate with Robert Solow. The original 1974 RES article, published as a “Note”, was followed by a curt dismissal of Shaikh’s argument by Solow. Marjorie Turner (1989, 195-6) has an account of this from which the following remark by Joan Robinson to Alfred Eichner is taken:

        “I suppose you know that Shaikh’s Humbug article was published in Review of Economic and Statistics as a note not an article, and that Solow’s reply was not shown to Shaikh…nor was he given the usual right of replying. This is a clear case of bias in the journals and I think you should make the maximum fuss about it. Solow’s reply is evasive, silly and abusive as usual” (Robinson to Eichner, 21 June 1974; quoted in Turner 1989, 196).

        Shaikh (1980) contains both an extension of the original model as well as a reply to Solow’s 1974 critique. Solow (1987) is a response to Shaikh (1980), to which Shaikh (2005) responds, although Shaikh does not directly address many of the issues in Solow (1987). The latter is much more thoroughly done in McCombie (2001).

        I’d buy Felipe and McCombie’s latest book on the production function, but it’s prohibitively expensive.

      • #61 by rabidaltruism on February 10, 2014 - 3:16 am

        I didn’t realize they’d gone back and forth so many more times! I guess I have some reading to do.

        And good God, prohibitively expensive is right! Even allbookstores.com can only find it for $116. Well, I’ll grab it through our library if my interest is still piqued after reading the sequence of papers.

      • #62 by notsneaky on February 10, 2014 - 4:20 am

        Sorry to say, but Solow here is perfectly right and Shaikh and others are simply missing the point. Yes, you can always fit a Cobb-Douglas production function to data (you can actually fit whatever the hey you want to data). You can even get coefficients with the right sign – exactly because of the income identity.

        But there’s nothing here which guarantees that the fit will be good (that’s Solow pointing out – too nicely IMO – that yes, you can estimate a CD relationship even for data that, when plotted, spells out “Humbug”, but unsurprisingly and obviously the estimates will be, and are, crap) or that the income shares estimated from a CD production function will be anywhere near the actually observed (and independently measured) income shares.

        Think of it this way. If it were always possible – because of the income identity – to fit a CD production function to the data and get “good results”, then the data would never reject the CD production function. That’s what Shaikh et al. are saying – that the data will never reject such a function. They’re wrong: it can happen, which shows that they’re missing the point.

        Let’s say aggregate production functions do exist, but that the actual aggregate production function is not Cobb-Douglas. It’s CES or translog or some other functional form. There are constant returns to scale at the aggregate level, so the income identity holds (that’s not even necessary except for the purposes of some measurement issues). For the sake of argument, let’s say that the actual share of labor in output is 2/3 and the elasticity of substitution between labor and capital is 1/4. If I estimate a CD relationship with this data I will get a coefficient on ln L that’s > 1 (in absolute value), which would imply that the share of labor in income is greater than one – an impossibility. Right there, I know that the production function cannot be Cobb-Douglas. So it’s simply not true that a good fit for a Cobb-Douglas production function can always be obtained just because there’s some income identity involved. (And if you think this is a contrived example, try estimating some production functions for pre-industrial economies or for low-income countries today.)
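
        To make the exercise described above concrete, here is a minimal sketch: it generates synthetic data from a CES technology with an elasticity of substitution of 1/4 and a labor share of 2/3 at K = L, then fits a log-linear Cobb-Douglas by OLS. The input distributions, the absence of noise, and the sample size are my own illustrative assumptions, so whether the fitted exponent on ln L actually lands outside the unit interval depends on those choices; treat it as a way to experiment with the claim rather than a verification of it.

```python
# Sketch: generate data from a CES production function and fit a Cobb-Douglas
# relationship by OLS. Parameter values and data-generating choices are
# illustrative assumptions, not taken from the comment above.
import numpy as np

rng = np.random.default_rng(0)
n = 500
sigma = 0.25             # elasticity of substitution between K and L
rho = 1.0 - 1.0 / sigma  # CES exponent; sigma = 1/4 gives rho = -3
alpha = 1.0 / 3.0        # distribution parameter on capital (labor share 2/3 at K = L)

# Inputs with substantial variation in the capital-labor ratio.
K = np.exp(rng.normal(0.0, 0.5, n))
L = np.exp(rng.normal(0.0, 0.5, n))

# CES output, with no noise, to isolate the functional-form issue.
Y = (alpha * K**rho + (1 - alpha) * L**rho) ** (1.0 / rho)

# Output elasticity of labor at each observation (equal to the labor share
# under marginal-product pricing).
labor_share = (1 - alpha) * L**rho / (alpha * K**rho + (1 - alpha) * L**rho)
print("true labor share: mean %.2f, range [%.2f, %.2f]"
      % (labor_share.mean(), labor_share.min(), labor_share.max()))

# Impose Cobb-Douglas: regress ln Y on ln K and ln L.
X = np.column_stack([np.ones(n), np.log(K), np.log(L)])
coef, *_ = np.linalg.lstsq(X, np.log(Y), rcond=None)
print("fitted Cobb-Douglas exponents: K = %.2f, L = %.2f" % (coef[1], coef[2]))
```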

        It might be true that production functions don’t exist, but that whole HUMBUG PF argument does not show that in the least, and it is a pointless distraction in the argument.
