Archive for July, 2012

A Couple of Criticisms of Libertarianism

I have a couple of thoughts on libertarianism that I can’t manage to squeeze a whole post out of. So, well, here they are.

Institutionalised law breaking

A major problem I have with the (minarchist) libertarian approach to law enforcement is that it fails to take repeated and systemic violation of laws into account. Libertarians, generally speaking, think that the state should prevent ‘force, theft and fraud’, but they don’t seem to think this through: these three things are incredibly pervasive and do not only occur as isolated incidents that can be prosecuted on a case-by-case basis. When discussing problems with capitalism, libertarians seem to presuppose a virtually infallible police state where all the problems with regulatory capture melt away and any violations of these three are ‘outside’ the libertarian ideal.

The libertarian blind spot on this point can be seen in Milton Friedman’s view of corporations: for Friedman, corporations have no social responsibility except to maximise profit whilst ‘playing by the rules.’ But Friedman failed to realise that corporations are just as happy to work around whichever ‘rules of the game’ happen to be in place as they are to work around the regulations he disapproved of. Moral considerations tend to melt away under competitive conditions, when things become ‘just business.’ Corporations have a long history of force, fraud and theft, and, as abstract entities, these things simply don’t factor into their considerations. In a system based on private accumulation, they will use their profits to corrupt the legal system, hijack public funds, get the best lawyers, and make their operations as opaque as possible to avoid prosecution, no matter the charge. None of this is a bug of capitalism; it is a feature.

Fraud in particular is an incredibly common phenomenon, and characteristic of any market system – even grocery stores regularly mislabel products to trick consumers into buying more than they otherwise would. At a higher level, there are occurrences like the LIBOR scandal and the widespread fraud surrounding the financial crisis. Furthermore, the Leveson Inquiry has revealed quite how many resources society has to pour into uncovering past wrongdoing by corporations. It is far more sensible to advocate various transparency standards and requirements that prevent these things from happening in the first place.

The consequence of this is that many regulatory agencies are actually compatible with the libertarians’ own criteria for a functioning market economy. Libertarian counters about regulatory capture simply raise deeper questions about capitalism itself – questions which they surely don’t want to get into.

Governments versus markets, yet again

All too often, I see libertarians respond to a purported problem with markets by saying ‘well government has that problem, too.’ But this is a superficial treatment that can be used as a cookie cutter for any issue, without actually exploring it.

Sometimes we might identify a problem and ask how the government can alleviate it – e.g. information asymmetry can be partially dealt with by various transparency standards. However, more often the correct debate is not ‘x is a problem, what can government do about x’, but ‘x is a problem that causes y – what can government do about y?’

For example, The Radical Subjectivist asks what governments can do to eliminate uncertainty. The answer is: not a lot! Of course they can’t alter the fundamental fact that the future is unknowable. But this doesn’t really get us anywhere; what we really need to ask is what uncertainty leads to. According to Keynes’ theories, it leads to a rate of interest that is too high to generate full employment; it also leads to the use of rules of thumb and to waves of optimism and pessimism in financial markets. So policymakers should act to lower the rate of interest, and also curb trading when financial markets become overheated. Notice that at this point the issue we were originally discussing – uncertainty – has become largely irrelevant.

This can be seen particularly in libertarian economists’ reactions to behavioural economics. They respond by saying that policymakers have the biases too (and with the even more pathetic response that the people who study the biases also have them). But any real treatment of a particular bias will reveal that it creates systemic problems which can be identified and remedied through alternative means. For example, Type 1 and Type 2 thinking apply to all people, but somebody who is using Type 1 thinking can easily be exploited by somebody using Type 2 thinking. This is a big problem when signing contracts, and requires that people are protected when doing so. Yes, the same person who writes the law (and writes the book about the bias) will have the bias – but this doesn’t impede their ability to deal with it on a systemic level.

Of course, it is entirely possible that government cannot do anything about problem ‘y’, or that it would be too expensive, intrusive or what have you. It’s also true that policymakers themselves will suffer from certain biases that will affect their decision making. But libertarians cannot dismiss every purported problem with markets by suggesting that it also applies to government – this does not engage the specific issue at all, but is a superficial attempt to escape important challenges to their reasoning.


Debunking Economics, Part V: The Holy War Over Capital

There are probably few criticisms of neoclassical economics that have been both so universally acknowledged to be valid, and yet so completely ignored, as the Cambridge Capital Controversy (CCC). Chapter 7 of Steve Keen’s Debunking Economics provides an overview of this debate about the nature of capital.

Basic economic analysis teaches that capital, like other factors of production, is paid in proportion to its productivity – the so-called ‘Marginal Product of Capital (MPC)’, which is presumed to be equal to the rate of profit. Keen gives two good criticisms before he delves fully into the CCC:

First, the MPC assumes that other factor inputs are held fixed while more capital is employed. But since capital is (rightly) assumed to be the least variable input, any time period in which you can employ more capital is surely one in which you can employ more labour, too. Once again we are forced to face the reality that firms tend to vary all of their inputs at once.

Second, in an industry as broadly defined as ‘the capital market’ we run into familiar ceteris paribus problems, where varying inputs will create effects on wages and the existing capital stock that alter the rate of profit. For small and medium-sized firms these effects will be negligible, but when analysing the biggest firms and entire industries the feedback between them will create collateral effects that undermine partial equilibrium methodology.

However, even ignoring these criticisms, there are serious issues with the neoclassical treatment of capital.

To speak of a marginal product of capital, capital must first be measured in some common unit, and there are obvious problems with this: capital includes brooms, blast furnaces, buckets, string and potentially any commodity you care to think of, so a single unit of measurement is difficult to justify. Generally economists either leave capital in undefined units or measure it by its price. The former treatment does not deserve to be criticised formally – something that poorly defined is, like utility, Not Even Wrong. As for the latter, Keen notes that there is an “obvious circularity” to the definition: the value of a capital good is based on the profit expected from it, yet the rate of profit is itself supposed to be explained by the amount of capital employed. Thus the use of price as a unit of measurement is not particularly enlightening.

Piero Sraffa’s Devastating Critique of the Neoclassical Treatment of Capital

As always, Piero Sraffa offered the most fully fleshed out and devastating critique of the neoclassical theory.

Sraffa proposed that, instead of treating one factor of production as a mysterious substance called ‘capital,’ we suppose that goods produce other goods when combined with labour (hence the title of his magnum opus, Production of Commodities by Means of Commodities). He rigorously derived an internally consistent model with the sole aim of invalidating neoclassical economics on its own terms. There is some debate about the empirical applicability of his conclusions, but since the neoclassical theories are based on the same premises, logic alone is sufficient to invalidate them.

Sraffa builds up a complex model step by step, starting simple. In the first statement of the model, there are a few firms whose only inputs are the goods produced by other firms and by themselves. So firm A needs a certain amount of commodities x, y and z to produce commodity x, whilst firm B needs a different combination to produce commodity y, and firm C a different combination again to produce commodity z. Each firm produces just enough of its respective commodity for economy-wide production to continue at the same level in the next period. Sraffa’s next step is to alter the model so that each firm produces more than it needs in order to continue production – a surplus, or profit.

The first conclusion he comes to is that the relative prices of the commodities, and the rate of profit, are not based on supply and demand, but on ‘the conditions of production’ – the amounts of inputs required to keep a firm or industry going.
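
To see how little work demand is doing here, consider a minimal numerical sketch of this kind of system. The input coefficients below are made up, and wages are folded into the input requirements (as in the early chapters of Sraffa’s book); once the technical conditions of production are written down, the uniform rate of profit and the relative prices fall straight out of them.

```python
import numpy as np

# Made-up input coefficients: a[i, j] = amount of commodity j needed to
# produce one unit of commodity i (wages are folded into these requirements).
A = np.array([
    [0.2, 0.3, 0.1],   # inputs used to make one unit of commodity x
    [0.4, 0.1, 0.2],   # inputs used to make one unit of commodity y
    [0.1, 0.3, 0.3],   # inputs used to make one unit of commodity z
])

# With a uniform rate of profit r, prices must satisfy  p = (1 + r) * A @ p.
# So p is the positive (Perron-Frobenius) eigenvector of A, and
# r = 1/lambda - 1, where lambda is A's dominant eigenvalue.
eigenvalues, eigenvectors = np.linalg.eig(A)
dominant = int(np.argmax(eigenvalues.real))
lam = eigenvalues.real[dominant]
prices = np.abs(eigenvectors[:, dominant].real)
prices /= prices[0]                       # express everything in units of x

print(f"uniform rate of profit: r = {1 / lam - 1:.3f}")
print(f"relative prices (x, y, z): {np.round(prices, 3)}")

# Demand never appears: the profit rate and relative prices follow from the
# technical conditions of production alone.
```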

Sraffa then explicitly incorporates labour into his model. He notes that wages are obviously an inverse function of profit: the higher they are, the lower profit will have to be, and vice versa. He then proposes a new method of measuring capital: treat it as the dated value of the labour (wages) required to produce it, compounded by the profit accrued since that labour was applied, plus the value of the commodity that was combined with the labour to produce it. This ‘residual commodity’ can then itself be reduced to dated labour compounded by profit, plus yet another commodity, and so forth:

value of commodity a = (labour input at time x) * (1 + rate of profit)^(periods elapsed since time x) + value of commodity b

As Sraffa himself points out, there will always be a residual commodity left over when you break a commodity down into the labour and the commodity required to create it. However, as you do this again and again, the residual term becomes smaller and smaller until it can safely be neglected. This type of reasoning is far more scientific than the neoclassical approach, and closely resembles the perturbation methods used by mathematicians and engineers, where a function is split into an infinite number of terms of decreasing size, but only the first few are used in calculations.
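
As a concrete illustration of the reduction, here is a sketch for the simplest possible system – one commodity (‘corn’) produced from itself plus labour. The coefficients are assumed purely for illustration, not taken from Sraffa or Keen.

```python
# Producing 1 unit of corn uses a = 0.4 units of corn plus l = 1 unit of labour.
# With wage w and profit rate r earned on the corn advanced, the price solves
#   p = (1 + r) * a * p + w * l   =>   p = w * l / (1 - (1 + r) * a).
a, l, w, r = 0.4, 1.0, 1.0, 0.10
exact = w * l / (1 - (1 + r) * a)

# The reduction replaces the corn input, over and over, with the labour and
# corn used to produce *it*: each round adds one dated-labour term and leaves
# a geometrically smaller residual commodity term.
approx = 0.0
for n in range(20):
    approx += w * l * ((1 + r) * a) ** n    # labour applied n periods back
    residual = ((1 + r) * a) ** (n + 1)     # share of the value still unreduced
    if n in (0, 2, 5, 10, 19):
        print(f"{n + 1:2d} terms: price ~ {approx:.6f} (residual factor {residual:.1e})")

print(f"exact price:      {exact:.6f}")

# The residual shrinks by a factor of 0.44 each round, so the dated-labour
# series converges on the exact price -- the 'perturbation-style' reasoning
# the post compares to engineering practice.
```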

In the equation above, there are two competing effects: profits and wages. As one rises, the other must fall. It is easy to see from this that there can be a peak value for capital somewhere in the middle; on either side of it, the fall in one term overwhelms the rise in the other and the measured value of capital decreases.
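
A tiny numerical sketch makes the peak visible. Assume (purely for illustration) that the wage and the rate of profit trade off linearly, w = 1 − r, and look at a single dated-labour term:

```python
n = 5    # labour applied five periods before the capital good comes into use

for r in [0.0, 0.1, 0.2, 0.3, 0.4, 0.6, 0.8]:
    w = 1 - r                               # assumed linear wage-profit trade-off
    value = w * (1 + r) ** n                # one dated-labour term
    print(f"r = {r:.1f}: measured value = {value:.3f}")

# The value rises at first (the compounding dominates), peaks around r = 2/3,
# then falls as the shrinking wage overwhelms the compounding -- the 'peak
# somewhere in the middle' described above.
```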

This creates an interesting phenomenon known as capital reswitching. Consider two production techniques, A and B, which involve applying different amounts of labour at different times – a common example is making wine by ageing it (A) or by a chemical process (B). A relies on fewer labour inputs, applied in the more distant past; B relies on larger labour inputs applied more recently. At a zero rate of profit only the total labour inputs matter, so A is the cheaper technique, and at low rates of profit A remains cheaper and therefore more viable. However, because the rising rate of profit compounds over A’s longer time delay, A eventually becomes more and more expensive and technique B takes over – and, as the footnoted example shows, at higher rates still the comparison can switch back again.*
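
To see reswitching with actual numbers, here is a minimal sketch of the standard two-technique textbook example (the one usually attributed to Samuelson). The figures are illustrative – they are not the wine example above, nor taken from Keen’s book.

```python
# Technique 'alpha': 7 units of labour applied 2 periods before the output.
# Technique 'beta':  2 units applied 3 periods before, plus 6 units applied
#                    1 period before. With the wage set to 1, the unit cost of
#                    each technique is its labour compounded forward at (1 + r).

def cost_alpha(r):
    return 7 * (1 + r) ** 2

def cost_beta(r):
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

for r in [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5]:
    a, b = cost_alpha(r), cost_beta(r)
    cheaper = "alpha" if a < b else ("beta" if b < a else "tie")
    print(f"r = {r:4.2f}: alpha = {a:6.2f}, beta = {b:6.2f} -> {cheaper}")

# Alpha is cheaper below r = 0.5, beta is cheaper between the switch points
# (r = 0.5 and r = 1.0), and alpha is cheaper again above r = 1.0: the
# cost-minimising technique 'reswitches', so there is no monotonic relationship
# between the rate of profit and the 'quantity of capital' in use.
```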

The point of this approach is to show a few things:

(1) The value of capital varies depending on the rate of profit, as the rate of profit is a variable in the equation for measuring capital. Since the measured amount of capital depends on the rate of profit, profit cannot simply be said to be the ‘Marginal Product of Capital.’

(2) There is no easily discernible relationship between profitability and the amount of capital employed. Generally, neoclassical economics teaches that output is simply a concave but increasing function of the amount of capital employed, much like any other demand/utility curve. Capital reswitching destroys this idea.

(3) We cannot calculate prices without first knowing the distribution between wages and profits. The measured price of inputs depends on income distribution, not the other way round.

Many might be struck by the sheer level of abstraction in Sraffa’s approach. It’s worth noting that in Commodities he adds many more layers of realism beyond those that Keen explores. But, as I said before, the basic point was to take on neoclassicism with its own logic, rather than to present an alternative. By the end of the debate, Samuelson and Solow had both conceded that the criticisms were valid, and that their models were wrong or incomplete.

Discussions of the CCC since then have tended to follow the standard neoclassical tactic of asserting that the objections have been incorporated. But this stuff was 50 years ago. Why do undergraduate and postgraduate programs still teach concepts like the MPC? Or the Solow-Swan growth model, which depends on an aggregated capital stock K, subject to diminishing returns? As Robert Vienneau says, if neoclassicism were really revising itself to the extent that’s needed, we’d expect some of the modifications to filter down over time. But the fact is that they haven’t.

In fact, what seems to have happened is that economists have done a fairly typical dance – weaving between ‘that is unimportant’ and ‘that has been incorporated:’

Aggregative models were deployed for the purposes of teaching and policymaking, while the Arrow-Debreu model became the retreat of neoclassical authors when questioned about the logical consistency of their models. In this response, a harsh tradeoff between logical consistency and relevance was cultivated in the very core of mainstream economics.

This sort of evasiveness is common – there will always be a paper written recently that attempts to shoehorn any objection one cares to think of into the neoclassical paradigm. But these objections are incorporated one at a time, rarely find their way into the core teachings, and never involve questioning the foundations of neoclassicism on any substantive level. The reality is that, when the problems are as deep as the ones highlighted in the CCC, we need a meaningful overhaul rather than mere ad-hoc modifications.

*For those interested, the linked Wikipedia article has a fairly simple numerical example where the most effective method goes from A to B and back to A again.


Unlearning the History of Thought II

I’ve previously spoken about how many great insights supposedly ‘discovered’ by economists – classical and modern – had really been known for a long time, but had been ignored or perverted before they were put in terms neoclassical economists approved of. The more I learn, the more it seems that this is the case with a vast amount of critical ‘insights’ on which macroeconomists pride themselves.

The fact is that the 1970s ‘anti-Keynesian revolution’ was really just a restatement of points already made by Keynes and Phillips – who were, incidentally, two of the main targets of the revolution, and both completely misinterpreted.

For example, in keeping with Robert Waldmann’s hypothesis that there are few macroeconomic insights not in TGT, Keynes gave a description of RET in chapter 13:

The psychological time-preferences of an individual require two distinct sets of decisions to carry them out completely.  The first is concerned with that aspect of time-preference which I have called the propensity to consume, which, operating under the influence of the various motives set forth in Book III, determines for each individual how much he will reserve in some form of command over future consumption.

But this decision having been made, there is a further decision which awaits him, namely, in what form he will hold the command over future consumption which he has reserved, whether out of his current income or from previous savings.

Furthermore, the Lucas Critique itself – the idea that estimated relationships between macroeconomic aggregates cannot be used as a guide to future policy, because a change in policy will change behaviour and therefore the relationships themselves – was also stated by Keynes:

There is first of all the central question of methodology—the logic of applying the method of multiple correlation to unanalysed economic material, which we know to be non-homogeneous through time. If we are dealing with the action of numerically measurable, independent forces, adequately analyzed so that we were dealing with independent atomic factors and between them completely comprehensive, acting with fluctuating relative strength on material constant and homogeneous through time, we might be able to use the method of multiple correlation with some confidence for disentangling the laws of their action…in fact we know that every one of these conditions is far from being satisfied by the economic material under investigation.

Even more ironically, it was stated by the primary target of Lucas’ criticisms, the much misunderstood A. W. Phillips:

In my view it cannot be too strongly stated that in attempting to control economic fluctuations we do not have two separate problems of estimating the system and controlling it, we have a single problem of jointly controlling and learning about the system, that is, a problem of learning control or adaptive control.

One might argue that Lucas deserves credit for formalising this point, but in reality I think Phillips’ one-sentence formulation is better – it emphasises continual change and vigilance in recognising the feedback loop between policy and the real world. In contrast, it seems the Lucas Critique has become little more than a tool with which to cling to outdated methodology despite empirical falsification.

This is why I am frustrated when people like Krugman say that they “don’t care” what Keynes or others “really meant,” and people like Scott Sumner and Robert Lucas pay barely any attention at all to the history of thought. Ignoring the history of thought just means you are condemned to rediscover the same insights over and over – often, it seems, in a far less enlightened way than they were originally stated.

P.S. John Kenneth Galbraith, in The Affluent Society, also stated Steve Keen’s important point that private debt must accelerate in order to increase demand:

As we expand debt in the process of want creation, we come necessarily to depend on this expansion. An interruption in the increase in debt means an actual reduction in demand for goods.

In fairness to Keen, I wouldn’t paint him with the same brush as the above – he readily acknowledges that this insight was noted by Schumpeter and Minsky before him.


Debunking Economics, Part IV: The Many Ways to Debunk Labour Economics

Chapter 5 of Steve Keen’s Debunking Economics takes a look at labour market economics, an area where I feel there are almost too many different criticisms to know where to start. Keen spends a relatively short amount of time on this chapter, and though his expositions are sufficient to question many of the standard stories economists tell us about labour, I feel he misses some low hanging fruit.

Keen primarily builds on his earlier approach to demand-supply analysis. He first notes that the relatively uncontroversial backward-bending individual labour supply curve, when aggregated, can produce a market labour supply curve of almost any shape – an aggregation issue that is not fully explored on economics courses. He then applies his earlier analysis of demand curves, which implies the same for labour demand. The definitive PhD textbook, Mas-Colell, has also noted the latter, but assumes it away by supposing that:

…there is a benevolent central authority, perhaps, that redistributes wealth in order to maximise social welfare…

Naturally, this assumption isn’t true in reality, and since it is a domain assumption – one whose conclusions only follow as long as it applies (more on this in a later post) – we can safely disregard the conclusions that rest on it. The result is that there may be any number of equilibrium points in a labour market.
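
A toy sketch of the aggregation point, with two made-up workers whose individual supply curves are the standard backward-bending kind (hours rise with the wage up to a target income, then fall):

```python
def hours_supplied(wage, slope, bend_wage):
    """Upward-sloping below bend_wage; backward-bending (target income) above."""
    if wage < bend_wage:
        return slope * wage
    return slope * bend_wage ** 2 / wage      # continuous at the bend

for wage in [2, 5, 8, 12, 20, 30, 45]:
    total = hours_supplied(wage, 4, 5) + hours_supplied(wage, 2, 20)
    print(f"wage {wage:>2}: aggregate hours supplied = {total:.1f}")

# The aggregate rises, dips, rises again and then falls -- there is no reason
# for it to be monotonic. Combined with Keen's earlier result that the labour
# demand curve can also take practically any shape, a single well-behaved
# intersection is a special case rather than the general result.
```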

On top of this, there are many more points that make the clear cut story of a single equilibrium highly questionable:

  • The intertwined nature of work and leisure – the latter relying on the former if, as Keen puts it, you want to do anything more than sleep.
  • The existence of nominal debt contracts.
  • Keynes’ argument at the beginning of TGT, which is that workers cannot control their real wages, as these depend on prices, which are set by their employers. Keen weaves around this argument at the beginning of the chapter but never states it explicitly.
  • The fact that wages are an essential component of aggregate demand, and, similarly, that the labour market is so broadly defined that treating demand and supply as independent is not possible.
  • The fact that, since markets do not approximate perfect competition, even standard neoclassical analysis teaches that minimum wages and unionisation can be beneficial to combat market power.

Clearly, there is a multitude of reasons why ‘interfering’ with wages through demand management, legislation and unionisation is not obviously a bad thing.

This is actually fairly uncontroversial stuff, and most neoclassical economists would endorse some of it. Having said that, many still think real wages should fall, so they clearly haven’t fully thought the implications through.

But maybe Keen should have been more controversial. For me his criticisms don’t go deep enough – he repeatedly refers to reasons that “workers will not be paid their marginal productivity”, without questioning the concept itself.

As I have noted before, labour only has productivity when combined with capital. Shop assistants need a shop, tills, hangers and bags; builders need tools, machinery and materials. Furthermore, some labourers only have productivity when combined with other labourers, too. Two people carrying a heavy box cannot be said to have a discernible individual productivity. The productivity of a McDonald’s relies on the cooks, the till operators and the supervisors combined – take away a cook and what’s the point in having an extra till operator when the cooks can’t keep up with the orders?

The result is that productivity can only be said to be a result of combined, rather than individual, factors. The relative shares are then determined by bargaining power.
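
A minimal illustration of why an individual ‘marginal product’ can be meaningless – assuming, purely for illustration, fixed-proportions (Leontief) production in which output needs cooks and till operators in combination:

```python
def customers_served(cooks, till_operators):
    return 30 * min(cooks, till_operators)    # each matched pair serves 30/hour

base = customers_served(3, 3)
print(customers_served(4, 3) - base)   # an extra cook alone:          0
print(customers_served(3, 4) - base)   # an extra till operator alone: 0
print(customers_served(4, 4) - base)   # the pair together:            30

# The extra output belongs to the combination; how the revenue from those 30
# customers is split between the new workers (and the owner of the tills and
# grills) is a matter of bargaining power, not of technology.
```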

I consider this argument a strong one, and it also fits quite nicely with many of the Sraffian arguments throughout the book: that firms do not hold capital fixed while employing more labour – they need to employ both simultaneously; and that much of neoclassical analysis cannot take place independently of income distribution, which is dependent on political power (more on this in a later chapter). For these reasons I’m guessing Keen is unaware of this particular criticism, rather than rejecting it – it would significantly strengthen his critique, which, though sufficient to complicate the neoclassical story, does not completely ‘debunk’ it.


Debunking Economics, Part III: “Uninformed and Inexperienced Armchair Theorisers”

Chapter 5 of Steve Keen’s Debunking Economics explores the marginalist theory of the firm. Keen first channels Piero Sraffa’s 1926 criticisms, then catalogues the neoclassical theory’s complete lack of real world corroboration – as noted in my title, a businessperson once referred to it as “the product of the itching imaginations of uninformed and inexperienced arm-chair theorisers.”

The neoclassical theory of the firm supposes that, in the short run, firms face increasing marginal costs – the cost of producing each additional unit rises as output expands. This occurs because, in the short run, the ‘amount’ of capital (and land) employed is fixed, so producing more involves squeezing more and more out of the same machines with more labour. The intersection of these increasing costs with how much firms can gain from selling more, their ‘marginal revenue’, constrains their size.

This homogeneous treatment of capital should strike many as silly. The neoclassical theory effectively supposes that, if we employ 9 people to dig a ditch with 9 spades, employing a tenth will split the 9 spades into 10 slightly smaller, worse spades. In reality, if new labour is employed, new capital is – must be – employed simultaneously, whether it is bought or taken from previously idle capacity. A taxi driver cannot do anything without his taxi; an office worker without a computer is also fairly useless.

So increasing marginal costs are unlikely to apply to individual firms or narrowly defined industries. As Keen puts it, “engineers purposely design factories to avoid the problems economists believe force production costs to rise.” In reality, firms hold excess capacity and inventories, and tend to vary capital, labour and land all at once, even in the short run. They therefore face roughly constant, or falling, unit costs.
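
A toy contrast makes the point. The Cobb-Douglas functional form below is an assumption chosen purely for illustration; the contrast is between the textbook case, where capital is frozen and extra labour is squeezed onto it, and Keen’s case, where previously idle capital is brought into use alongside the new labour.

```python
WAGE = 20.0                                   # cost per unit of labour (illustrative)

def output_fixed_capital(labour, capital=100.0):
    return (capital ** 0.5) * (labour ** 0.5)     # diminishing returns to labour alone

def output_scaled_capital(labour, capital_per_worker=1.0):
    capital = capital_per_worker * labour         # idle capacity brought into use
    return (capital ** 0.5) * (labour ** 0.5)

def marginal_cost(output_fn, labour, extra=1.0):
    # Only the labour cost is counted: the extra capital is treated as
    # previously idle capacity, as in the post above.
    extra_output = output_fn(labour + extra) - output_fn(labour)
    return WAGE * extra / extra_output

for labour in [10, 20, 40, 80]:
    print(f"L = {labour:3d}: MC with fixed capital = {marginal_cost(output_fixed_capital, labour):6.2f}, "
          f"MC with capital in step = {marginal_cost(output_scaled_capital, labour):6.2f}")

# With capital frozen, marginal cost climbs steadily; with capital employed
# alongside labour it is flat -- the roughly constant costs that firms actually
# report in the surveys Keen cites.
```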

Sraffa pointed out that it’s only really valid to treat some factor inputs as fixed if we define an industry so broadly that the factors would have to be converted from other uses. For example, if we take agriculture, and assume the country is well populated and at or close to full employment, then it’s reasonable to treat land and machinery as fixed in the short term. However, since the theory of the firm assumes that supply and demand are independent and that one ‘industry’ can be studied apart from all others, another problem appears: this situation does not lend itself well to ceteris paribus analysis. Changing wages, supply costs, and the displacement of labour from other areas will have notable impacts on the rest of the economy, such that tinkering with our curves individually cannot be deemed a proper representation of what will happen.

There are a few cases where firms or industries might fall between these two categories, but really they are the exception.

Keen cites 150 empirical surveys that found firms reporting constant or falling average costs of production. In particular he cites Eiteman and Guthrie, who found that 95% of the 334 firms surveyed reported such costs, whilst only one chose the curve that looks like the one found in textbooks. Most firms also use cost-plus pricing, rather than taking marginal considerations into account, and adopt a form of trial and error when pricing.
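
For what it’s worth, cost-plus pricing with a trial-and-error markup is simple enough to sketch in a few lines (the cost and markup figures are invented):

```python
unit_cost, markup = 8.00, 0.25      # invented figures

# The firm marks average cost up by a conventional margin, then nudges the
# markup by trial and error depending on how sales go -- no marginal curves.
for month, sales_ok in enumerate([True, True, False, True], start=1):
    price = unit_cost * (1 + markup)
    print(f"month {month}: price = {price:.2f} (markup {markup:.0%})")
    if not sales_ok:
        markup -= 0.05              # disappointing sales -> trim the markup
```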

A flat(ish) supply curve leads us to the incredibly interesting proposition, supported by the classical economists, that supply determines price while demand determines quantity. This is, of course, a simplification, but it appears to fit the real world far better than the neoclassical ‘simplifications.’

In my opinion this is the strongest case against neoclassical micro as taught. Jonathan Catalan can find no objections to this section either, and gives the story an Austrian slant. Keen says that this problem has never really been addressed by economists, only ignored, despite the clear superiority of Sraffa’s logic and the empirical evidence corroborating his approach. I find it hard to believe neoclassical economists can wiggle their way out of this problem, should they ever address it.


The Fundamental Difference Between Mainstream and Heterodox Economics

Simon Wren-Lewis discusses the large gap between mainstream and heterodox economics, and asks why the heterodox economists are so willing to throw out almost every aspect of neoclassical theory. Allow me to offer an explanation.

The reason heterodox economists remain dissatisfied with mainstream economics, no matter how many modifications the latter adds to its core framework, is that there is always an implication that, in the absence of various real world ‘frictions’, the economy would function like a smoothly oiled machine. That is: assuming perfect information, mobility, ‘small’ firms, no unions, flexible prices/wages and so forth, the economy would achieve full employment, with near perfect utilisation of resources, and stay there, perhaps buffeted by mild external shocks.

New Keynesians and New Classicals sometimes act like bitter rivals, but mainly they only differ on which ‘frictions’ should be present (this is an oversimplification of the disagreement, of course). The original New Classical models started with economies that are always in equilibrium, with constant preferences and perfect competition. New Keynesian models add imperfect competition, sticky prices, transaction costs and so forth. The newest papers go further and add heterogeneous agents (which generally means two), changing preferences, and other ‘frictions.’ However, it is assumed that if the economy were rid of these specific features, it would function similarly to one of the core Walrasian or Arrow-Debreu style formulations.

So is it not true that real world mechanics prevent things from going as smoothly as they might in the absence of those mechanics? Well, partially. But according to heterodox economists, capitalism has inherent tendencies towards crisis, unemployment and misallocation anyway.

A key example of where this is evident is finance. Generally, the mainstream analyses of why finance is unstable focus on irrationality, imperfect information, externalities and other such modifications. If only everyone had access to information, if transactions were costless, and if people were rational self-maximisers, then finance would be stable.

Minskyites, on the other hand, argue that this isn’t the real problem. Even if the economy starts out stable, the strong returns on investment generated during the stable period will cause capitalists/investors to take on more risk. This process will continue and the economy will endogenously destabilise itself as higher returns are sought and more risk is taken on, until eventually the capacity to make a return on these risks is outrun and we face a collapse. There is no need to invoke a specific ‘friction’ for this process to occur.*

Another prominent example is the labour market. Generally, economists presume that without ‘search costs’, oversized firms/unions and sticky wages, the economy would achieve full employment. But heterodox economists disagree on a number of counts: the Marginal Value Product Theory is faulty, so higher wages will not necessarily cause unemployment to rise; wages are also an essential component of aggregate demand, so reducing them may well be counterproductive. In fact, Keynes argued that sticky wages were far from a barrier to full employment; they actually stabilised aggregate demand. Steve Keen’s model also produces less severe business cycles when sticky wages and prices are added.

So the reason heterodox economists want to throw the proverbial baby out with the bath water (and also redecorate the bathroom, and possibly even move house) is that they think the core of mainstream economics has dug itself into too deep a hole. The endless ad hoc modifications of ‘perfect’ models sometimes introduce so many ‘frictions’ that the supposed ‘deep’ mechanics underlying them become questionable – yet those mechanics are still never abandoned. Heterodox economics is not just about adding a few real world mechanics here and there; it’s about throwing out the entire core and starting over.

*It could be said that this might not occur if Knightian uncertainty were not a factor in the real world, but I think calling this a ‘friction’ jumps the gap between friction and fundamental reality.


Why Does Capital Have More Bargaining Power Than Labour?

The debate over libertarianism and the workplace (if you can call it a ‘debate’, when libertarians make responses like this, here is a summary of what Cowen and Tabarrok are saying) seems like as good a time as any to post on the bargaining power relationship between labour and capital.

I have posted before about how the idea that wages are determined by productivity is indefensible; capital and labour only have productivity when combined, so it is impossible to separate their relative contributions, which are instead determined by bargaining power. As Daniel Kuehn also notes, a ‘job’ is generally what is bargained over, rather than specific aspects. So it would not be unreasonable to say that working conditions, hours and pay are generally all determined by bargaining power, though not separately. It is also not unreasonable to say that employers generally have the edge in this. But why?

The first reason, noted by Paul Rosenberg, is that labour requires wages to subsist from day to day, whereas those sitting on capital can produce for themselves. This means that labour’s situation is generally more urgent than capital’s. Now, libertarians might respond that people can save money, inherit money, and so forth. But this raises a lot of questions: what if you are born poor? Where do you get your savings from initially, if not wages?

Libertarians also might respond, as the BHL libertarians have, by advocating a universal income (something that strikes me as trying to make the world behave like an economics textbook, where workers can smoothly trade off leisure for work, from 0 hours to 24). This would indeed improve labour’s bargaining power. However, even under this system, many workers would take on commitments – debts, families and, of course, social obligations – that require money. Whether these people ‘choose’ to do this is irrelevant: what we are asking is whether, at the moment somebody tries to get a job, they have more bargaining power than their employer.

The second reason is that employers are fewer in number than employees, making the latter more readily substitutable, particularly in low-skilled jobs. This starts from the obvious observation that not everyone can be a capitalist. Since wages tend to be consumed but profits do not, it is fair to say that an increase in the number of capitalists relative to workers will reduce consumption and therefore the profits available. This will result in capitalists going bankrupt. Obviously, if there are too few capitalists then opportunities will open up, and we will go in the other direction.

It is reasonable to conclude that there is a rough ratio of capital to labour around which the economy oscillates – something similar to what Phillips was actually saying with his ‘curve.’ Capitalism generally finds it hard to deal with true full employment, as it diminishes the capital available for investment. This results in lay-offs, and diminishing bargaining power for labour. Historically, capitalism appears to spend far more time in periods of unemployment than in periods of full employment.

There is the final point that under modern capitalism, labour is free to organise and create collective bargaining power. However, in the absence of legislation to assist this, unionisation falls into all the familiar problems with collective action, problems that capital doesn’t have: coordination, aligning different interests, the incentive for individual members to cheat. This is reflected by the fact that countries with strong unions generally have legislative support of those unions, too.

Obviously I’ve been assuming that neither capital nor labour ‘hijack’ the state to further their own interests (questions over whether capitalism is a system characterised by capital’s hijacking of the state aside), but I don’t think it’s necessary to invoke these to understand why labour often seems to be on the losing side of the bargain, particularly for low skilled workers.

Bringing it back to the debate over libertarianism and the workplace, it’s worth noting that ‘voluntary’ versus ‘coerced’ is not a binary distinction but a spectrum, with one end representing virtually no cost for choosing something different, whilst the other represents death or torture. In between you can have anything from walking down the road to another shop, to social pressure, to moving country – all are costs of not taking a particular choice, and hence reduce the ‘voluntariness’ of the decision itself. If employers generally have more bargaining power, this reflects the fact that the costs to them of choosing another employee (or no employee at all) are lower than the costs to the employee of taking another job (or no job at all). This means the spectrum is tilted further away from ‘voluntary’ for the labourers, and the mere assertion that they agreed to it, so it’s OK, will not suffice.


‘Debunking Economics’, Part II: Perfect Competition, Profit Maximisation and Non-Existent Supply Curves

The second chapter of Steve Keen’s Debunking Economics explores a number of arguments: the incoherence of perfect competition; the idea that equating Marginal Cost (MC) to Marginal Revenue (MR) does not maximise profit; and, eventually, that a supply curve cannot be derived. I will offer a brief summary of each of these arguments, but I encourage further reading (Keen’s book being the obvious candidate), as this is complicated stuff and a blog post can only serve as an introduction and overview.

Perfect Competition

Keen’s first point is that, under perfect competition, the demand curve for an individual firm is not horizontal, as taught in economics textbooks, but has the same slope as the market demand curve. Analysis of perfect competition makes the basic mistake of confusing infinitesimally small firms with firms whose size is zero – in other words, it says that a downward-sloping market demand curve can be split into an infinite number of perfectly flat individual demand curves. However, if you add up any number of flat demand curves, the result will be a flat demand curve, not a sloped one. Therefore the demand curve for any individual firm must be sloped, however shallow the slope is.

Again, it was a neoclassical economist, George Stigler, who discovered this flaw in perfect competition. Stigler’s argument is that since, by assumption, firms do not react to each other’s strategies, any change in output by one firm will change market output by the same amount, and hence affect the price. This means the demand curve for an individual firm cannot be horizontal (where a change in output does not affect price), but must have the same slope as the market demand curve (since a change in the firm’s output changes industry output by the same amount).
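
Stigler’s point is easy to check numerically. Assume a linear market demand curve P = 100 − Q (the numbers are illustrative only): if the other firms hold their output fixed, a one-unit change by any single firm moves the price by the full market-demand slope, no matter how many firms there are.

```python
def price(total_output):
    return 100.0 - total_output           # assumed market demand curve

for n_firms in [10, 1_000, 1_000_000]:
    q_each = 50.0 / n_firms               # split an industry output of 50 evenly
    Q = n_firms * q_each
    dP = price(Q + 1.0) - price(Q)        # one firm sells one more unit
    print(f"{n_firms:>9,} firms: price change from one extra unit = {dP:+.2f}")

# Every line prints -1.00: the demand curve facing the individual firm has the
# same slope as the market demand curve, not the horizontal line of the textbooks.
```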

Keen’s discussion of perfect competition also notes something obvious – the use of the word ‘perfect’ is clearly value-laden, despite economists’ claims that their science is value-free. I’d add that whilst ‘perfect’ may have a specific technical definition, the broader value judgement that ‘competition is good’ is undeniably present in economics.

Marginal Cost =/= Marginal Revenue

Keen goes on to argue that the basic neoclassical theory of the firm – that firms maximise profit where Marginal Cost (the cost of producing one extra unit) equals Marginal Revenue (the revenue received from selling one extra unit) – is incorrect. This is because it is vulnerable to a fallacy of composition: whilst the rule is rational for individual firms (at least according to neoclassical principles), it is collectively irrational for an industry, and will result in firms losing money if they all pursue it as a strategy.

The neoclassical profit-maximising formula focuses only on the effect of changes in a firm’s own output on its revenue, ignoring the impact of changes in industry output. Whilst perfect competition assumes firms do not change their output in response to one another, the above result shows that industry output will still change by the same amount as the firm’s output. If I’ve interpreted Keen correctly, the reduction in price resulting from this industry-level change (assumed away by neoclassical theory) is missing from the neoclassical formula, so revenue will be lower than predicted, and equating MC and MR will yield a loss on some sales.

Is there a supply curve?

It is well known that a supply curve can only be derived for a perfectly competitive market – that is, one in which firms are price takers. Once firms have some market power, price cannot be taken as given, because if they produce more (less), the price will fall (rise), meaning marginal revenue and demand diverge. Once this happens, MC will be set equal to MR, not to price; a change in price will not cause a firm to move smoothly along its MC curve (which would be its supply curve) – the quantity it chooses will instead depend on MC, MR and demand together.
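
A minimal illustration, assuming a linear demand curve and constant marginal cost (all figures invented): the same cost conditions produce different quantities at the same price once the demand curve changes, so no unique price-quantity supply schedule exists.

```python
def profit_max(intercept, slope, mc):
    """For demand P = intercept - slope*Q and constant marginal cost mc:
    MR = intercept - 2*slope*Q = mc  ->  Q = (intercept - mc) / (2*slope)."""
    q = (intercept - mc) / (2 * slope)
    return q, intercept - slope * q

mc = 10.0
for intercept, slope in [(50.0, 1.0), (50.0, 2.0)]:    # two different demand curves
    q, p = profit_max(intercept, slope, mc)
    print(f"demand P = {intercept:.0f} - {slope:.0f}Q:  Q* = {q:.1f} at P* = {p:.1f}")

# Both cases give a price of 30, but the firm supplies 20 units against one
# demand curve and 10 against the other: quantity supplied is not a function
# of price alone, so no supply curve can be drawn.
```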

As Keen notes, this explains why economists have been so keen to cling to perfect competition, with its blatant lack of real world corroboration and its seeming incoherence. However, Keen’s own arguments suggest that even under perfect competition, firms have some impact on industry output, and MC cannot be equated to price without making a loss on some sales. Therefore, unless individual firms behave irrationally – something that is obviously contrary to core tenets of economic theory – a supply curve cannot be derived as taught in economics textbooks.

I had a hard time getting my head around this but eventually became convinced of Keen’s arguments. Once you take into account a perfectly competitive firm’s own impact on industry output, the standard analysis of MC = MR breaks down and all the problems with deriving a supply curve, previously assumed away by economic theory via perfect competition, return. This bears something of a resemblance to the problems with demand curves, which were well-known but assumed away by Hicksian demand functions, only to return once you introduced more than one consumer.

The general impression Keen has given me so far is that economic theory is disturbingly aware of its own flaws.

