Posts Tagged Economic theory

The Crisis & Economics, Part 6: “Oh, You Just Mean Macro”

This is part 6 in my series on how the financial crisis is relevant for economics (here are parts 1, 2, 3, 4 & 5). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post discusses the argument that the ‘crisis in economics’ is confined to macroeconomics, which is actually a minority pursuit within the discipline, so attacking all of economics is wrongheaded.

Argument #6: “Sure, modern macroeconomics is pretty weak. But most economists don’t even work on macro, so they are unaffected.”

Quite a lot of economists consider the debate about the financial crisis irrelevant to what they do. After all, why should a crisis at the macro level invalidate econometrics, game theory or auction theory? Attacking these fields and others for the recession is like blaming mechanical engineers for a bridge collapse. In fact, many economists hold macro in the same (low) esteem as the public does: Daniel Hamermesh goes so far as to claim that “most of what the macro guys do in academia is just worthless rubbish”, but adds that the kind of field he works in “has contributed tremendously and continues to contribute”. Even the discipline’s most vehement defenders are willing to concede macroeconomics is bunk.

There is a considerable amount of truth to this view. While there may be critiques of all areas in economics, the claim that the financial crisis is what’s thrown them into disrepute is a non sequitur. Critics should therefore be careful to distinguish macroeconomists from their colleagues when (rightly) attacking the former’s failure to deal with the crisis. Nevertheless, there are two major ways in which the failings of macroeconomics are symptomatic of more general problems with economic theory, so the discipline as a whole cannot be let off the hook.

The first is a lack of holism. A large number of economic theories are built in an abstract theoretical vacuum, with little reference to what is happening around the individual agent. But the importance of the macroeconomy for behaviour in specific sectors or by specific actors cannot be ignored. For example, if you drop the macroeconomic assumption of full employment, this affects theories in areas from public goods provision to labour markets to Walrasian equilibrium. Consumers’ and firms’ expectations are strongly informed by the macroeconomic and political environment around them. Considering the effects of political institutions such as unions on the labour market, but ignoring their broader political role, can create narrow and misguided conclusions about their efficacy. New Institutional economics often takes ‘institutions’ as exogenous, failing to consider the two-way interaction between institutions and agents. The in-vogue ‘Randomised Controlled Trial’ restricts the economic environment to such a degree that it’s questionable whether one can generalise the results at all. And so forth.

Don’t get me wrong: there is an obvious case for different areas of economics being separate from one another: taking certain parameters as exogenous to look at a certain area, and using different tools for different areas. But even the most specialised fields should never forget the broader scope and context of their ideas, and this should be reflected in the theoretical approach. Thomas Piketty’s Capital is a shining example of how to intertwine theory, history, statistics and politics to build a better understanding of capitalism. Another is the attempt by ecological economists to place the economy in its environmental context, rather than simply taking resource endowments as a given and assuming pollution just sort of…disappears, save for its monetary cost. Minsky’s Financial Instability Hypothesis shows one way to make an effective link between the behaviour of investors and broader economic performance, integrating finance and macroeconomics. Overspecialisation may cause economists to miss these key insights.

The second issue is that many of the problems with macroeconomics can be applied to, or are relevant for, other areas of the discipline. One of the key complaints about macroeconomics – that it relies on microfoundations – is a problem precisely because it imports unrealistic assumptions about economic behaviour from microeconomics. The problem of having an abundance of abstract models, each seeking to explain one or two ‘things’, but with no real way to tell which model is applicable and when, applies not just to macroeconomics but also to behavioural economics, microeconomics and oligopoly theory. Endogenous money, which is central to macroeconomists’ lack of understanding of the crisis, also has major implications for finance. To reuse my above analogy, you might well be concerned about mechanical engineers after a bridge collapse if they largely relied on the same methods as the civil engineers who built it.

Your average economist is probably right to point out that the public’s ire should be focused not on them, but on macroeconomics. However, this doesn’t mean that they are immune from the serious questions the crisis raised about the methodology, assumptions and ethics of the field. It’s a case-by-case matter which areas are impacted and by how much, but any attempt to box off macroeconomic theory entirely should be resisted. There’s plenty of room for fruitful debate about all areas of economic theory, much of which will benefit from being informed by the shortcomings the financial crisis exposed.


The Crisis & Economics, Part 5: “Shhh! We’re Working On It”

This is part 5 in my series on how the financial crisis is relevant for economics (parts 1, 2, 3 & 4 are here). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline. This post explores the possibility that macroeconomics, even if it failed before the crisis, has responded to its critics and is moving forward.

Argument #5: “We got this one wrong, sure, but we’ve made (or are making) progress in macroeconomics, so there’s no need for a fundamental rethink.”

Many macroeconomists deserve credit for their mea culpa and subsequent refocus following the financial crisis. Nevertheless, the nature of the rethink, particularly the unwillingness to abandon certain modelling techniques and ideas, leads me to question whether progress can be made without a more fundamental upheaval. To see why, it will help to have a brief overview of how macro models work.

In macroeconomic models, the optimisation of agents means that economic outcomes such as prices, quantities, wages and rents adjust to the conditions imposed by input parameters such as preferences, technology and demographics. A consequence of this is that sustained inefficiency, unemployment and other chaotic behaviour usually occur when something ‘gets in the way’ of this adjustment. Hence economists introduce ad hoc modifications such as sticky prices, shocks and transaction costs to generate sub-optimal behaviour: for example, if a firm’s cost of changing prices exceeds the benefit, prices will not be changed and the outcome will not be Pareto efficient. Since there are countless ways in which the world ‘deviates’ from the perfectly competitive baseline, it’s mathematically troublesome (or impossible) to include every possible friction. The result is that macroeconomists tend to decide which frictions are important based on real world experience: since the crisis, the focus has been on finance. On the surface this sounds fine – who isn’t for informing our models with experience? However, it is my contention that this approach does not offer us any more understanding than would experience alone.
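To make the friction logic concrete, here is a minimal sketch of the menu-cost mechanism just described, with entirely illustrative numbers (the quadratic profit function and the size of the menu cost are my assumptions, not taken from any particular paper):

```python
# Menu-cost sketch: a firm resets its price only when the gain from doing so
# exceeds a fixed cost of changing prices. All numbers are illustrative.

def profit(price, best):
    return 100 - (price - best) ** 2   # profit falls with distance from the optimum

def reset(current, best, menu_cost=5.0):
    gain = profit(best, best) - profit(current, best)
    return best if gain > menu_cost else current   # stay put if the gain is too small

price = 10.0
for best in [10.5, 11.0, 14.0]:        # small, small, then large shock
    price = reset(price, best)
    print(f"optimal {best}, posted {price}")
# Small shocks leave the price stuck at 10.0 (a Pareto-inefficient outcome);
# only the large shock makes a change worthwhile.
```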

Perhaps an analogy will illustrate this better. I was once walking past a field of cows as it began to rain, and I noticed some of them start to sit down. It occurred to me that there was no use in them doing this after the storm started; they are supposed to give us adequate warning by sitting down before it happens. Sitting down during a storm is just telling us what we already know. Similarly, although the models used by economists and policy makers did not predict and could not account for the crisis before it happened, they have since built models that try to do so. They generally do this by attributing the crisis to frictions that revealed themselves to be important during the crisis. Ex post, a friction can always be found to make models behave a certain way, but the models do not make identifying the source of problems before they happen any easier, and they don’t add much afterwards, either – we certainly didn’t need economists to tell us finance was important following 2008. In other words, when a storm comes, macroeconomists promptly sit down and declare that they’ve solved the problem of understanding storms. It becomes difficult to escape the circularity of defining the relevant friction by its outcome, hence stripping the idea of ‘frictions’ of predictive power or falsifiability.

There is also the open question of whether understanding the impact of a ‘friction’ relative to a perfectly competitive baseline entails understanding its impact in the real world. As theorists from Joe Stiglitz to Yanis Varoufakis have argued, neoclassical economics is trapped in a permanent fight against indeterminacy: the quest to understand things relative to a perfectly competitive, microfounded baseline leads to aggregation problems and intractable complexities that, if included, result in “anything goes” conclusions. To put it another way, the real world is so complex and full of frictions that whichever mechanics would be driving the perfectly competitive model are swamped. The actions of individual agents are so intertwined that their aggregate behaviour cannot be predicted from each of their ‘objective functions’. Subsequently, our knowledge of the real world must be informed either by models which use different methodologies or, more crucially, by historical experience.

Finally, the ad hoc approach also contradicts another key aspect of contemporary macroeconomics: microfoundations. The typical justification for these is that, to use the words of the ECB, they impose “theoretical discipline” and are “less subject to the Lucas critique” than a simple VAR, Old Keynesian model or another more aggregative framework. Yet even if we take those propositions to be true, the modifications and frictions that are so crucial to making the models more realistic are often not microfounded, sometimes taking the form of entirely arbitrary, exogenous constraints. Even worse is when the mechanism is profoundly unrealistic, such as prices being sticky because firms are randomly unable to change them for some reason. In other words, macroeconomics starts by sacrificing realism in the name of rigour, but reality forces it in the opposite direction, and the end result is that it has neither.
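To see how thin this particular ‘microfoundation’ is, note that the random-inability story (Calvo pricing) amounts to a lottery. A sketch, with an illustrative stickiness parameter:

```python
# Calvo pricing in miniature: each period a firm keeps its old price with
# probability theta, purely by lottery. Price-spell lengths are therefore
# geometric, with mean 1/(1 - theta). theta = 0.75 is illustrative.
import numpy as np

theta = 0.75
spells = np.random.default_rng(0).geometric(1 - theta, size=100_000)
print(spells.mean())   # ~4.0: prices change every four periods, by assumption
```

The stickiness itself is real enough in the data; the point is that a coin flip is doing all the explanatory work.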

Macroeconomists may well defend their approach as just a ‘story-telling’ approach, from which they can draw lessons but which isn’t meant to hold in the same manner as engineering theory. Perhaps this is defensible in itself, but (a) personally, I’d hope for better and (b) in practice, this seems to mean each economist can pick and choose whichever story they want to tell based on their prior political beliefs. If macroeconomists are content conversing in mathematical fables, they should keep these conversations to themselves and refrain from forecasting or using them to inform policy. Until then, I’ll rely on macroeconomic frameworks which are less mathematically ‘sophisticated’, but which generate ex ante predictions that cover a wide range of observations, and which do not rely on the invocation of special frictions to explain persistent deviations from these predictions.


The Illusion of Mathematical Certainty

Nate Silver’s questionable foray into predicting World Cup results got me thinking about the limitations of maths in economics (and the social sciences in general). I generally stay out of this discussion because it’s completely overdone, but I’d like to rebut a popular defence of mathematics in economics that I don’t often see challenged. It goes something like this:

Everyone has assumptions implicit in the way they view the world. Mathematics allows economists to state our assumptions clearly and make sure our conclusions follow from our premises so we can avoid fuzzy thinking.

I do not believe this argument stands on its own terms. A fuzzy concept does not become any less fuzzy when you attach an algebraic label to it and stick it into an equation with other fuzzy concepts to which you’ve attached algebraic labels (a commenter on Noah Smith’s blog provided a great example of this by mathematising Freud’s Oedipus complex and pointing out it was still nonsense). Similarly, absurd assumptions do not become any less absurd when they are stated clearly and transparently, and especially not when any actual criticism of these assumptions is brushed off on the grounds that “all models are simplifications“.

Furthermore, I’m not convinced that using mathematics actually brings implicit assumptions out into the open. I can’t count the number of times I’ve seen people invoke demand-supply without understanding that it is built on the assumption of perfect competition (and refusing to acknowledge this point when challenged). The social world is inescapably complex, so there are an overwhelming variety of assumptions built into any type of model, theory or argument that tries to understand it. These assumptions generally remain unstated until somebody who is thinking about an issue – with or without mathematics – comes along and points out their importance.

For example, consider Michael Sandel’s point that economic theory assumes the value or characteristics of commodities are independent of their price and sale, and once you realise this is unrealistic (for example with sex), you come to different conclusions about markets. Or Robert Prasch’s point that economic theory assumes that, for any two commodities, there is some price at which one will be preferred to the other, which implies that at some price you’d substitute beer for your dying sister’s healthcare*. Or William Lazonick’s point that economic theory presumes labour productivity to be innate and transferable, whereas many organisations these days benefit from moulding their employees’ skills to be organisation-specific. I could go on, but the point is that economic theory remains full of implicit assumptions. Understanding and modifying these is a neverending battle that mathematics does not come close to solving.

Let me stress that I am not arguing against the use of mathematics; I’m arguing against using gratuitous, bad mathematics as a substitute for interesting and relevant thinking. If we wish to use mathematics properly, it is not enough to express properties algebraically; we have to define the units in which these properties are measured. No matter how logical mathematics makes your theory appear, if the units of key parameters are poorly defined, the equations will not balance dimensionally and the theory will be logical nonsense. Furthermore, it has to be demonstrated that the maths is used to come to new, falsifiable conclusions, rather than rationalising things we already know. Finally, it should never be presumed that stating a theory mathematically somehow guards that theory against fuzzy thinking, poor logic or unstated assumptions. There is no reason to believe it is a priori desirable to use mathematics to state a theory or explore an issue, as some economists seem to think.
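To illustrate the units point with a familiar example of my own choosing (not one from the arguments above): carry the dimensions through the ubiquitous Cobb-Douglas production function and see what the ‘technology’ parameter must absorb.

```latex
% Dimensional bookkeeping for the Cobb-Douglas production function.
% If [Y] = goods/year, [K] = machine-years and [L] = worker-hours/year, then
Y = A\,K^{\alpha}L^{1-\alpha}
\quad\Longrightarrow\quad
[A] = \frac{[Y]}{[K]^{\alpha}\,[L]^{1-\alpha}},
% so the units of A change with every re-estimate of alpha: the equation
% only 'balances' because A is defined as whatever is left over.
```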

*This has a name in economics: the axiom of gross substitution. However, it often goes unstated or at least underexplored: for example, these two popular microeconomics texts do not mention it at all.


Should Libertarians Embrace ‘Left’-Heterodox Economics?

Recent posts by Noah Smith, David Henderson and Daniel Kuehn on the relationship between economics, ‘free markets’ and policy in general got me thinking about libertarians and how accepting they should be of marginalist economics, as well as how open they should be to non-marginalist alternatives. It seems to me there is an unspoken bond between marginalist economics and libertarianism (even Austrianism shares some major features with neoclassical economics), and so there may be a tendency for libertarians to have strong priors against post-Keynesian, Sraffian, Marxist, Behavioural, Ecological and other types of economics that dispute this general framework.

Let me note that I’m not accusing libertarians of being generally hostile to heterodox economics – I’m sure there are some who are and some who aren’t. Instead, I’m just warning against such a possibility, and offering some heterodox ideas to which libertarians might be receptive.

Behavioural/post-Keynesian consumer theory: behavioural economics sometimes elicits rebukes from libertarians, as it seems to imply that ordinary people are not able to make decisions rationally, and therefore that policy makers should help them along their way. Naturally, libertarians object to this idea, questioning the experimental methods of behavioural economics, pointing out that policy makers are themselves imperfect, and so forth. I’m not going to comment on the efficacy of these arguments here – sometimes they are fair, sometimes less so. Instead, what I want to point out is that while some behavioural economics implies a role for activist policy, it’s not necessarily the case that a view of consumers which differs from the optimising agent renders the agent somehow irrational and therefore ripe for intervention.

One such example is a version of the mental accounting model, used in post-Keynesian consumer theory, in which consumers organise their budget into categories before making spending decisions. Consumers will not spend money in one category until they have had their needs in a more ‘basic’ or ‘fundamental’ category satisfied, which creates a Maslow-esque hierarchy of spending – starting with necessities such as food & shelter and culminating in yachts & modern art. This means relative price changes do not have as much of an impact on the type of goods bought as implied by the utility maximising model; instead, the amount spent on different types of goods is primarily determined by the consumer’s level of income.

On first inspection, this might seem to imply a tirade against the efficacy of the price system for coordinating preferences and scarcity, as well as a comment on the ‘wastefulness’ of inequality (and perhaps it could be interpreted as such). However, this doesn’t necessarily make the theory generally ‘anti-libertarian’. In fact, one major implication is that placing high taxes on something low in someone’s hierarchy will not have much impact on their spending, and hence ‘sin taxes’ – which are a major expense for the poor – will not reduce their consumption of alcohol and tobacco substantially; instead, these things will simply take up more and more of their income (which is pretty consistent with available evidence). This implies that paternalistic tax policies aimed at the poor will generally fail to achieve their aims.
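A minimal sketch of the mechanism, with a made-up hierarchy and illustrative prices: categories are filled in order of priority, so taxing a good near the bottom of the hierarchy (i.e. high priority) changes what is left over, not how much of the good is bought.

```python
# Hierarchical (mental-accounting) budgeting: spend on categories in order
# of priority; whatever remains is discretionary. Numbers are illustrative.

def spend(income, categories):
    plan = {}
    for name, price, quantity in categories:  # ordered by priority
        cost = min(price * quantity, income)
        plan[name] = cost
        income -= cost
    plan["discretionary"] = income
    return plan

before = [("food & shelter", 1.0, 500), ("tobacco", 5.0, 20)]
after = [("food & shelter", 1.0, 500), ("tobacco", 7.5, 20)]  # 50% sin tax

print(spend(1000, before))  # tobacco: 100.0, discretionary: 400.0
print(spend(1000, after))   # tobacco: 150.0, discretionary: 350.0
```

The tax leaves the quantity of tobacco untouched and simply eats the income that would have gone further down the hierarchy – the prediction described above.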

The Market for Lemons (TML): George Akerlof’s famous paper explored information asymmetry, using used car markets as its primary example. Akerlof was trying to understand what buyers do when they face a product of unknown quality, and argued that since they are unsure, they will only be willing to bid the average expected value of a car in the market. However, if the seller is selling a ‘good’ car, its value will be above this average, so the seller will not sell it at the price the buyer offers. The result is that the best sellers drop out of the market, creating a cumulative process which results in the market unravelling completely.
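The unravelling is easy to trace numerically. A sketch, assuming (as in textbook renditions of Akerlof) that qualities are uniformly distributed up to 2000 and that buyers bid the average quality of whatever remains on the market:

```python
# Market-for-lemons unravelling: buyers bid the average quality of cars still
# for sale; sellers withdraw any car worth more than the bid; repeat.

bid = 1000.0  # first bid: average quality of all cars, uniform on [0, 2000]
for rnd in range(1, 8):
    cutoff = bid        # sellers keep cars worth more than the current bid
    bid = cutoff / 2    # average quality of the cars that remain
    print(f"round {rnd}: cars above {cutoff:.0f} withdrawn, next bid {bid:.0f}")
# The bid halves each round: in the model, trade unravels towards zero.
```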

Though theoretically neat and compelling, this ‘seminal’ example of market failure has always struck me as incredibly weak, for the simple reason that used car markets do not actually fall apart. Why? Maybe people aren’t rational maximisers etc etc (for example, in another nod to behavioural economics, it may be that buyers’ irrational overconfidence leads them to go ahead with a purchase, even if it’s statistically likely they’ll get a ‘lemon’). Ultimately, though, I’d argue the answer is that capitalism – or if you prefer, ‘the market’ – is a network of historically contingent institutions and social interactions, rather than abstract individuals trading in a vacuum where outcomes are mathematically knowable. The reason used car markets work ‘despite’ information asymmetry is due to hard-to-establish trust and norms between buyers and sellers, and due to intermediaries such as Auto Trader, which spring up to help both sides avoid being ripped off. I’ve not seen anyone provide an example of the process TML outlines actually occurring, so I don’t see why it adds to our understanding of markets.

To be fair to Austrians, they have been talking about ‘the market as a social process’ for a long time, and in places have disputed the Lemons Model on similar grounds to the above. Hence they have something in common with old institutionalists, Marxists (to a degree) and perhaps even hard-to-place heterodox economists like Tony Lawson, who argues economics should primarily be a historical, rather than mathematical, subject. To put it another way, while heterodox economists typically advocate a move away from marginalist economics to understand why capitalism doesn’t work, such a move may also be necessary to understand why it does.

Mark up Pricing: Post-Keynesian, Sraffian and Institutionalist economics typically subscribe to the cost-plus theory of prices, which states that businesses set prices at their average cost per unit, plus a mark up. Furthermore, they avoid price changes where possible, preferring to keep their prices stable for long periods of time to yield a target rate of profit, varying quantity rather than price, and keeping spare capacity and stocks so that they can do so. The problem libertarians might have with this is that it implies prices are somewhat arbitrary, do not usually ‘clear’ markets, and do not adjust to the preferences of consumers especially smoothly. However, while these things may be true, they do not mean mark up pricing comes with no benefits.
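A sketch of the behaviour being described, with illustrative numbers: price is average unit cost at normal output plus a markup chosen to hit the target rate of profit, and demand fluctuations are absorbed by quantity and buffer stocks rather than by re-pricing.

```python
# Cost-plus pricing sketch: set price once, as a markup over average unit
# cost, then meet demand swings from capacity and stocks at that fixed price.
# All numbers are illustrative.

unit_cost = 8.0                    # average cost per unit at normal output
markup = 0.25                      # chosen to yield the target rate of profit
price = unit_cost * (1 + markup)   # = 10.0, then held fixed

stock, capacity = 300, 1000
for demand in [900, 1200, 800]:
    produced = min(demand, capacity)
    stock -= demand - produced     # excess demand is met from buffer stocks
    print(f"demand {demand}: sold {demand} at {price}, stock now {stock}")
# The 1200 surge is served from stock at an unchanged price; an auction-style
# market would instead have rationed it with a price spike.
```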

In my opinion, one such benefit is stability: I’m glad I can rely on prices only changing every so often, and that if there are a lot of people at the hairdresser’s, the price isn’t raised to ‘clear’ the market. Furthermore, the fact that firms keep buffer stocks and can adjust quantity instead of price allows them to deal with uncertainty and unexpected demand more easily, making them more adaptable to real world conditions than if they always squeezed every drop out of their existing capacity. I’m not going to pretend post-Keynesian pricing theory doesn’t imply some anti-libertarian policies (particularly with regard to price regulation), but it’s certainly not a one-sided idea, and its policy implications are open to further interpretation.

I generally prefer to refrain from immediately linking everything to policy as I have done above, because, well, there are enough people doing that. However, the examples I’ve given actually help to demonstrate a point about the relationship between economic analysis and policy: theories with premises that seem to imply a certain policy may not do so once you’ve followed them through to their conclusions. What’s more, the same analysis can seem to imply different policies from different perspectives (at its most extreme, Austrian Business Cycle Theory seems to imply that even a teensy regulation will send capitalism off the rails, which could be interpreted as a damning criticism if you were a leftist). This means calls for pluralism in economics should be embraced by all, even if on the surface some ‘alternatives’ to mainstream economics seem to conflict with one’s world view.


Pieria: How Not to Do Macroeconomics, Part II

I have a new post on Pieria, following up on mainstream macro and secular stagnation. The beginning is a restatement of my critique of EM/a response to Simon Wren-Lewis, but the main nub of the post is (hopefully) a more constructive effort at macroeconomics, from a heterodox perspective:

There are two major heterodox theories which help to understand both the 2008 crisis and the so-called period of ‘secular stagnation’ before and after it happened: Karl Marx’s Tendency of the Rate of Profit to Fall (TRPF), and Hyman Minsky’s Financial Instability Hypothesis (FIH). I expect that neither of these would qualify as ‘precise’ or ‘rigorous’ enough for mainstream economists – and I’ve no doubt the mere mention of Marx will have some reaching for the Black Book of Communism – but the models are relatively simple, offer an understanding of key mechanisms and also make empirically testable predictions. What’s more, they do not merely isolate abstract mechanisms, but form a general explanation of the trends in the global economy over the past few decades (both individually, but even more so when combined). Marx’s declining RoP serves as a material underpinning for why secular stagnation and financialisation get started, while Minsky’s FIH offers an excellent description of how they evolve.

I have two points that I wanted to add, but thought they would clog up the main post:

First, in my previous post, I referenced Stock-Flow Consistent models as one promising future avenue for fully-fledged macroeconomic modelling, a successor to DSGE. Other candidates might include Agent-Based Modelling, models in econophysics or Steve Keen’s systems dynamics approach. However, let me say that – as far as I’m aware – none of these approaches yet reach the kind of level I’m asking of them. I endorse them on the basis that they have more realistic foundations, and have had fewer intellectual resources poured into them than macroeconomic models, so they warrant further exploration. But for now, I believe macroeconomics should walk before it can run: clearly stated, falsifiable theories, which lean on maths where needed but do not insist on using it no matter what, are better than elaborate, precisely stated theories which are so abstract it’s hard to determine how they are relevant at all, let alone falsify them.
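As an example of the ‘walking’ I have in mind, here is a sketch of Goodwin’s growth cycle – the skeleton on which Keen’s Minsky model is built – with illustrative, uncalibrated parameters:

```python
# Goodwin growth cycle: wage share w and employment rate e interact like
# predator and prey. High employment drives wages up, squeezing the profits
# that fund accumulation, which pulls employment back down -- endogenous
# cycles, no exogenous shocks required. Parameters are illustrative.

v = 3.0                    # capital-output ratio
alpha, beta = 0.02, 0.01   # productivity and labour-force growth
gamma, rho = 0.5, 0.6      # linear Phillips curve: real wage growth = rho*e - gamma

w, e, dt = 0.85, 0.90, 0.01
for step in range(int(60 / dt)):            # sixty "years" by Euler steps
    de = e * ((1 - w) / v - alpha - beta)   # accumulation out of profits
    dw = w * (rho * e - gamma - alpha)      # wage share chases employment
    e, w = e + de * dt, w + dw * dt
    if step % int(15 / dt) == 0:
        print(f"t={step * dt:4.0f}: employment={e:.3f}, wage share={w:.3f}")
```

Crude as it is, it states its mechanism clearly and makes falsifiable claims about the joint movement of distribution and employment.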

Second, these are just two examples, coloured no doubt by my affiliation with what you might call left-heterodox schools of thought. However, I’m sure Austrian economics is quite compatible with the idea of secular stagnation, since their theory centres around how credit expansion and/or low interest rates cause a misallocation of investment, resulting in unsustainable bubbles. I leave it to those more knowledgeable about Austrian economics than me to explore this in detail.


How Not to Do Macroeconomics

A frustrating recurrence for critics of ‘mainstream’ economics is the assertion that they are criticising the economics of bygone days: that those phenomena which they assert economists do not consider are, in fact, at the forefront of economics research, and that the critics’ ignorance demonstrates that they are out of touch with modern economics – and therefore not fit to criticise it at all.

Nowhere is this more apparent than with macroeconomics. Macroeconomists are commonly accused of failing to incorporate dynamics in the financial sector such as debt, bubbles and even banks themselves, but while this was true pre-crisis, many contemporary macroeconomic models do attempt to include such things. The renowned economist Thomas Sargent charged that such criticisms “reflect either woeful ignorance or intentional disregard for what much of modern macroeconomics is about and what it has accomplished.” So what has it accomplished? One attempt to model the ongoing crisis using modern macro is this recent paper by Gauti Eggertsson & Neil Mehrotra, which tries to understand secular stagnation within a typical ‘overlapping generations’ framework. It’s quite a simple model, deliberately so, but it helps to illustrate the troubles faced by contemporary macroeconomics.

The model

The model has only 3 types of agents: young, middle-aged and old. The young borrow from the middle-aged, who receive an income, some of which they save for old age. Predictably, the model employs all the standard techniques that heterodox economists love to hate, such as utility maximisation and perfect foresight. However, the interesting mechanics here are not in these; instead, what concerns me is the way ‘secular stagnation’ itself is introduced. In the model, the limit to how much young agents are allowed to borrow is exogenously imposed, and deleveraging/a financial crisis begins when this amount falls for unspecified reasons. In other words, in order to analyse deleveraging, Eggertsson & Mehrotra simply assume that it happens, without asking why. As David Beckworth noted on Twitter, this is simply assuming what you want to prove. (They go on to show similar effects can occur due to a fall in population growth or an increase in inequality, but again, these changes are modelled as exogenous.)
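To see how mechanical this is, here is my stripped-down rendering of the model’s loan market (log utility, constant middle-period income, no population growth – a simplification of the paper’s simple endowment case, so treat the details as approximate). The ‘crisis’ consists of typing in a smaller number for the debt limit D:

```python
# Stripped-down Eggertsson-Mehrotra loan market. Young agents borrow up to an
# exogenous debt limit D; middle-aged agents, after repaying their own
# youthful debt D_prev out of income Y, lend the fraction beta/(1+beta) of
# what remains. Loan-market clearing then gives (per generation):
#     1 + r = ((1 + beta) / beta) * D / (Y - D_prev)
# Illustrative numbers; rates are per generation, not per year.

beta, Y = 0.96, 100.0

def real_rate(D, D_prev):
    return (1 + beta) / beta * D / (Y - D_prev) - 1

print(f"steady state, D=35:         r = {real_rate(35, 35):+.2f}")
print(f"deleveraging 'shock', D=20: r = {real_rate(20, 35):+.2f}")
print(f"new steady state, D=20:     r = {real_rate(20, 20):+.2f}")
# The debt limit falls "for some reason" and the natural rate turns negative,
# which is then what generates the slump -- the crisis itself is assumed.
```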

It gets worse. Recall that the idea of secular stagnation is, at heart, a story about how over the last few decades we have not been able to create enough demand with ‘real’ investment, and have subsequently relied on speculative bubbles to push demand to an acceptable level. This was certainly the angle from which Larry Summers and subsequent commentators approached the issue. It’s therefore surprising – ridiculous, in fact – that this model of secular stagnation doesn’t include banks, and has only one financial instrument: a risk-less bond that agents use to transfer wealth between generations. What’s more, as the authors state, “no aggregate savings is possible (i.e. there is no capital)”. Yes, you read that right. How on earth can our model understand why there is not enough ‘traditional’ investment (i.e. capital formation), and why we need bubbles to fill that gap, if we can have neither investment nor bubbles?

Naturally, none of these shortcomings stop Eggertsson & Mehrotra from proceeding, and ending the paper in economists’ favourite way…policy prescriptions! Yes, despite the fact that this model is not only unrealistic but quite clearly unfit for purpose on its own terms, and despite the fact that it has yielded no falsifiable predictions (?), the authors go on to give policy advice about redistribution, monetary and fiscal policy. Considering this paper is incomprehensible to most of the public, one is forced to wonder to whom this policy advice is accountable. Note that I am not implying policymakers are puppets on the strings of macroeconomists, but things like this definitely contribute to debate – after all, secular stagnation was referenced by the Chancellor in the UK Parliament (though admittedly he did reject it). Furthermore, when you have economists with a platform like Paul Krugman endorsing the model, it’s hard to argue that it couldn’t have at least some degree of influence on policy-makers.

Now, I don’t want to make general comments solely on the basis of this paper: after all, the authors themselves admit it is only a starting point. However, some of the problems I’ve highlighted here are not uncommon in macro: a small number of agents on whom some rather arbitrary assumptions are imposed to create loosely realistic mechanics, an unexplained ‘shock’ used to create a crisis. This is true of the earlier, similar paper by Eggertsson & Krugman, which tries to model debt-deflation using two types of agents: ‘patient’ agents, who save, and ‘impatient’ agents, who borrow. Once more, deleveraging begins when the exogenously imposed constraint on the impatient agent’s borrowing falls For Some Reason, and differences in the agents’ respective consumption levels reduce aggregate demand as the debt is paid back. Again, there are no banks, no investment and no real financial sector. Similarly, even the far more sophisticated model by Markus K. Brunnermeier & Yuliy Sannikov – which actually includes investment and a financial sector – still only has two agents, and relies on exogenous shocks to drive the economy away from its steady state.

Whither macroeconomics?

Why do so many models seem to share these characteristics? Well, perhaps thanks to the Lucas Critique, macroeconomic models must be built up from optimising agents. Since modelling human behaviour is inconceivably complex, mathematical tractability forces economists to make important parameters exogenous, and to limit the number (or number of types) of agents in the model, as well as these agents’ goals & motivations. Complicated utility functions which allow for fairly common properties, like relative status effects or different levels of risk aversion at different incomes, may be possible to explore in isolation, but they’re not generalisable to every case, or else the models become impossible to solve or indeterminate. The result is that a model which tries to explore something like secular stagnation can end up being highly stylised, to the point of missing the most important mechanics altogether. It will also be unable to incorporate other well-known developments from elsewhere in the field.

This is why I’d prefer something like Stock-Flow Consistent models, which focus on accounting relations and flows of funds, to be the norm in macroeconomics. As economists know all too well, all models abstract from some things, and when we are talking about big, systemic problems, it’s not particularly important whether Maria’s level of consumption is satisfying a utility function. What’s important is how money and resources move around: where they come from, and how they are split – on aggregate – between investment, consumption, financial speculation and so forth. This type of methodology can help understand how the financial sector might create bubbles; or why deficits grow and shrink; or how government expenditure impacts investment. What’s more, it will help us understand all of these aspects of the economy at the same time. We will not have an overwhelming number of models, each highlighting one particular mechanic, with no ex ante way of selecting between them, but one or a small number of generalisable models which can account for a large number of important phenomena.
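To show this is not hand-waving, here is a sketch along the lines of the simplest model in Godley & Lavoie’s Monetary Economics (the ‘SIM’ model) – my stripped-down rendering, with illustrative parameters:

```python
# Simplest stock-flow consistent model, after Godley & Lavoie's "SIM".
# Government spends G and taxes income at rate theta; households consume out
# of disposable income and out of accumulated money balances H. Every flow
# ends up somewhere: what households don't spend piles up as money, which is
# simultaneously the government's debt. Parameters are illustrative.

theta = 0.2          # tax rate
a1, a2 = 0.6, 0.4    # propensities to consume out of income and wealth
G = 20.0             # government spending

H = 0.0              # opening money balances
for t in range(1, 61):
    Y = (G + a2 * H) / (1 - a1 * (1 - theta))  # output = C + G, solved out
    YD = (1 - theta) * Y                       # disposable income
    C = a1 * YD + a2 * H                       # consumption function
    H += YD - C                                # money accumulates from saving
    if t in (1, 5, 20, 60):
        print(f"t={t}: Y={Y:.1f}, H={H:.1f}")
# Y converges to G/theta = 100 and H to a stable stock: the steady state is
# pinned down by accounting consistency, not by anyone's utility function.
```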

Finally, to return to the opening paragraph, this paper may help to illustrate a lesson for both economists and their critics. The problem is not that economists are not aware of or never try to model issue x, y or z. Instead, it’s that when they do consider x, y or z, they do so in an inappropriate way, shoehorning problems into a reductionist, marginalist framework, and likely making some of the most important working parts exogenous. For example, while critics might charge that economists ignore mark-up pricing, the real problem is that when economists do include mark-up pricing, the mark-up is over marginal rather than average cost, which is not what firms actually do. While critics might charge that economists pay insufficient attention to institutions, a more accurate critique is that when economists include institutions, they are generally considered as exogenous costs or constraints, without any two-way interaction between agents and institutions. While it’s unfair to say economists have not done work that relaxes rational expectations, the way they do so still leaves agents pretty damn rational by most people’s standards. And so on.

However, the specific examples are not important. It seems increasingly clear that economists’ methodology, while it is at least superficially capable of including everything from behavioural economics to culture to finance, severely limits their ability to engage with certain types of questions. If you want to understand the impact of a small labour market reform, or how auctions work, or design a new market, existing economic theory (and econometrics) is the place to go. On the other hand, if you want to understand development, historical analysis has a lot more to offer than abstract theory. If you want to understand how firms work, you’re better off with survey evidence and case studies (in fairness, economists themselves have been moving some way in this direction with Industrial Organisation, although if you ask me oligopoly theory has many of the same problems as macro) than marginalism. And if you want to understand macroeconomics and finance, you have to abandon the obsession with individual agents and zoom out to look at the bigger picture. Otherwise you’ll just end up with an extremely narrow model that proves little except its own existence.


Yes, The Cambridge Capital Controversies Matter

I rarely (never) post based solely on a quick thought or quote, but this just struck me as too good not to highlight. It’s from a book called ‘Capital as Power’ by Jonathan Nitzan and Shimshon Bichler, which challenges both the neoclassical and Marxian conceptions of capital, and is freely available online. The passage in question pertains to the way neoclassical economics has dealt with the problems highlighted during the well documented Cambridge Capital Controversies:

The first and most common solution has been to gloss the problem over – or, better still, to ignore it altogether. And as Robinson (1971) predicted and Hodgson (1997) confirmed, so far this solution seems to be working. Most economics textbooks, including the endless editions of Samuelson, Inc., continue to ‘measure’ capital as if the Cambridge Controversy had never happened, helping keep the majority of economists – teachers and students – blissfully unaware of the whole debacle.

A second, more subtle method has been to argue that the problem of quantifying capital, although serious in principle, has limited practical importance (Ferguson 1969). However, given the excessively unrealistic if not impossible assumptions of neoclassical theory, resting its defence on real-world relevance seems somewhat audacious.

The second point is something I independently noticed: appealing to practicality when it suits the modeller, but insisting it doesn’t matter elsewhere. If there is solid evidence that reswitching isn’t important, that’s fine, but then we should also take on board that agents don’t optimise, markets don’t clear, expectations aren’t rational, etc. etc. If we do that, pretty soon the assumptions all fall away and not much is left.

However, it’s the authors’ third point that really hits home:

The third and probably most sophisticated response has been to embrace disaggregate general equilibrium models. The latter models try to describe – conceptually, that is – every aspect of the economic system, down to the smallest detail. The production function in such models separately specifies each individual input, however tiny, so the need to aggregate capital goods into capital does not arise in the first place.

General equilibrium models have serious theoretical and empirical weaknesses whose details have attracted much attention. Their most important problem, though, comes not from what they try to explain, but from what they ignore, namely capital. Their emphasis on disaggregation, regardless of its epistemological feasibility, is an ontological fallacy. The social process takes place not at the level of atoms or strings, but of social institutions and organizations. And so, although the ‘shell’ called capital may or may not consist of individual physical inputs, its existence and significance as the central social aggregate of capitalism is hardly in doubt. By ignoring this pivotal concept, general equilibrium theory turns itself into a hollow formality.

In essence, neoclassical economics dealt with its inability to model capital by…eschewing any analysis of capital. However, the theoretical importance of capital for understanding capitalism (duh) means that this has turned neoclassical ‘theory’ into a highly inadequate tool for doing what theory is supposed to do, which is to further our understanding.

Apparently, if you keep evading logical, methodological and empirical problems, it catches up with you! Who knew?


Teaching Economics? Start with Key Contested Ideas

How economics is taught has been the subject of a lot of debate recently. Although there have been a lot of good points made, in my opinion Andrew Lainton‘s recent blog post hits the nail on the head: we need to begin economics education with a discussion of key, contested ideas.

Starting with contested ideas has a few major benefits. First, it immediately shows students what economics is: a subject where there is a lot of disagreement, and where key ideas are often not well understood, even by the best. Second, it allows students to grapple with the kinds of critical questions that, in my experience, people generally have in mind when they think of ‘economics’: where do growth and profits come from? How do things ‘work’? Third, it allows us to intertwine the teaching of these concepts with economic history and the history of thought.

Lainton’s key contested idea is savings: how naive national accounting might make you believe that saving instantly creates investment; how Kalecki and Keynes showed that it’s closer to the other way around; and on to modern debates that add nuances to these simplified expositions. Naturally, this would also tie in with debates about the banking system, loanable funds and endogenous versus exogenous money. On top of ‘savings’, I can think of quite a few other important economic ideas that are not agreed upon, but are central to the discipline:

Decision making and expectations

How do people make decisions? This question is clearly central to economics, as any economic model that explicitly includes agents must make some assumption about what drives these agents’ decisions. In modern economics, an agent’s decision rule generally rests on seeking some form of ‘gain’, whether subjective satisfaction or simply units of money. Economists themselves have also, to their credit, pushed behavioural and even neurological investigations into decision making. However, much of this has yet to filter down to the main models/courses, even though it should really be at the forefront of economic modelling.

All too often, the most mathematically tractable models such as utility maximisation and rational expectations are simply assumed, perhaps with caveats, but not with any real discussion of whether they represent human behaviour. Well-established psychological characteristics and behavioural heuristics/biases are ignored, even though they may alter the analysis of choice in fundamental ways. Public officials are often assumed to follow behaviour that creates their personally preferred outcome, despite important evidence to the contrary. It is assumed the public understands the fundamentals of the economy, even though a lot of evidence suggests this is way, way off. Decisions in the workplace that concern morale, hierarchy and norms are often disregarded, despite evidence that they are of utmost importance.

However, my point isn’t necessarily about which models are right or wrong. It’s that these debates about how people act, and based on which motives and expectations, are not only incredibly interesting but are incredibly important. Such debates could also tie in with a comprehensive discussion of the Lucas Critique – not as a binary phenomenon that can be solved with microfoundations, but as an ongoing problem that requires us to evaluate the way the parameters of the economy change over time and with policy, culture and so forth. This would allow students to see how the economy evolves, and how its behaviour depends on fundamental questions about human behaviour.

Value

Theories of value underlie economic theories, whether economists like it or not – in fact, it’s pretty difficult (impossible?) to judge the “performance” of the economy without a theory of value. Classical economics was built on the Labour Theory of Value (LTV), and distinguished between the price of an object (exchange-value) and its value to whomever used it (use-value). Marginalist economics is built on the Subjective Theory of Value (STV), which tends to combine use and exchange value into mathematically ordered preferences. GDP calculations simply measure ‘value added’ as a monetary quantity. There are also other, albeit less popular, theories of value, such as those based on agriculture and energy.

A crucial point here is that the concept of ‘value’ is not necessarily well-defined, and each theory of value generally has something slightly different in mind when it uses the term. For the (Marxist) LTV, value refers to an objective quality: the total productive ‘value’ in the economy, which is expressed as an exchange relationship between commodities, and originates solely from labour. For the STV, value refers to the subjective ‘surplus’ gained from transactions, which neoclassical theory seeks to optimise to maximise social welfare. For theories of value based on the natural sciences, value refers to more physical qualities, such as how energy is transformed in production and the limits to this process. However, the common ground between theories is the question of how we create more than we had – and what to do about it.

I expect a lot of economists would regard the STV as largely obvious and not up for debate, but if it’s so obvious and important that’s even more reason to study it explicitly – after all, Newton’s Laws are not tucked away underneath classical physics: they are explicit, and their empirical relevance is frequently demonstrated to students. Clearly, we can’t demonstrate the empirical relevance of a theory of value (hey, it’s almost as if economics is not a science!) but we can discuss it in depth, and how it is a relevant and necessary backdrop to formulating theories about utility, surplus and profit.

What is economics?

It’s a testament to how contested the field of economics is that even the definition is not agreed upon. Open a ‘pop’ economics book and you’ll find a definition such as “the study of how people respond to incentives”. Another popular mainstream definition is “the allocation of scarce resources” or even “satisfying unlimited wants with scarce resources”. Classical economics – and more recently, Sraffians – considers economics the study of how society reproduces itself. Austrians might give you a definition that says something about human action and the market system. The definition given by Wikipedia is “the study of production, distribution and consumption”. I’m sure there are many more out there.

Agreeing on a definition of economics would put the discipline on surer footing. Right now it occupies a space where it is simultaneously used as an all-encompassing worldview, and as a very narrow toolkit that only investigates one or two things at a time (I expect many economists would basically consider themselves applied statisticians or econometricians). I sometimes even find that economists fall back on defining economics by “what economists do”, which is a rather weak (and circular) definition. Given that we are not even sure which problems economic theories are designed to understand and solve, is it any wonder people can’t agree on which ones to use?

This post is by no means exhaustive. Off the top of my head, some other relevant contested ideas might be: capital; money; how to measure the economy; different economic systems; institutions; policy and economists’ relationship with it. This kind of approach is surely better for furthering students’ understanding than simply teaching a set of abstract theories which are labelled ‘economics’, often with little critical engagement. It would open students’ minds to the kinds of difficult and relevant questions that are currently either shied away from, or only open to those who have completed an economics PhD. I expect many would also leave with an understanding of economics closer to what students currently expect (and do not really get) from an economics education.


Pieria: How Conservative is Mainstream Economics?

I have a new article in Pieria, arguing that the image of mainstream economists as rabid free-marketeers is not entirely without foundation:

There is quite a disconnect between mainstream economics as seen in the public eye and as seen by economists themselves. A lot of media criticism of economics – and the Guardian seems to be going mad on this recently – paints mainstream economic theory as supporting a ‘free market’ or ‘neoliberal’ worldview, possibly in cahoots with the elites, and largely unconcerned with human welfare. Economists tend to switch off in the face of such criticisms, arguing that the majority of them, along with their theories, do not support such policies…

…Yet I think there is a good argument to be made, not that mainstream economics necessarily implies particular policies, but that it is easily utilised to push a certain worldview, based on which questions it asks and how the answers are modelled and presented. This worldview is what the public and journalists all too frequently encounter as ‘economics’, which is why they often conflate neoclassical with neoliberal ideas.

An interesting question – which I do not explore in the article, but have written about before, as has Peter Dorman – is the disparity between ‘econ101’ rhetoric and what economics actually implies. ‘Economics’ in the public image is generally used to justify counterintuitive or unpalatable positions such as opposing the minimum wage or supporting austerity, even though arguing unambiguously for these – particularly the latter – actually betrays considerable ignorance of ‘economics’ as a field.

Do I blame economists for this? Partly: I think economists should be more worried about their public image, whereas you often get the impression they are more concerned with being enlightened technocrats than anything else. However, politicisation isn’t unique to economics (consider climate change denial or evolution/religion), so it’s a bit unfair to single out economists in that sense. Having said that, 99% of scientists in the former fields are united against the pseudo-scientific caricatures of them in the media, whereas economists are far less able to convey a clear message to the public. In short, perhaps economists should figure things out amongst themselves before they rattle off lists of policy proposals based on their models.

Anyway, enjoy the piece!


How Economics Sees Reality

Something has been bothering me about the way evidence is (sometimes) used in economics and econometrics: theories are assumed throughout the interpretation of the data. The result is that it’s hard to end up questioning the model being used.

Let me give some examples. The delightful fellas at econjobrumours once disputed my argument that supply curves are flat or slope downward by noting that, yes, Virginia, in conditions where firms have market power (high demand, drought pricing) prices tend to go up. Apparently this “simple, empirical point” suffices to refute the idea that supply curves do anything but slope upward. But this is not true. After all, “supply curves slope downward/upward/wiggle around all over the place” is not an empirical statement. It is an interpretation of empirical evidence that also hinges on the relevance of the theoretical concept of the supply curve itself. In fact, the evidence, taken as a whole, actually suggests that the demand-supply framework is at best incomplete.

This is because we have two major pieces of evidence on this matter: higher demand/more market power increases price, and firms face constant or increasing returns to scale. These are contradictory when interpreted within the demand-supply framework, as they imply that the supply curve slopes in different directions. However, if we used a different model – say, one with a third term for ‘market power’, or a Kaleckian cost-plus model where the mark-up is a function of the “degree of monopoly” – that would no longer be the case. The rising supply curve rests on the idea that increasing prices reflect increasing costs, and therefore cannot incorporate these possibilities.
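As an aside, the cost-plus alternative takes one line to state. A sketch of Kalecki’s formulation (the functional form here is schematic, not his exact notation):

```latex
% Kaleckian cost-plus pricing: p is a markup over unit prime cost u, with the
% markup an increasing function of the degree of monopoly m:
p = \bigl(1 + \mu(m)\bigr)\,u, \qquad \mu'(m) > 0.
% Constant returns keep u flat as output expands (matching the
% returns-to-scale evidence), while booms and drought conditions raise m and
% hence p (matching the pricing evidence) -- no upward-sloping supply curve.
```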

Similarly, many empirical econometric papers use the neoclassical production function (recent one here), which states that output is derived from labour and capital, plus a few parameters attached to the variables, as a way to interpret the data. However, this again requires that we assume capital and labour, and the parameters attached to them, are meaningful, and that the data reflect their properties rather than something else. For example, the volume of labour employed moving a certain way only implies something about the ‘elasticity of substitution’ (the rate at which firms substitute between labour and capital) if you assume that there is an elasticity of substitution. However, the real-world ‘lumpiness’ of production may mean this is not the case, at least not in the smooth, differentiable way assumed by neoclassical theory.

Assuming such concepts when looking at data means that economics can become a game of ‘label the residual‘, despite the various problems associated with the variables, concepts and parameters used. Indeed, Anwar Shaikh once pointed out that the seeming consistency between the Cobb-Douglas production function and the data was essentially tautological, and so using the function to interpret any data, even the word “humbug” on a graph, would seem to confirm the propositions of the theory, simply because they follow directly from the way it is set up.
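Shaikh’s point can be verified in a few lines: take any output series whatsoever, impose only a constant labour share, and an exact ‘Cobb-Douglas’ emerges from the accounting. A sketch with arbitrary made-up data:

```python
# Shaikh's "humbug" result: with constant factor shares, the income identity
# Y = wL + rK can always be rewritten as an exact Cobb-Douglas,
# Y = A * K**(1-s) * L**s, whatever actually generated the data.
import numpy as np

rng = np.random.default_rng(42)
T = 40
L = 100 * np.exp(np.cumsum(rng.normal(0.01, 0.03, T)))  # arbitrary inputs
K = 200 * np.exp(np.cumsum(rng.normal(0.03, 0.05, T)))
Y = 300 * np.exp(np.cumsum(rng.normal(0.02, 0.10, T)))  # ANY output series

s = 0.7                          # constant labour share: the only assumption
w, r = s * Y / L, (1 - s) * Y / K            # factor prices from the identity

A = (w / s) ** s * (r / (1 - s)) ** (1 - s)  # "TFP" built from factor prices
print(np.allclose(Y, A * K ** (1 - s) * L ** s))  # True, to machine precision
```

The perfect fit is the income identity rearranged, not evidence about production – which is exactly why Shaikh could fit the word “humbug”.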

Joan Robinson made this basic point, albeit more strongly, concerning utility functions: we assume people are optimising utility, then fit whatever behaviour we observe into said utility function. In other words, we risk making the entire exercise “impregnably circular” (unless we extract some falsifiable propositions from it, that is). Frances Woolley’s admittedly self-indulgent playing around with utility functions and the concept of paternalism seems to demonstrate this point nicely.

Now, this problem is, to a certain extent, observed in all sciences – we must assume ‘mass’ is a meaningful concept to use Newton’s Laws, and so forth. However, in economics, properties are much harder to pin down, and so it seems to me that we must be more careful when making statements about them. Plus, in the murky world of statistics, we can lose sight of the fact that we are merely making tautological statements or running into problems of causality.

The economist might now ask how we would even begin to interpret the medley of data at our disposal without theory. Well, to make another tired science analogy, the advancement of science has often not resulted from superior ‘predictions’, but from identifying a closer representation of how the world works: the go-to example of this is Ptolemaic astronomy, which for a time made superior predictions to its heliocentric rival but was still wrong. My answer is therefore the same as it has always been: economists need to make better use of case studies and experiments. If we find out what’s actually going on underneath the data, we can use this to establish causal connections before interpreting it. This way, we can avoid problems of circularity, tautologies, and of trapping ourselves within a particular model.
