## The DSGE Dance

Something about the way economists construct their models doesn’t sit right.

Economic models are often acknowledged to be unrealistic, and Friedmanite ‘assumptions don’t matter’ style arguments are used to justify this approach. The result is that internal mechanics aren’t closely examined. However, when it suits them, economists are prepared to hold internal mechanics up to empirical verification – usually in order to preserve key properties and mathematical tractability. Models are thus constructed in such a way that, instead of trying to explain how the economy works, they deliberately avoid both difficult empirical and difficult logical questions. This is particularly noticeable with the Dynamic Stochastic General Equilibrium (DSGE) models commonly employed in macroeconomics.

Here’s a brief overview of how DSGE models work: the economy is assumed to consist of various optimising agents: firms, households, a central bank and so forth. The behaviour of these agents is specified by a system of equations, which is then solved to give the time path of the economy: inflation, unemployment, growth and so forth. Agents usually have rational expectations, and goods markets tend to clear (supply equals demand), though various ‘frictions’ may get in the way of this. Each DSGE model will usually focus on one or two ‘frictions’ to try and isolate key causal links in the economy.

Let me also say that I am approaching this issue tentatively, as I in no way claim to have an in-depth understanding of the mathematics used in DSGE models. But then, this isn’t really the issue: if somebody objects to utility as a concept, they don’t need to be able to solve a consumer optimisation problem; if someone objects to the idea that technology shocks cause recessions, they don’t need to be able to solve an RBC model. To use a tired analogy, I know nothing of the maths of epicycles, but I know it is an inaccurate description of planetary motion. While there is every possibility I’m wrong about the DSGE approach, that possibility doesn’t rest on the mathematics.

### Perverse properties?

DSGE has been around for a while, and along the way several ‘conundrums’ or inconsistencies have been discovered that could potentially undermine the approach. There are two main examples, both of which have similar implications: the possibility of multiple equilibria, and therefore indeterminacy. I’ll go over them briefly, although I won’t get into the details.

The first example is the Sonnenschein-Mantel-Debreu (SMD) theorem. Broadly speaking, this states that although we can derive strictly downward sloping demand curves from individually optimising agents, once we aggregate up to the whole economy, the interaction between agents and the resultant emergent properties mean that demand curves could have any shape. This creates the possibility of multiple equilibria, so logically the system could end up in any number of places. The SMD theorem is sometimes known as the ‘anything goes’ theorem, as it implies that an economy in general equilibrium could potentially exhibit all sorts of behaviour.
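To see what ‘anything goes’ can look like, here is a toy numerical sketch (my own illustration – the function is made up, not derived from any actual preferences, but the SMD theorem says that shapes like this cannot be ruled out a priori):

```python
# Hypothetical aggregate excess demand for one good: a cubic with three
# market-clearing prices. The SMD theorem implies that essentially any
# continuous function of this kind could arise from aggregating
# perfectly well-behaved individual demands.

def excess_demand(p):
    return -(p - 1.0) * (p - 2.0) * (p - 3.0)

def find_equilibria(lo=0.5, hi=3.6, steps=1000):
    """Scan a price grid and bisect each sign change of excess demand."""
    grid = [lo + i * (hi - lo) / steps for i in range(steps + 1)]
    roots = []
    for a, b in zip(grid, grid[1:]):
        if excess_demand(a) * excess_demand(b) < 0:
            for _ in range(60):  # bisection to pin down the root
                m = 0.5 * (a + b)
                if excess_demand(a) * excess_demand(m) <= 0:
                    b = m
                else:
                    a = m
            roots.append(0.5 * (a + b))
    return roots

equilibria = find_equilibria()  # three equilibrium prices: 1, 2 and 3
```

With three equilibria, stories about the market settling at ‘the’ equilibrium no longer pin anything down – which is exactly the indeterminacy problem.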

The second example is capital reswitching, the possibility of which was demonstrated by Piero Sraffa in his magnum opus Production of Commodities by Means of Commodities. The basic lesson is that the value of capital changes as the distribution (between profits and wages) changes, which means that one method of production can be the most profitable at both low and high rates of interest, while another is the most profitable in between. This is in contrast to the neoclassical approach, which suggests that the capital invested will increase (decrease) as the interest rate decreases (increases). The result is a non-linear relationship, and therefore the possibility of multiple equilibria.
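The arithmetic behind reswitching is simple enough to sketch. Here is a version of the standard numerical example from the capital debates (usually attributed to Samuelson’s 1966 ‘A Summing Up’; the code itself is my own toy illustration):

```python
# Technique A: 7 units of labour applied 2 periods before output.
# Technique B: 2 units applied 3 periods before, plus 6 units 1 period before.
# With the wage normalised to 1, the cost of a unit of output at interest
# rate r is the compounded wage bill:

def cost_a(r):
    return 7 * (1 + r) ** 2

def cost_b(r):
    return 2 * (1 + r) ** 3 + 6 * (1 + r)

# Writing x = 1 + r, cost_b - cost_a equals x * (2x^2 - 7x + 6), which has
# roots at x = 1.5 and x = 2: the switch points r = 50% and r = 100%.
cheapest = {r: ("A" if cost_a(r) < cost_b(r) else "B")
            for r in (0.3, 0.75, 1.2)}
# A is cheapest at r = 30%, B at 75%, then A *again* at 120%: reswitching.
```

The same technique returning at both ends of the interest-rate range is precisely what rules out the neat monotonic capital–interest relationship.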

That these issues could potentially cause problems is well known, but economists don’t see them as a problem. Here is an anonymous quote on the matter:

We’ve known for a long time one can construct GE models with perverse properties, but the logical possibility speaks nothing about empirical relevance. All these criticisms prove is that we cannot guarantee some properties hold a priori – but that’s not what we claim anyway, since we’re real economists, not austrian charlatans. Chanting that sole logical possibility of counterexamples by itself destroys large portions of economic theory is just idiotic.

As it happens, I agree: based on available evidence, neither reswitching nor the SMD theorem appears to be empirically relevant. For everyday goods, it is reasonable to suppose that demand will rise as price falls, and vice versa. Firms also rarely switch their techniques in the real world (though reswitching isn’t the main takeaway of the capital debates). So the perspective expressed above seems reasonable – that is, until we stop and consider the nature of DSGE models as a whole.

For the fact is that DSGE models themselves are not “empirically relevant”. They assume that agents are optimising, that markets tend to clear, and that the economy follows an equilibrium time path. They use ‘log linearisation’, a method which doesn’t even pretend to do anything other than make the equations easier to solve by forcibly eliminating the possibility of multiple equilibria. On top of this, they generally display poor empirical corroboration. Overall, the DSGE approach is structured toward preserving the use of microfoundations, while at the same time invoking various – often unrealistic – processes in order to generate something resembling dynamic behaviour.
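To be clear about what log linearisation does, here is a toy sketch (my own example, with made-up parameter values, using a simple Solow-style accumulation equation rather than a full DSGE model): the nonlinear law of motion is replaced by a single linear equation in log deviations from the steady state, which by construction has one equilibrium and one stable path.

```python
import math

s, alpha, delta = 0.2, 0.3, 0.1            # hypothetical parameters
# Nonlinear law of motion: k' = s*k**alpha + (1 - delta)*k
k_star = (s / delta) ** (1 / (1 - alpha))  # unique steady state of this toy model
# Log-linearised law of motion: k_hat' = lam * k_hat, where
# k_hat = ln(k/k*) and lam is the elasticity of k' w.r.t. k at k*:
lam = s * alpha * k_star ** (alpha - 1) + 1 - delta   # 0.93 here

k = 1.10 * k_star        # start 10% above the steady state
k_hat = math.log(1.10)
for _ in range(20):
    k = s * k ** alpha + (1 - delta) * k   # exact nonlinear dynamics
    k_hat = lam * k_hat                    # linear approximation

approx_k = k_star * math.exp(k_hat)        # very close to the exact k
```

Near the steady state the approximation is excellent – but that is precisely the point: anything interesting that happens far from the steady state, including any second equilibrium, has been assumed away before the model is even solved.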

Economists tacitly acknowledge this, as they will usually say that they use this type of model to highlight one or two key mechanics, rather than to attempt to build a comprehensive model of the economy. Ask an economist if people really maximise utility; if the economy is in equilibrium; if markets clear, and they will likely answer “no, but it’s a simplification, designed to highlight problem x”. Yet when questioned about some of the more surreal logical consequences of all of the ‘simplifications’ made, economists will appeal to the real world. This is not a coherent perspective.

### Some methodology

Neoclassical economics uses an ‘axiomatic deductive’ approach, attempting to logically deduce theories from basic axioms about individual choice under scarcity. Economists have a stock of reasons for doing this: it is ‘rigorous’; it bases models on policy-invariant parameters; it incorporates the fact that the economy ultimately consists of agents consciously making decisions; and so on. If you were to suggest internal mechanics based on simple empirical observations, conventional macroeconomists would likely reject your approach.

Modern DSGE models are constructed using these types of axioms, in such a way that they avoid logical conundrums like the SMD theorem and reswitching. This allows macroeconomists to draw clear mathematical implications from their models, while the assumptions are justified on the grounds of empiricism: crazily shaped demand curves and technique switching are not often observed, so we’ll leave them out. Yet the model as a whole has very little to do with empiricism, and economists rarely claim otherwise. What we end up with is a clearly unrealistic model, constructed not in the name of empirical relevance or logical consistency, but in the name of preserving key conclusions and mathematical tractability. How exactly can we say this type of modelling informs us about how the economy works? This selective methodology has all the marks of what Imre Lakatos called a degenerative research programme.

A consequence of this methodological ‘dance’ is that it can be difficult to draw conclusions about which DSGE models are potentially sound. One example of this came from the blogosphere, via Noah Smith. Though Noah has previously criticised DSGE models, he recently noted – approvingly – that there exists a DSGE model that is quite consistent with the behaviour of key economic variables during the financial crisis. This increased my respect for DSGE somewhat, but my immediate conclusion still wasn’t “great! That model is my new mainstay”. After all, so many DSGE models exist that it’s highly probable that some simplistic curve fitting would make one seem plausible. Instead, I was concerned with what’s going on under the bonnet of the model – is it representative of the actual behaviour of the economy?

Sadly, the answer is no. Said DSGE model includes many unrealistic mechanics: most of the key behaviour appears to be determined by exogenous ‘shocks’ to risk, investment, productivity and so forth, without any explanation. This includes the oft-mocked ‘Calvo fairy’, which imitates sticky prices by assigning a probability to firms randomly changing their prices at any given point. Presumably, this behaviour is justified on the grounds that all models are unrealistic in one way or another. But if we have constructed the model to avoid key problems – such as SMD and reswitching, or by log-linearising it – on the grounds that those problems are unrealistic, how can we justify using something as blatantly unrealistic as the Calvo fairy? Either we shed a harsh light on all internal mechanics, or on none.
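For concreteness, here is a minimal sketch of the Calvo mechanism itself (my own toy simulation, with a made-up value of the stickiness parameter θ): each period the ‘fairy’ visits a firm with probability 1 − θ, so price spells last a geometric number of periods with mean 1/(1 − θ).

```python
import random

random.seed(42)
theta = 0.75          # hypothetical probability of being stuck with the old price
n_spells = 200_000    # completed price spells to simulate

def spell_length(theta):
    """Periods until the Calvo fairy lets the firm reset its price."""
    length = 1
    while random.random() < theta:   # no visit from the fairy this period
        length += 1
    return length

durations = [spell_length(theta) for _ in range(n_spells)]
mean_duration = sum(durations) / n_spells   # close to 1/(1 - theta) = 4 periods
```

The average spell comes out at about four periods, exactly as the geometric arithmetic says – the point being that this is a statistical stand-in for price stickiness, not an explanation of it.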

Hence, even though the shoe superficially fits this DSGE model, I know that I’d be incredibly reluctant to use it if I were working at a Central Bank. This is one of the reasons why I think Steve Keen’s model – which Noah Smith has chastised – is superior: it may not exhibit behaviour that closely mirrors the path of the global economy from 2008-12, but it exhibits similar volatility, and the internal mechanics match up far more nicely than many (every?) neoclassical model. It seems to me that understanding key indicators and causal mechanisms is a far more modest, and credible, claim than being able to predict the quarter-by-quarter movement of GDP. Again, if I were ‘in charge’, I’d take the basic Keensian lesson that private debt is key to understanding crises over DSGE any day.

I am aware that DSGE and macro are only a small part of economics, and many economists agree that DSGE – at least in its current form – is yielding no fruit (although these same economists may still be hostile to outside criticism). Nevertheless, I wonder if this problem extends to other areas of economics, as economists can sometimes seem less concerned with explaining economic phenomena than with utilising their preferred approach. I believe internal mechanics are important, and if economists agree, they should expose every aspect of their theories to empirical verification, rather than merely those areas which will protect their core conclusions.

1. #1 by Paul Schächterle on July 6, 2013 - 2:36 pm

Well, DSGE models only claim to preserve “micro foundations”, but in fact they do not, because of the Sonnenschein-Mantel-Debreu findings. If you can’t aggregate individual demand functions without losing the properties individual demand functions are supposed to have, well then your collective demand function is NOT based on micro assumptions but on an ad-hoc assumption that is not grounded on anything.
Btw. the assumption that the individual demand curve is necessarily downward sloping is also false. It is based on an A1 logical fallacy. Neoclassical theory can only “prove” that for the so called substitution effect. So they are basically saying: Demand will rise if the price (the unit price) falls as long as income or total utility remains the same. Well guess what, if a unit price changes your real income changes!
Neoclassical theory is so bad, it really isn’t funny.

• #2 by BFWR on July 6, 2013 - 7:30 pm

Sonnenschein-Mantel-Debreu assumes that economists have discovered the most basic destabilizing factor for the economy. The only problem is….they haven’t. The economy/productive system/production itself is radically unstable. It requires a change, that is, an addition to the consumer paradigms of loan ONLY and of financing for production ONLY. These additions not only would make the economy workable in an ongoing fashion they would in fact transform the system itself. This only goes to show that the real problem with economics and finance is….their difficulties in recognizing the necessity of their own transformation….and transcendence of their current limited paradigms.

• #3 by BFWR on July 6, 2013 - 7:32 pm

P = In < Pr

The act of production itself equals a scarcity of total individual incomes to total prices.

• #4 by Unlearningecon on July 7, 2013 - 2:13 pm

then your collective demand function is NOT based on micro assumptions but on an ad-hoc assumption that is not grounded on anything.

Nicely put – basically the point I was trying to make. Neoclassical economists claim to follow a strict logical path from axioms to conclusions, but the reality is that there are axioms and conclusions which rarely change, then a bunch of ad hoc assumptions in between to sweep all the problems under the rug.

Neoclassical theory can only “prove” that for the so called substitution effect. So they are basically saying: Demand will rise if the price (the unit price) falls as long as income or total utility remains the same. Well guess what, if a unit price changes your real income changes!

Milton Friedman hilariously attempted to justify only using the Hicksian demand as it has “an unambiguously testable implication”. Obviously this is circular, based on the assumption that an “unambiguously testable implication” is somehow the appropriate test of a scientific theory. Even more importantly, and as Mark Blaug points out in that link, the Hicksian demand is not observable.
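To make the substitution/income distinction concrete, here is a toy Cobb-Douglas example (my own made-up numbers): when the price of x halves, the Hicksian (utility-constant) demand picks up only part of the total Marshallian response; the rest is the income effect Paul mentions.

```python
import math

# u(x, y) = sqrt(x*y), income m = 100, py = 1; px falls from 1 to 0.5.
m, py = 100.0, 1.0
px0, px1 = 1.0, 0.5

# Marshallian demand with equal expenditure shares: x = 0.5*m/px
x_old = 0.5 * m / px0                  # 50
x_new = 0.5 * m / px1                  # 100

# Hicksian demand holds utility at its old level u0 = sqrt(50*50) = 50;
# for this utility function, x_h = u0 * sqrt(py/px).
u0 = math.sqrt(x_old * (0.5 * m / py))
x_hicks = u0 * math.sqrt(py / px1)     # 50*sqrt(2), about 70.7

substitution_effect = x_hicks - x_old  # about 20.7 (utility held constant)
income_effect = x_new - x_hicks        # about 29.3 (what Hicks nets out)
```

And as Blaug’s point implies, x_hicks is the one number in this sketch that no dataset will ever contain: you cannot observe demand along a constant-utility path.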

2. #5 by TACJ on July 6, 2013 - 4:41 pm

This is one of the reasons why I think Steve Keen’s model – which Noah Smith has chastised – is superior: it may not exhibit behaviour that closely mirrors the path of the global economy from 2008-12, but it exhibits similar volatility, and the internal mechanics match up far more nicely than many (every?) neoclassical model.

I agree. One of the points I was tempted to make in response to Noah’s snarking was that Keen’s model is qualitative. If we accept that the economy is a chaotic system (i.e. deterministic but impossible to predict) then models ought to try to capture the quality of the economy, rather than attempt to model it in a linear* system and then use that system to make predictions**.

So as I understand it, his model gave Keen a qualitative appreciation for the fact that increasing levels of debt can lead to a crisis. He then used a bunch of other arguments and made his prediction, which not only anticipated the crisis, but anticipated how the crisis happened.

Prediction is a mug’s game. Physicists understand why they can’t predict the future; why do economists continue with this hubris?

* Is DSGE a linear model? Incidentally, can you or anyone else point me to a straightforward, mathematical presentation of a ‘standard’ DSGE model? Such things are surprisingly difficult to find.

** Again: my view is that instead of trying to ‘predict’ crises policymakers should instead build institutions that are robust against crises when they occur. An example of such a robust institution might be the creation of stronger Keynesian ‘automatic stabilisers’ through more generous automatic unemployment benefits.

• #6 by BFWR on July 6, 2013 - 7:00 pm

“Again: my view is that instead of trying to ‘predict’ crises policymakers should instead build institutions that are robust against crises when they occur. An example of such a robust institution might be the creation of stronger Keynesian ‘automatic stabilisers’ through more generous automatic unemployment benefits.”

Precisely. The reforms which came out of the Great Depression are incomplete. They are indeed the major reason why the present depression is less socially disruptive than was the first. Time, which is change, marches on. We must supplement INDIVIDUAL incomes which are inherently scarce by the act of production itself and increasingly made scarce by technological innovation. That sets BOTH individuals AND the system free. Doing this completes the reforms of the first Great Depression and actually transforms the system itself by incorporating the concept of monetary (financial) Grace into it. This of course is anathema to Banking which derives its profits from only loans. Banking is a legitimate business plan, but it does not have the right to dominate as it does now, and it also does not have the right to force a single paradigm of loan on us which ultimately makes the system onerous and unworkable.

• #7 by Unlearningecon on July 7, 2013 - 2:05 pm

Yes, exactly. There is a huge difference between predicting an event with oracle-style accuracy and simply saying ‘hey, that part of the system is unsustainable, watch out!’

*No, but DSGE models are often fudged into linear equations through the ‘log linearisation’ method I mention. This makes them easier to solve and usually makes sure that the solution has only one equilibrium.

• #8 by Jan on July 9, 2013 - 2:29 am

Great article, Unlearning! Well, here you have a piece on the most rabid proponent of “predictions”, Milton F., since you mention him. His record was not that splendid, to put it mildly! Have a nice summer!

• #9 by Jan on July 9, 2013 - 2:16 am

“In August 1972, a case study of the methodology of neoclassical economics by Imre Lakatos’s London School of Economics colleague Spiro Latsis published in The British Journal for the Philosophy of Science found Milton Friedman’s methodology to be ‘pseudo-scientific’ in terms of Lakatos’s evaluative philosophy of science, according to which the demarcation between scientific and pseudo-scientific theories consists of their at least predicting testable empirical novel facts or not. -in Situational Determinism in Economics S.J Latsis The British Journal for the Philosophy of Science, 23, p 207-45.”
Four years later, in 1976, Friedman was awarded the Nobel Prize for Economics.
Milton Friedman’s whole methodology and its results were criticised early on by many highly reputable economists. Not only Cambridge Keynesians such as Nicholas Kaldor and Joan Robinson, but also many Americans like James Tobin, Robert Solow and Paul Samuelson were highly critical of Friedman’s work on many levels.
Friedman argued that only the evidence, not the plausibility of the assumptions, should decide the validity of a theory. He used to say that “assumptions don’t matter”. In fact, he preferred theories with seemingly unrealistic assumptions – an attitude that Paul Krugman describes in Peddling Prosperity (1994) in this way: “I think it is fair to say that up until the late 1960s Friedman and his followers, while influential, were regarded by many of their colleagues as faintly disreputable.” Economics professor Edward Herman writes in Triumph of the Market (1995), p. 36, that “Friedman’s methodology in attempting to prove his models have set a new standard in opportunism, manipulation, and the abuse of scientific method.”

It seems that Friedman’s most pathbreaking innovation as an economist has been in the art of what is called “massaging the data” to arrive at preferred conclusions.
In July 1970 Nicholas Kaldor wrote an article in the Lloyd’s Bank Review in which he questioned Friedman’s empirical assertions. Friedman replied — in the same journal in October — that:
“Asking how Professor Kaldor would explain the existence of essentially the same relation between money and income… for the UK as for the US, Yugoslavia, Greece, Israel, India, Japan, Korea, Chile and Brazil?”
The problem with this? Friedman just made it up; no such relation existed. Kaldor went back and crunched the numbers and responded in his The Scourge of Monetarism:
“The simple answer to this is that Friedman’s assertions lack any factual foundation whatsoever. They have no basis in fact, and he seems to me to have invented them on the spur of the moment. I had the relevant figures extracted from the IMF statistics for 1958 and for each of the years 1968 to 1979, for every country mentioned by Friedman and a few others besides… Though there are some countries (among which the US is conspicuous) where in terms of the M3 the ratio has been fairly stable over the period of observation, this was not true of the majority of others.” (Kaldor, N. 1982. The Scourge of Monetarism. Oxford University Press, Oxford and New York.)
Kaldor summed this up well in a speech to the House of Lords regarding this little ‘blunder’ on the 16th April 1980. There he said:
“Professor Friedman, as on some other WELL-KNOWN OCCASIONS, invented the facts to clinch the argument, and relied on his reputation as an expert for being taken on trust without anyone bothering to check the figures.”
source: The UK Forum for Post Keynesian Economics
Keynes Seminar in Cambridge
Professor Richard Kahn on The Scourge of Monetarism (11 December 1987).
Professor Paul Diesing, an economist and philosopher of science who worked closely with Friedman at the University of Chicago, points out in his valuable article ‘Hypothesis Testing and Data Interpretation: The Case of Milton Friedman’ (Research in the History of Economic Thought and Methodology, vol. 3, pp. 61–69) that Friedman “tests” hypotheses by methods that never allow their refutation.
Diesing lists six “tactics” of adjustment employed by Friedman in connection with testing the permanent income (PI) hypothesis:
1. If raw or adjusted data are consistent with PI, he reports them as confirmation of PI.
2. If the fit with expectations is moderate, he exaggerates the fit.
3. If particular data points or groups differ from the predicted regression, he invents ad hoc explanations for the divergence.
4. If a whole set of data disagrees with predictions, he adjusts it until it does agree.
5. If no plausible adjustment suggests itself, he rejects the data as unreliable.
6. If data adjustment or rejection is not feasible, he expresses puzzlement: ‘I have not been able to construct any plausible explanation for the discrepancy’.
In a proposed Op Ed column written in 1990, Elton Rayack (Not So Free To Choose, New York: Praeger, 1987) pointed out the interesting fact that while Friedman’s models did well in retrospective fitting to historic data, where the Friedman testing methods could be employed, they were abysmal in forecasts, where “adjustments” could not be made. Rayack reviewed eleven forecasts of price, interest rate, and output changes made by Friedman during the 1980s, as reported in the press. Only one of the eleven was on the mark, a not-so-great batting average of .092;
“not enough to earn a plaque in baseball’s Hall of Fame, but evidently quite adequate to qualify [Friedman] as an economic guru.” The guru was, however, protected by the mainstream media; Rayack’s piece was rejected by both the New York Times and Wall Street Journal.
We may conclude, says Elton Rayack, that Friedman’s truly pathbreaking innovation as an economist has been in the art of what is called “massaging the data” to arrive at preferred conclusions. This innovation has been extended further by other members of the Chicago School.
Professor Edward S. Herman writes further about Milton Friedman in Triumph of the Market (Boston: South End Press, 1995, pp. 34–37):
“Friedman was considered an extremist and something of a nut in the early postwar years. As Friedman has not changed, and is now comfortably ensconced at the conservative Hoover Institution, his rise to eminence (including receipt of a Nobel prize in economics), like that of the Dartmouth Review’s Dinesh D’Souza, testifies to a major change in the general intellectual-political climate.
Friedman is an ideologue of the right, whose intellectual opportunism in pursuit of his political agenda has often been heavy-handed and sometimes even laughable. The numerous errors and rewritings of history in Friedman’s large collection of popular writings are spelled out in admirable detail in Elton Rayack’s Not So Free To Choose.[2] His “minimal government” ideology has never extended to attacking the military-industrial complex and imperialist policies; in parallel with Reaganism and the demands of the corporate community, his assault on government “pyramid building” was confined to civil functions of government. As with the other Chicago boys, totalitarianism in Chile did not upset Friedman-its triumphs in dismantling the welfare state and disempowering mass organizations, even if by the use of torture and murder, made it a positive achiever for him.
Friedman’s reputation as a professional economist rests on his monetarist ideas and historical studies, his analysis of inflation and the “natural rate of unemployment,” and his theory of the consumption – income relationship (the so-called “permanent-income” hypothesis). These are modest achievements at best. His monetarist forecasts have proven to be as wrong as forecasts can be, and the popularity of monetarism has ebbed in the wake of its failures…
The Chicago School intellectual tradition traces back to University of Chicago professors Frank Knight and Henry Simon, who flourished in the 1920s and 1930s. These men were conservative, but principled and iconoclastic. Simon’s 1934 pamphlet, “A Positive Program for Laissez Faire,” actually called for nationalization of monopolies that were based on incontrovertible economies of scale, on the grounds of the evil of private monopoly and the inefficiency and corruptibility of regulation of monopoly.
The post-World War II Chicago School, led by Milton Friedman and George Stigler, has been more political, right-wing, and intellectually opportunistic. On the monopoly issue, for example, in contrast with Simon’s 1932 position, the post-World War II School’s preoccupation was to dispute the importance and damaging effects of monopoly and to blame its existence on government policy. The postwar school is also linked to U.S. and IMF policies toward the Third World, in its pioneering service, through the “Chicago boys,” as advisers to the Pinochet regime of Chile from 1973 onward. This alliance points up the School’s notion of “freedom,” which has little or nothing to do with political or economic democracy, but is confined to a special kind of market freedom. As it accepts inequality of initial economic position, and the privilege and political influence built into corrupt states like Pinochet’s (or Reagan’s), its economic freedom is narrow and class-biased. The Chicago boys have always claimed that economic freedom is a necessary condition of political freedom, but their tolerance of political non-freedom and state terror in the interest of “economic freedom” makes their own priorities all too clear. The Chicago School’s attitude toward labor was displayed in the Chicago boys’ complacence over Pinochet’s use of state terror to crush the Chilean labor movement. The School’s general tolerance of monopoly on the producers’ side has never been paralleled by softness toward labor organization and “labor monopoly.” Henry Simon himself developed a pathological fear of labor power in his later years, as evidenced in a famous diatribe “Reflections on Syndicalism,” which may have contributed to his committing suicide in 1944. Subsequently, the labor specialists of the postwar Chicago School, most notably Albert Rees and H. Gregg Lewis, dedicated lifetimes to showing that wages were determined by marginal productivity and that labor unions’ pursuit of higher wages was futile. 
(Rees, however, did acknowledge the non-economic benefits of labor organization in his class lectures.) Chicago School analyses stressed the wage-employment tradeoff and the employment costs of wage increases based on bargaining power (as opposed to those negotiated individually and reflecting marginal productivity). They linked collective bargaining to inflation, viewing “excessive” wage increases as the pernicious engine of inflationary spirals. Milton Friedman’s concept of a “natural rate of unemployment” was a valuable tool in the arsenal of corporate and political warfare against trade unions-a mystical concept, unprovable, but putting the ultimate onus of price level increases on the exercise of labor bargaining power….”

3. #11 by metatone on July 6, 2013 - 5:36 pm

For me I don’t actually mind (in principle) a hydraulic model that happens to work reasonably well. (I view DSGE + financial frictions as a similar kind of metaphorical model to hydraulics, hence the phrase.) You can do a lot of useful engineering with such a model.

Problems I do have:

1) Economists think they are doing science, not engineering, so it does become problematic.

2) All the work to remove multiple equilibria disturbs me greatly, because a single equilibrium is pure dogma. There’s plenty of empirical evidence suggesting that economies have multiple equilibrium points. Further, every decent theory of other complex systems (e.g. ecosystems, complex signal-response systems, etc.) suggests multiple equilibria – why would economics be any different?

What’s worrying is the answer isn’t actually “we do it for mathematical tractability” – rather, it’s much more that once you admit multiple equilibria, economics is no longer about “optimal allocation of scarce resources” but a set of political defences around particular equilibria that each have a long list of winners and losers. And economists don’t like to think about actually justifying their choices of winners and losers, they’d rather pretend that “a rising tide lifts all boats” – and single equilibrium is central to that conceit.

• #12 by Unlearningecon on July 7, 2013 - 2:17 pm

Yeah, multiple equilibria mean that economists cannot proudly parade their dead-cert prescriptions and implications the way they currently do, which would take a step toward making them seem more like other social scientists and therefore dethroning them.

Even Keynes’ General Theory showed that even assuming perfect competition etc, an equilibrium at more or less any level of employment was a logical possibility – no wonder it was greeted with so much hostility and revisionism.

• #13 by Min on July 9, 2013 - 2:47 pm

Thanks for the suggestion about why economists seem to avoid talking about multiple equilibria. The idea of a depression or lost decade as a suboptimal equilibrium seems quite plausible, but how many economists say so?

One possible way to get out of a suboptimal equilibrium is to change the payoffs for different courses of action. That can be done through gov’t intervention. But there seems to be a prejudice against gov’t intervention on the part of many economists. Is that one reason for not talking about multiple equilibria?

• #14 by Unlearningecon on July 10, 2013 - 6:38 pm

But there seems to be a prejudice against gov’t intervention on the part of many economists. Is that one reason for not talking about multiple equilibria?

I would guess so. Though it is worth noting that all equilibria under the SMD theorem are guaranteed to be Pareto Efficient, so economists could still reject government intervention on those grounds.

4. #15 by Rob Rawlings on July 6, 2013 - 5:53 pm

On the subject of models generally, what is your opinion of the kind of model that Steve Keen uses, such as this one?

http://keenomics.s3.amazonaws.com/debtdeflation_media/papers/KeenModelEndogenousCreditandCreditCrunch.pdf

There are various kinds of agents (banks, workers, firms etc) whose behavior is specified. The model then generates results that are very consistent with post-Keynesian theory. It doesn’t take much analysis, however, to work out that these results are implicit in the assumptions made about the behavior of the various agents. Change those assumptions and the results change. I’m fairly sure that one could build a very similar model to the one that Keen uses to demonstrate Austrian Business Cycle theory.

These models seem very useful to 1) illustrate a specific theory and 2) prove that it is internally consistent. But in and of themselves they say nothing about the real world, and the validity of the models can easily be undermined by questioning the empirical correctness of the assumptions made, in the same way as you do with DSGE.
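Keen’s models of this kind are systems of differential equations whose behaviour follows from the assumed dynamics. A minimal sketch of the family they descend from – a Goodwin-style growth cycle with entirely hypothetical parameters, not Keen’s actual equations – illustrates the point that the results are implicit in the behavioural assumptions:

```python
# Minimal Goodwin-style growth cycle: the wage share and the employment rate
# chase each other endlessly. Hypothetical parameters; Keen's actual model
# adds banks, debt and nonlinear behavioural functions on top of this skeleton.
alpha, beta = 0.02, 0.01     # productivity growth, labour-force growth
sigma = 3.0                  # capital-output ratio
gamma, rho = 0.5, 0.6        # linear Phillips-curve coefficients

u, v = 0.85, 0.90            # wage share, employment rate
dt = 0.01
for _ in range(10_000):      # crude Euler integration over t = 0..100
    du = u * (-gamma + rho * v - alpha)        # real-wage (wage-share) dynamics
    dv = v * ((1 - u) / sigma - alpha - beta)  # accumulation (employment) dynamics
    u, v = u + du * dt, v + dv * dt

# The trajectory cycles around an interior equilibrium that follows
# directly from the assumed behavioural rules:
u_star = 1 - sigma * (alpha + beta)
v_star = (gamma + alpha) / rho
print(round(u_star, 3), round(v_star, 3))  # 0.91 0.867
```

Change the assumed Phillips curve or the accumulation rule and the cycle changes accordingly – which is exactly the sensitivity to assumptions being discussed.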

• #16 by BFWR on July 7, 2013 - 12:25 am

I admire Keen’s iconoclasm. However, he, like virtually every other economist, hasn’t been iconoclastic enough. This is a problem with economics in general, and with modeling, as your post accurately assesses. It is also an all too human problem: orthodoxy is latched onto, and makes individuals stubbornly unwilling to look for solutions, or to look more closely for them, as is the case with economists. Keen and many other economists have recently criticized neo-classical economists for ignoring banks, money and accounting. They are correct, but accounting has a subset – cost accounting – which they haven’t looked at, and where the empirical data of every business that isn’t in bankruptcy proceedings exposes the continual scarcity of individual incomes relative to the prices created in any given period of time. Their assumption that the velocity of money’s re-circulation provides enough money for markets to clear is false. They have not examined this dogma, and their acceptance of its truthfulness is another reason for their not looking at the data I am speaking of. Velocity theory is both anti-historical and false, because it assumes you can abstract away the context to which all money actually in the economy is subject: cost in general, and cost accounting in particular.

Economists need to look at the economy from an exteriorized viewpoint, so that they are less personally involved and can see it as a whole. The analogy of looking at flight from outer space is a good one: seen from orbit, flight is recognized to be controlled falling. The economy is controlled falling also. However, if you find a Bernoulli’s principle for economics and apply it continually through time, the economy can fly as long as you keep its fuel supply adequate.

P: In < Pr – the act of production itself creates a scarcity of total individual incomes (In) relative to total prices (Pr). Fix the right hand side of that inequality in the way the problem obviously defines and you have a Bernoulli’s principle for the economic and monetary systems.

• #17 by Roman P. on July 7, 2013 - 8:27 am

BFWR,

I don’t think that I fully understand your point about the cost accounting. Could you elaborate?

• #18 by BFWR on July 7, 2013 - 11:51 am

Every dollar is placed in and subject to the costing system, which is the subset of double entry bookkeeping known as cost accounting. This process applies also to any money re-circulated back through the economy as well. The assessment of costs is necessary to know whether profit is being made and how much. It is an utterly embedded aspect of commerce itself.

The strictest convention of cost accounting is that all costs must go into price. Labor costs – that is, the costs that become individual incomes – will always be only a fraction of total costs, and in modern technologically advanced economies that fraction is continually becoming smaller because of the increasing cost of capital equipment. As a simplified example:

If a business gets a loan of \$100k for start-up, and during a year’s time it spends \$40k on capital equipment, \$10k on a lease, utilities, interest on its loan and doughnuts for the secretary pool, and \$50k on its laborers, management and owners, then the ratio of money distributed to individuals versus other organizations is 50:50. Yet all costs must go into prices. Consequently, charted over that year, the rate of flow of prices will be at least \$100k while the individual incomes created and distributed will be \$50k. This will show two diverging, upwardly sloping lines, and the chart will have the same diverging character whatever the particular statistics. Re-circulated money, no matter what amount, has no ability to change the diverging character of these lines, because as it goes back through a business the physics of the costing system will once again create and re-create…more prices than individual incomes.
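The flows in that example (hypothetical figures from the comment; the inference drawn from them is BFWR’s claim, not standard accounting doctrine) can be tabulated directly:

```python
# The year's cost flows from the worked example (hypothetical figures).
costs = {
    "capital_equipment": 40_000,
    "overheads": 10_000,            # lease, utilities, loan interest, doughnuts
    "wages_salaries_dividends": 50_000,  # the only flows paid to individuals
}
total_prices = sum(costs.values())   # "all costs must go into price"
individual_incomes = costs["wages_salaries_dividends"]

print(total_prices, individual_incomes)   # 100000 50000
print(individual_incomes / total_prices)  # 0.5 – the claimed income/price gap
```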

Hence the productive process itself, even at the moment of a product or service’s INITIAL AND SO LEAST EXPENSIVE PRICE…is price inflationary, without even considering that more prices than incomes are produced at each subsequent step in a product’s journey to its ultimate cost/price at retail sale to an INDIVIDUAL. Because the productive process is price inflationary, it is individual income and business profit deflationary: the value of purchasing power and profit decreases.

Prices relative to INDIVIDUAL incomes is the most basic and most relevant metric of the productive process. The ratio of labor costs to total costs created by the productive process will always show this scarcity of total individual incomes relative to total prices. Hence markets cannot clear unless the rate of flow of individual incomes is supplemented enough to equate with the rate of flow of prices simultaneously created, and in a way that does not add additional cost by being injected into commerce first. The only way this can be accomplished, or approximated, is a GIFT of money distributed directly to individuals, which they then inject into commerce so that consumer prices may actually have a chance of being fully liquidated.

• #19 by Roman P. on July 7, 2013 - 5:52 pm

BFWR,

uh… Not to put you down, but you are probably rediscovering Kalecki’s theories here, even if he put it in different terms. I also don’t think that modern economies are set in a way that clearing of the commodity markets is desired: people don’t buy everything that is produced, and the waste (something like all sausages not bought and eaten or all badly designed clothes that sit in the stores) is a buffer stock that helps the economy to adjust.

• #20 by BFWR on July 7, 2013 - 7:21 pm

Waste is also a problem, especially macro-economically, but you’re not addressing my point which is the INDIVIDUAL monetary scarcity.

I’m talking about C. H. Douglas’s ideas, which pre-date Kalecki, whom I also like.

• #21 by Roman P. on July 8, 2013 - 7:19 am

BFWR,

I think that C. H. Douglas was confused in his macro-accounting: of course the total of workers’ incomes is not enough to buy the goods and services they produce – otherwise there’d be no surplus for capitalists, and capitalists wouldn’t be able to buy anything. If, for example, a factory produces \$1000 worth of goods and pays workers \$1000 in salaries, where is it going to get the money to pay for materials, electricity, maintenance, etc.? The point is, labour is not the only sector in the economy that buys and spends.

I don’t doubt that Douglas’s heart was in the right place: most heterodox theories show that the incomes of various sectors are determined by the power relations between them – but nothing stops society from deciding to redistribute incomes and real resources in favour of some sectors. Giving people some guaranteed income is a relatively popular proposal and very much in the spirit of Douglas’s ideas, I believe. But, to be honest, I don’t know much about C. H. Douglas’s ideas, and most of what I do know I gleaned from secondary sources. I am kind of surprised by how widespread the movement of his followers was in the middle of the last century.

• #22 by BFWR on July 8, 2013 - 9:55 am

Forget about the capitalist: he does a pretty good job of making a profit today, even with a scarcity of individual incomes, because he begins with adequate financing (as in my example of \$100k to start up the enterprise) and is allowed to expense his costs through depreciation etc. The individual is not only shortchanged in his pay; he is taxed, and is also not credited with the capital appreciation he has helped to build. This is not worker ownership or control of enterprise as in socialist or communist ideology…it is merely correcting a flaw in the cost accounting system…so that individuals can liquidate prices, the capitalist can (more easily) make a profit…and the entire system and the individuals in it…can be free in fact…instead of only in theory.

As for where the money is going to come from: it (the dividend and the discount) will come from an agency mandated to distribute them, interest free, ONLY TO INDIVIDUALS, based solely on consumption and production statistics, ex nihilo – just like a central bank does today. And private banks will lend just as they do today; that will not be a problem so long as they follow good banking practice. Inflation? Nope, nope, nope. In the first place there is an inadequacy of individual incomes, and when you provide enough income to the individual, he will have far fewer problems paying his bills and servicing any loans he might want to take out – and eventually there will probably be much less need or desire for him to borrow at all. The government will need to inject less money into the economy as a result. So: no monetary inflation, as I just pointed out, and no price inflation, because of Social Credit’s discount mechanism.

• #23 by thehobbesian on July 7, 2013 - 2:19 am

“These models seem very useful to 1) illustrate a specific theory and 2) prove that it is internally consistent. But in and of themselves they say nothing about the real world, and the validity of the models can easily be undermined by questioning the empirical correctness of the assumptions made, in the same way as you do with DSGE.”

That is very true. For example, one could look at Keen’s model and just as easily say that, rather than being the cause of market failure, the increase in indebtedness which occurs in the lead-up to a debt-deflation scenario is merely a symptom of a different real-world problem – one which is causing individuals and households to take on more debt, and which is not being effectively accounted for in an economy where price signals serve as the major means of coordination. I’m not saying I believe that to be the case, but one could easily postulate that business cycles start outside the finance sector and creep their way in; and, given banks’ role in the economy, they are obviously the breaking point.

Of course, if that were really the case, the Austrian answer would just be to keep currencies in deflation and credit so tight that the economy never moves fast enough for such an event to occur. I think such an answer has some serious ethical implications, though, since we would be purposely constraining the economy – which is probably one reason why policy makers don’t want to listen to Austrians (despite the Austrians’ contention that it is just because those statists want power).

• #24 by BFWR on July 7, 2013 - 2:39 am

If Austrians would get past their market worship they might have a closer affinity with Social Crediters, because both want to factor in and consider the individual. I have battled Austrians for years over on Mish Shedlock’s blog. They are as ideologically fossilized as one can get. All well meaning, of course, but they just cannot countenance any non-worship of the market. It’s a god to them.

• #25 by Unlearningecon on July 7, 2013 - 1:50 pm

I agree – but in my opinion, Keen’s assumptions are empirically correct; more so than in DSGE, anyway. That’s really half of the point I’m making in the post.

• #26 by thehobbesian on July 7, 2013 - 10:05 pm

Yes, that’s true: in terms of identifying how the process happens, Keen’s model is empirically sounder than most other models. But of course that’s the same with most debt-deflation models; I personally see debt deflation as the most logical and empirically valid model of crises like the Great Depression. However, while debt deflation describes the process brilliantly, it alone doesn’t completely explain the why. Keen seems to think that it is debt itself which is driving the whole thing, whereas I am not sure that targeting debt alone is really the right answer: the indebtedness is in all likelihood occurring because of something other than indebtedness for its own sake. Keen sounds as if he is saying it is indebtedness just for the sake of indebtedness, and I’m not sure I buy that.

• #27 by Tom Hickey on July 7, 2013 - 10:34 pm

Keen’s point is that the ratio of private debt to GDP has been extraordinarily high and remains very high even after recent deleveraging. The question is why this apparent anomaly happened, and it seems to be explained by the stagnation of worker income over the past several decades, with workers making up the shortfall in income with debt to maintain their lifestyle. The financial sector obliged with laxer and laxer credit standards, to the point that the financial cycle culminated in the Ponzi stage described by Minsky, sparking a Fisher debt-deflation that developed economies are still struggling with. The imposition of fiscal austerity is resulting in prolonged demand leakage, in that the demand gap from increased private saving and deleveraging is not being offset by net exports. As a result some economists are calling for a debt jubilee (Keen, Hudson), while others favor using larger deficits to offset the demand leakage to saving and create space for deleveraging (Mosler, Wray, Mitchell).

• #28 by BFWR on July 7, 2013 - 10:41 pm

“I am not sure if targeting debt alone is really the right answer, and that the cause of the indebtedness is in all likelihood occurring because of something other than indebtedness for the sake of indebtedness.”

Exactly. And I assert that what is driving it is the inherent, underlying scarcity of total individual incomes in ratio to total prices. This scarcity erodes purchasing power and profits, and the only present way to try to address it is to inject more money into the economy in the form of loans. Hence the build-up of debt. But this doesn’t solve the problem, it only palliates it. The problem is there is too much lending…and too little giving. The banks of course fight this idea, because their income is mostly derived from lending. Excuse me, but fuck them. They are entitled to exist, but they are not allowed to dominate with debt or enforce an unworkable system on us.

“Systems were made for men, and not men for systems, and the interest of man which is self-development, is above all systems, whether theological, political or economic.”

• #29 by Min on July 9, 2013 - 2:57 pm

Tom Hickey: “Keen’s point is that the ratio of private debt to GDP has been extraordinarily high and remains very high even with recent deleveraging. The question is why this apparent anomaly happened and it seems to be explained by the stagnation of worker income over the past several decades with workers making up the shortfall in income with debt to maintain lifestyle.”

The same kind of thing happened in the lead up to the Great Depression.

• #30 by thehobbesian on July 9, 2013 - 9:31 pm

“The problem is there is too much lending…..and too little giving”

Yes, and that is also because we have decided to have a monetary system where all new money has to come from a fractional reserve system – hence lending and debt. And because you can’t test economic policies in a lab – and because of monied interests – people are too scared to try out anything new.

• #31 by BFWR on July 9, 2013 - 9:51 pm

If the system is unworkable…we need to change it. Actually, fractional reserve is workable so long as you supplement people’s incomes enough to make up for the deficit caused by the physics of production itself. Without such a supplement, the NECESSITY of borrowing to keep the system “up in the air” results in a continuous build-up of debt until, even if lending were done at 0% interest, the debt becomes unserviceable.

5. #32 by Unlearningecon on July 7, 2013 - 3:26 pm

BFWR,

I think your idea, as I understand it, is interesting (every individual must have at least a certain income to allow spending to clear markets) and am happy to discuss it.

• #33 by BFWR on July 7, 2013 - 7:50 pm

Unlearning,

Good, and thanks. I would mention one thing that is problematic for me, however. If the problem I am pointing at actually is the most basic problem of economics, then its effects remain continually relevant to nearly every subsequent problem; hence it’s hard not to relate everything back to it. I fail to see how we can get any more basic a problem than the ability or inability of people with incomes to consume products, that is, to liquidate prices. It also goes directly to the issue of inherent stability or instability. I DO understand that mentioning one thing over and over begins to make one APPEAR to be a crank, but failure to confront a most basic problem is a worse sin than actually being a crank. So again, the above is my problem. I hope you will consider the truthfulness of it. Again thanks, and I look forward to a focused discussion of the inherent scarcity of individual incomes relative to prices.

• #34 by thehobbesian on July 7, 2013 - 8:24 pm

“every individual must have at least a certain income to allow spending to clear markets”

One could say that to survive in the monetary economy you need to fit the inequality x/y ≥ 1, with x being money inputs and y being money outputs. It can never drop below 1, because it is physically impossible to spend money which you don’t physically have. Now it’s true that if one takes on debt and spends it, one is technically “in the red”, but that is only in the sense of social/legal obligations; the fact remains that x/y ≥ 1 still holds, with the credit being lent acting as the money input in a purely secular sense. And since the monetary economy is purely secular – it doesn’t matter whether x comes from sustainable income or from credit – any capitalist economy which relies on money as its medium of exchange has a fundamental flaw, in that monetary velocity and price signals may not be reflecting the indebtedness which is occurring.

Economic calculation in the capitalist commonwealth is fatally flawed when most costs are reflected in monetary terms. In a true barter economy, or competitive equilibrium, economic actors don’t need to fit within the x/y ≥ 1 inequality, or it at least is not reflected in the same way (some costs are socially borne, one can use labor as the payment in a bartered transaction, there are unquantifiable sunk costs, etc.). So by having a monetized economy, we are in fact placing a fatally flawed construct onto human economic activity.

It is also true that while one must have x equal to or greater than y to survive as a monetized market entity, one does not need to have y to have x. In other words, money can be taken out of the exchange and hoarded by someone who never spends it, but it can’t be spent by someone who doesn’t have it. Unless of course you are a government printing money – but even then, the costs of inflation are still externalized into the economy. So money cannot enter the stream of commerce without incurring some cost or input for the spender, but it can be taken out of the stream of commerce without incurring costs to the user. It is a one way street. Which is why we can get things like liquidity traps.

So essentially one can say that the use of money is actually quite screwed up, and most of us are unwilling to even consider that as the root of the problem because we are culturally embedded with the notion that money is somehow natural and fine and that it should work perfectly. Not that I think the benefits of having money as a medium of exchange don’t outweigh the costs, its just that use of money does bring with it some inherent problems that shouldn’t be overlooked.

• #35 by BFWR on July 7, 2013 - 9:01 pm

All of which is true, but for practical purposes the important fact is the inherent scarcity of total individual incomes in ratio to total prices. Solve that with a sufficient gift to let the individual survive month on month and then if a bunch of big assed billionaires either hoard or decide to spend their money that will make the monthly discount on prices either more or less…but the stable “flight” of the economy will be established as doable and we can go forth from there and resolve any other problems of economics.

The genius of Douglas’s thinking was at least twofold. First, he actually discovered and then confronted the actual problem; second, he took the engineer’s perspective, which is not to consider whose ox is going to be gored, but simply to solve the problem. Economics needs such minds and ways of thinking.

• #36 by Tom Hickey on July 7, 2013 - 10:19 pm

“Unless of course you are a government printing money, but even then, the costs of inflation are still externalized into the economy. So money cannot enter into the stream of commerce without incurring some cost or input for the spender, but it can be taken out of the stream of commerce without incurred costs to the user. It is a one way street.”

Not necessarily. As long as the government offsets the saving desire of non-government and any resulting demand increase can be met by expanding capacity. See the work of Wynne Godley on stock-flow consistent macro modeling and sectoral balances — household, firm, government and external sectors sum to zero by identity.

See also the Kalecki profit equation: gross profits = investment + government deficit + net exports − workers’ saving + capitalist consumption.

Demand leakage to saving in a sector, e.g., consolidated domestic private saving, can be offset by the sum of the other sectors. Right now, the policy of most government is to use the export channel, which all countries cannot do simultaneously. Moreover a stagnating global economy is closing that door.

That leaves fiscal policy to close the demand gap, since monetary policy is insufficient, as recent central bank special ops go to show – unless you believe the market monetarists, but they are in the dark about how the existing monetary and financial systems actually operate.
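The accounting identities behind Godley’s sectoral balances and Kalecki’s profit equation can be checked numerically. The figures below are purely illustrative, not real data:

```python
# Godley's sectoral balances: the private, government and external balances
# sum to zero by accounting identity. Illustrative numbers (% of GDP).
private_balance = 4.0      # private sector net saving (S - I)
government_balance = -6.0  # government balance (T - G): here, a deficit
external_balance = 2.0     # external sector balance, from the domestic view
assert private_balance + government_balance + external_balance == 0.0

# Kalecki's profit equation, in the same accounting spirit (again illustrative):
investment, govt_deficit, net_exports = 15.0, 6.0, -2.0
workers_saving, capitalist_consumption = 3.0, 4.0
gross_profits = (investment + govt_deficit + net_exports
                 - workers_saving + capitalist_consumption)
print(gross_profits)  # 20.0
```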

• #37 by thehobbesian on July 8, 2013 - 12:43 am

“Not necessarily. As long as the government offsets the saving desire of non-government and any resulting demand increase can be met by expanding capacity.”

Yes, that is true: there are ways to introduce new money without incurring too many costs or the deleterious effects of inflation. I didn’t want to make my post any longer than it had to be, so I just gave a generalized account of new money introduction. Plus the internet is full of so many monetarist radicals who will lambast you if you say anything that appears to be pro-inflation, and I really don’t want to have another argument with those morons about their contrived views on how money works. So I stayed conservative on the part about new money creation. But you are right: governments can introduce new money in ways that help, and overall I think that government regulation and monetary policy can do a lot to avoid some of the pitfalls of the flaws inherent in the use of money.

• #38 by Min on July 9, 2013 - 3:02 pm

Inflation is not necessarily costly. In fact, inflation is our friend, as history shows. It may not be the friend of the creditor, but it is a friend to economies as a whole – or else it would not be ubiquitous under such various political, economic, and monetary regimes.

• #39 by BFWR on July 9, 2013 - 5:33 pm

What if inflation is the nature of the productive process? And what if technological innovation wed to profit making systems inevitably means fewer jobs and less income? How could we ever resolve such a problem?

• #40 by thehobbesian on July 9, 2013 - 9:26 pm

@ min
You are right, I am actually a big fan of inflation, at least in the sense that I see it as superior to deflation and that I think that dynamic price stability is probably impossible. Even though it may bring certain costs it is clearly the best way forward, until we find something better.

• #41 by Min on July 10, 2013 - 12:39 am

What if innovation inevitably led to fewer and fewer jobs? (And less income.)

That is not a new problem, is it? It is a basic challenge of the industrial revolution. Benjamin Franklin, among others of his time, regarded the industrial revolution as an opportunity, envisioning a future of general prosperity and leisure. Well, that is not a path that we seem to have taken, is it?

Annie Besant thought that the gov’t would have to employ an army of the unemployed. We have not taken that path, either. Maybe she will be proven right, after all.

We seemed to have solved that problem, more or less, after WWII. But then we made a detour. Frankly, I am optimistic about the future. Humans used to work only a few hours per day to survive. If our machines and computers make such an existence possible again, I think that we will figure out how to have such a life again.

• #42 by Unlearningecon on July 10, 2013 - 6:44 pm

Frances Coppola actually has an interesting post on this. It could be that we end up with an abundance of stuff, but nobody has any income to buy it!

Well, that is not a path that we seem to have taken, is it?

Because it’s not profitable! When more productive technology emerges, capitalists have two choices: produce more or hire people for half the time. The former is surely more profitable.

Humans used to work only a few hours per day to survive.

Do you mean under ‘primitive communism’?

• #43 by BFWR on July 10, 2013 - 7:01 pm

“Do you mean under ‘primitive communism’?”

We no longer have to face the onerous choice of scarcity. Technology is the forgotten/never recognized factor in production that makes that reality (within reason) no longer relevant. All we have to do is “pay the wages of the machines”….to real human beings. The problem (and solution) is looking the bully of a dominating Finance squarely in the eye and insisting that their monopoly on credit creation end, and the purposes for which it is granted be expanded.

• #44 by Min on July 13, 2013 - 10:26 pm

Thanks for the Coppola ref. 🙂

As for short work weeks, tribal societies in modern times seem to have them. Also I remember an Indian guru recommending working for 4 hours per day. I don’t think that communism of any ilk has to do with that.

6. #45 by Tom Hickey on July 7, 2013 - 4:09 pm

A theoretical model works through topological logic; that is, the theoretical system (elements in relationship) represents what is being modeled as a possible world, constructed logically from a network of information that describes possible states of affairs. Models that claim to be representational assert that the possible world they represent actually exists in the real world – that is to say, the information system of the model fits information gained from observation. A model can never prove its own truth; truth can only be determined empirically, through observation. These are basic principles of scientific method and fundamental to hypothesis testing. Representational models are tested by comparing the hypotheses they entail to information gained from empirical data.

Popper’s falsifiability criterion is applied to hypotheses embedded in a theory rather than to the theory itself, although it is possible that a theory might hang on a crucial hypothesis. Moreover, all theories are problematic to some degree and anomalies eventually arise, since humans are not omniscient and knowledge is never complete. Models and variations within models compete with each other and at turning points the one that meets the accepted criteria eventually prevails, although this is generally not something that happens immediately for a variety of reasons affecting the debate, not the least of which are biases and vested interests.

It is possible that many models might be constructed to explain the same phenomena. The competing models are then evaluated on accepted criteria such as consistency, comprehensiveness, correspondence and parsimony. While it is possible to view planetary motion through the eyes of Ptolemy, viewing it through the eyes of Copernicus and Newton excels in consistency, comprehensiveness, correspondence and parsimony. Even though counterintuitive – the sun doesn’t really rise, even though it appears to – the heliocentric view is preferable in just about every respect, not only tractability. It’s a superior explanation on accepted criteria and allows for more accurate prediction.

However, its acceptance had to overcome two huge cultural and institutional hurdles: the Aristotelian assumption that planetary motion is perfect and therefore must be circular, and the theological dogma that humans are at the center of the universe. Those were diehard views that took considerable time to overcome. Are we seeing something similar wrt neoclassical models based on law-like behavior (supply and demand), rationality (revealed preference for max u), and a tendency to a single equilibrium?

Neoclassical modeling persists largely through its momentum rather than explanatory and predictive success as a scientific theory (as claimed). A large reason for this is the unreasonable criterion that neoclassical econometricians set for acceptance of a theory in terms of a highly tractable mathematical model. This is unreasonable for the same reason neoclassical models often fail.

In order to make the model tractable, the methodological convenience of modeling a simple system is adopted, even though social systems are non-ergodic complex adaptive systems characterized by reflexivity and emergence. So what they demand cannot be achieved by them or anyone else. As soon as cet. par. is introduced, for example, a model is no longer dynamic and complex. This is possibly justifiable for Econ 101 or for teaching the history of economics, but as economic methodology it is doomed to fail, and it has failed. It turns out that the extreme preference for consistency and parsimony vitiates correspondence and comprehensiveness, which are much more important criteria in science, where theories are about explanation and prediction rather than formal precision and elegance.

• #46 by Unlearningecon on July 9, 2013 - 12:09 pm

Thanks, great comment.

The epicycles analogy just strikes me as so appropriate in this case. We have DSGE, which is highly complex and based on an increasing number of deus-ex-machina type unrealistic mechanics, all (seemingly) in the name of preserving mathematical beauty. We then have Keen’s model, which is far less in depth and more of a shell, but a lot simpler and more plausible, and which gets key mechanics correct.

• #47 by Min on July 9, 2013 - 3:23 pm

BTW, let’s not be too hard on epicycles. They were the result of successive refinement. (Similarly, any sound can be decomposed into sine waves by Fourier analysis.) Also, even after Kepler discovered the laws of planetary motion, epicycles produced more accurate ephemerides. Newton’s theory of gravity later allowed the computation of perturbations, but that was no mean feat.
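The sense in which epicycles “work” is exactly Fourier’s: a sum of circular motions can approximate any periodic path, and the component frequencies can be recovered from the signal. A small stdlib-only check (the signal and frequencies here are arbitrary choices for illustration):

```python
import math

# Any periodic signal decomposes into sines (Fourier analysis); a system of
# nested epicycles is exactly such a sum, which is why it could fit any orbit.
N = 256
t = [n / N for n in range(N)]
signal = [1.5 * math.sin(2 * math.pi * 3 * x) + 0.5 * math.sin(2 * math.pi * 7 * x)
          for x in t]

def amplitude(freq):
    # Correlate the signal with a sine of the given integer frequency;
    # orthogonality of the sines isolates that component's amplitude.
    s = sum(y * math.sin(2 * math.pi * freq * x) for x, y in zip(t, signal))
    return 2 * s / N

# The two 'epicycles' we built in are recovered at frequencies 3 and 7.
print(round(amplitude(3), 3), round(amplitude(7), 3))  # 1.5 0.5
```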

• #48 by Min on July 9, 2013 - 3:12 pm

Mathematical tractability is perhaps the Achilles’ heel of economic models. Computers are now capable of overcoming the problem of intractability through simulation of complex adaptive systems, which economies are. It is no longer necessary for a model to be solvable by hand. 🙂

7. #49 by James Reade on July 8, 2013 - 10:41 am

Honoured to be linked up, but I don’t doubt that the subsequent comment (“although these same economists may still be hostile to outside criticism”) is aimed at me. If not, great.

If so, let me point out what I’ve been saying all along: Substantiate your criticisms!

I’m not at all hostile to well grounded, well supported criticisms that display awareness of what economists are actually doing, not what a few bloggers or the biggies from 20-30 years ago were doing.

I make them all the time myself in my teaching and research. In fact, if economists were so cozy with all our models, why would research still be ongoing and fruitful?

But to do them in my research, I have to substantiate what I’m saying, and you just don’t do this. That’s why I keep asking you to provide links to papers where the things you assert are taking place. I’m sure they are – I just want to see some evidence.

• #50 by Unlearningecon on July 8, 2013 - 2:07 pm

Honoured to be linked up, but I don’t doubt that the subsequent comment (“although these same economists may still be hostile to outside criticism”) is aimed at me. If not, great.

It wasn’t really aimed at you, more so at the econjobrumours guys, who seem to flit between disowning DSGE and undergraduate economics and treating anyone who criticises them with religious-esque zealotry.

I’m not at all hostile to well grounded, well supported criticisms that display awareness of what economists are actually doing, not what a few bloggers or the biggies from 20-30 years ago were doing.

Well, the [variant of the] Smets-Wouters model Noah Smith is discussing is the modern mainstay for the ECB and other central banks, and that particular version has clearly been a part of recent macroeconomic research, since the authors try to corroborate it with the financial crisis.

• #51 by James Reade on July 9, 2013 - 9:20 am

Haha, please don’t take econjobmarketrumours as any metric of where the discipline is! It takes at least a few years of teaching post-PhD to get any sense of the sheer size of the field and its obvious weaknesses – as a PhD student, particularly in macro theory, you get this inflated sense of where the discipline is. Plus I’m sure there are a few trolls having fun on there…

On Smets-Wouters, I’m sure central banks still do use it or some variant – which is bad news for us all, not least (and I’m not a fan of the DSGE literature) because it’s a good ten years old now. But in this case I want to make the distinction between policymakers and politicians on the one hand and economists on the other. Where the DSGE literature is now, for all its faults, has moved on since Smets-Wouters. A quick search led me to a number of papers exploring alternative pricing options to Calvo, for example, e.g. http://goo.gl/m5oAA. I’m sure the alternatives may not be overly convincing just yet – but progress is being made.

8. #52 by Luis Enrique on July 8, 2013 - 10:47 am

afaik ….

log linearisation is just an approximation method. Models that are solved numerically make all manner of approximations; it’s the nature of things. There are DSGE models around that don’t log linearize.
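For readers unfamiliar with the technique, here is a minimal sketch of what log linearisation does, using a toy resource constraint with steady-state shares I have made up for illustration: each variable is written as its steady-state value times the exponential of a log deviation, and only first-order terms are kept.

```python
import math

# Log-linearising the resource constraint c + i = y around a steady state
# C + I = Y: write c = C * exp(c_hat), etc., expand exp(x) ~ 1 + x, and
# keep first-order terms to get (C/Y)*c_hat + (I/Y)*i_hat ~ y_hat.

C, I = 0.8, 0.2              # hypothetical steady-state levels
Y = C + I

c_hat, i_hat = 0.01, -0.02   # small log deviations from steady state
c, i = C * math.exp(c_hat), I * math.exp(i_hat)
y = c + i                    # exact nonlinear outcome

y_hat_exact = math.log(y / Y)
y_hat_approx = (C / Y) * c_hat + (I / Y) * i_hat

# For small deviations the linear approximation is very close.
print(abs(y_hat_exact - y_hat_approx) < 1e-4)  # True
```

The approximation error grows with the size of the deviations, which is one reason log-linearised solutions can behave badly far from the steady state.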

There is a big difference between a theoretical possibility that does not appear to have much empirical relevance (reswitching?) and hence makes sense to ignore, and something that has empirical (and theoretical) importance – price stickiness – but which requires an “unrealistic” mechanism (Calvo pricing) so that it can be introduced into the model in a tractable way, and one which is reasonably realistic in terms of its behaviour. I don’t think the inconsistency you identify is as powerful as you think (although some inconsistency is inevitable; you are always going to be in a position of accepting some short-cuts whilst rejecting others).

Finally, you are using out-of-sample forecasting as a stick to beat DSGE with. Fine. Why don’t you hold Keen’s model to the same standard? How well do you think his model would have predicted outcomes if it had been calibrated in 2000 and asked to forecast the next 10 years?

• #53 by Luis Enrique on July 8, 2013 - 3:04 pm

what would Keen’s paper look like as a mainstream model?

perhaps something like this, which is a Minskyesque idea:

http://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp1524.pdf

A mainstream economist would want to build up from microfoundations and all that jazz, so we can see which “deviations from perfect markets” are sitting underneath the result, as you’d have it. Regardless of what you think of that approach, a mainstream economist could very well write a paper designed to generate endogenous business cycles from dynamics within a financial sector which periodically bankrupts itself. Much like the paper I link to above, a mainstream approach would be to describe some data that gives the model something to explain, and then come up with a model capable of explaining it, to some extent. It would probably not claim the model is suitable for use by a central bank for forecasting purposes.

There are lots of mainstream models of a similar sort (I mean of the designed-to-make-a-point sort).

You are normally dismissive of mainstream papers that focus on one mechanism at a time to make a point.

Why are you now saying you find that approach, in Keen’s model, so attractive?

• #54 by Unlearningecon on July 9, 2013 - 12:06 pm

I understand that LL is just one, though often used, way of doing things.

But the problem seems to be that once you don’t use it, you end up with “anything goes” type solutions to DSGE. This is in contrast to the numerical methods used in, say, engineering, which are also just approximations but deliver consistent, reliable results.

There is a big difference between a theoretical possibility that does not appear to have much empirical relevance (reswitching?) and hence it makes sense to ignore, and something that has empirical (and theoretical) importance – price stickiness – but which requires an “unrealistic” mechanism (Calvo pricing) so that it can be introduced into the model in a tractable way, and one which is reasonable realistic in terms of its behaviour.

But does Calvo pricing give a good imitation of real-world sticky prices? It seems to me that it doesn’t: prices are not sticky because they only change at seemingly random intervals. Instead, stickiness has more to do with business plans, and so will be somewhat predictable. I’d like to see some research on this.

My problem is really just that there seem to be so many ‘fudges’ one way or another, all used to preserve the modelling approach, that the method is becoming more and more convoluted and unrealistic, and increasingly unhelpful.

Finally you are using out-of-sample forecasting as a stick to beat DSGE with. Fine. Why don’t you hold Keen’s model to the same standard? How well you you think he model would have predicted outcomes if it had been calibrated in 2000 and asked to forecast the next 10 years?

Well, the model didn’t exist then. But you are correct: once Keen’s model is fully built, I will hold it to a similar standard.

• #55 by Luis Enrique on July 9, 2013 - 12:45 pm

“But the problem seems to be that once you don’t use it, you end up with “anything goes” type solutions to DSGE.”

no, I don’t think that’s right. Plenty of non-linear models have unique equilibria. I think the sources of multiple equilibria lie elsewhere. This entry in the New Palgrave Dictionary might help:

http://www.dictionaryofeconomics.com/article?id=pde2008_M000374

(it might be gated, I can’t tell)

no, I’m not claiming Calvo pricing gives us a realistic reason why price stickiness happens. Obviously not. The question is whether the unrealistic mechanism introduces price stickiness into the model in a reasonably realistic way in terms of how often prices are adjusted, how prices respond to shocks etc. – realistic in terms of what it does, not because it realistically models the underlying reasons behind price stickiness. I’m not sure if Calvo pricing even does that, by the way, but if it did, that could justify using it.
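For concreteness, here is a toy simulation of the Calvo mechanism as I understand it (the parameter value is my own choice for illustration): each period a firm gets to reset its price with probability 1 - theta, so the average spell between price changes is 1 / (1 - theta). With theta = 0.75 and quarterly periods, prices change about once a year.

```python
import random

random.seed(0)
theta = 0.75          # probability a firm does NOT get to reset its price
periods = 100_000     # quarters simulated for a single firm

durations = []        # completed spells between price resets
spell = 1
for _ in range(periods):
    if random.random() < 1 - theta:   # the Calvo "signal" arrives
        durations.append(spell)
        spell = 1
    else:
        spell += 1

avg = sum(durations) / len(durations)
# Spell lengths are geometric, so the mean is 1 / (1 - theta) = 4 quarters.
print(round(avg, 1))  # ~4.0
```

This makes the objection above concrete: the model matches the *frequency* of price changes by construction, but the timing of each change is pure chance rather than anything to do with business plans.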

Anyway, I thought you liked using crude rule of thumb measures motivated by empirical regularities, as opposed to trying to micro-found everything? Now you seem to want a micro-founded model of price stickiness.

“But you are correct: once Keen’s model is fully built, I will hold it to a similar standard.”

I suspect it’s never going to be “fully built”, but if it is, here’s a prediction for you: its out-of-sample forecasting ability will suck, if you want to see anything like the same standards of forecasting ability demanded from DSGE (i.e. wages, investment rates etc. – all the variables in the model matching empirical moments).

• #56 by Unlearningecon on July 10, 2013 - 6:39 pm

OK, I’ll give you the last word this time, except this:

The question is whether the unrealistic mechanism introduces price stickiness into the model in a reasonably realistic way in terms of how often prices are adjusted, how prices respond to shocks etc.

Which was exactly what I was questioning.

• #57 by Luis Enrique on July 9, 2013 - 12:47 pm

that should be “realistic in terms of what it does …” above.

9. #58 by Luis Enrique on July 9, 2013 - 10:37 am

This is worth reading on the topic of log linearization and multiple equilibria:

http://faculty.wcas.northwestern.edu/~lchrist/research/Zero_Bound/manuscript.pdf

and there is what appears to be an informative thread on econjobrumours (an oxymoron, I know) which is where I got the above link.

http://www.econjobrumors.com/topic/to-log-linearize-or-not-to-log-linearize/page/2#rest_551982

the commentator e38e provides some useful links to papers about comparative statics around unstable equilibria, which remind me of this great post by Nick Rowe, although I had to read the comments before I understood what he was getting at:

10. #59 by Luis Enrique on July 9, 2013 - 5:15 pm

I’m not sure I understand the implications of SMD for macro.

the problem doesn’t arise when using a representative agent, so when does it arise?

I suppose in the case of a macro model with some heterogeneous agents and a small set of goods.

presumably such models require aggregate (or market) demand curves for these goods, otherwise again I can’t see how SMD would present problems. My guess is that the demand curves of the heterogeneous agents would be specified in such a way as to yield a simple market demand curve. So where does SMD fit in? Accepting UE’s characterisation of the SMD result, somebody is going to step in and say “whoa there, SMD tells us you cannot assume in general that those market demand curves are going to look like that”. But when does macro ever operate on that level of generality? It is always going to involve looking at special cases.

I mean, where are you ever going to go with this, if you are trying to model the economy at an aggregate level, without descending into nihilism? It’s all very well saying SMD says anything goes, but when the Tories slashed public expenditure you did not think the outcome might be that the economy would boom and unemployment disappear, so why do you want macro models that say anything goes?

Put it another way: if you are happy with a model that doesn’t microfound anything and just says “we will assume investment is an increasing nonlinear function of profits” etc. (i.e. Keen), how could you possibly object to a mainstream macro model making SMD-avoiding assumptions like “we will assume market demands for consumption goods are decreasing nonlinear functions of prices”?

an alternative route is agent-based modelling, which is all about micro disequilibrium dynamics etc. …

take this paper

http://www2.econ.iastate.edu/tesfatsi/AgentBasedDynamics.AntoineMandel2012.pdf

it starts with the words “This paper starts from a reading of the Sonnenschein-Mantel-Debreu …” and finds “we first demonstrate that the micro-behavior of boundedly rational agents can lead to the emergence of equilibrium at the macro-level”

so I don’t think it’s necessarily the case that SMD tells us the assumption of well behaved aggregate equilibria is unreasonable, although I am far outside the bounds of my competence here.

• #60 by srini on July 10, 2013 - 5:50 pm

Luis,

Here is the issue that SMD raises for DSGE in a nutshell:

1. Assuming a representative agent assumes that aggregation is well-behaved when SMD says it will generally not be. So it is a rare special case which has no relevance for reality.

2. You cannot use the emergent-phenomena argument to support DSGE. In fact, DSGE is anti-emergent-phenomena. The Lucas critique, which impelled the DSGE move, effectively states that macro relationships are not stable because they are not invariant to policy changes (I am making a gross simplification because I don’t want to go into legalese about what the Critique means). Moreover, the representative agent behaves no differently from a rational individual agent, so the aggregate behavior mirrors the individual behavior. In emergent phenomena, you would have legitimate and regular macro relationships that have no parallels at the micro level, such as Boyle’s law. The emergent-phenomena argument is used by those who support studying macro phenomena separately from microfoundations as a legitimate endeavor.

3. If macro is an emergent phenomenon, then we have no reason to believe that optimizing at the macro level is at play.

4. I don’t think the paper you cite supports your contention. The paper says that you can get regular macro behavior from boundedly rational agents. SMD says that you cannot guarantee regular macro relationships from perfectly rational agents.

• #61 by The Hat of the Three-Toed Man-Baby on July 22, 2013 - 2:35 pm

What an embarrassing comment. SMD does not say anything “generally” in the sense a GE theorist would use the word. It says that, on an open set of parameters, any continuous function that satisfies Walras’ law (p*z(p)=0 for any p>=0) and certain technical boundary conditions is an excess demand function. But the theorem does not yield any information about the size of this open set. Since we know the conditions under which weird things happen (strong negative income effects), we can test for them, and most tests reject them.
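To unpack the Walras’ law condition stated above, here is a toy two-good exchange economy of my own construction (the endowments and preference shares are arbitrary): with Cobb-Douglas consumers, aggregate excess demand z(p) satisfies p*z(p) = 0 at every price vector, since each consumer spends exactly their income.

```python
# Two Cobb-Douglas consumers: endowments of the two goods, and the share of
# income each spends on good 1 (all numbers are arbitrary illustrations).
endowments = [(2.0, 1.0), (1.0, 3.0)]
shares = [0.3, 0.7]

def excess_demand(p1, p2):
    """Aggregate excess demand for each good at prices (p1, p2)."""
    z1 = z2 = 0.0
    for (e1, e2), a in zip(endowments, shares):
        income = p1 * e1 + p2 * e2
        z1 += a * income / p1 - e1          # demand minus endowment, good 1
        z2 += (1 - a) * income / p2 - e2    # demand minus endowment, good 2
    return z1, z2

# Walras' law: p . z(p) = 0 holds identically, at any strictly positive prices.
for p1, p2 in [(1.0, 1.0), (2.0, 0.5), (0.3, 1.7)]:
    z1, z2 = excess_demand(p1, p2)
    print(abs(p1 * z1 + p2 * z2) < 1e-9)    # True each time
```

The SMD result runs the other way: any continuous function satisfying this budget identity (plus homogeneity and the boundary conditions) can arise as the aggregate excess demand of *some* economy.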

I suggest that it is you, not Luis, who is hopelessly out of your depth. You don’t even know what the Lucas critique says.

• #62 by Unlearningecon on July 22, 2013 - 6:03 pm

You can test for them empirically, but empirical tests have little relevance when you are constructing a GE model using the deductive method, i.e. purely logically, from clearly false axioms.

And as srini himself says, he was simplifying in his exposition of the Lucas Critique.

11. #63 by Luis Enrique on July 11, 2013 - 9:20 am

srini,

1. I am not sure where you get that from; I have not read anything about the frequencies with which market demand curves take unhelpful forms. Can you provide a link? When a theorist writes something like “in general it cannot be guaranteed that market demand curves will slope down”, that does NOT mean they usually won’t – it is not a statement about how often things occur; it means that in the general case (i.e. with only minimal restrictions on functional forms etc.) the slope is undetermined. I have seen nothing to suggest that when a macroeconomist assumes a set of downward-sloping market demand curves, that represents “a rare special case which has no relevance for reality.”

I don’t think emergent behaviour is quite the right term to use here. As I understand it, if you have the micro primitives you can still solve for the market demand curve; it’s just that it won’t necessarily look like the individual demand curves. I thought “emergent” meant you cannot solve for it from the micro.

You might want to argue that macro behaviour is emergent and can only be modelled as such. Fine. That rules out Keen’s models and most heterodox models except computational agent based simulations.

If it turns out that when you do model emergent behaviour from boundedly rational agents, what emerges is quite similar to the outcomes predicted by our simpler macro models, I think that does support the use of simpler models. For example, suppose the equilibrium predicted by a simple model turns out to be like a strange attractor in an agent-based simulation; that would make perfect sense. After all, nobody who uses equilibrium models thinks they are anything other than crude (but hopefully informative) approximations.

12. #64 by srini on July 11, 2013 - 12:58 pm

Luis,

Either you are out of your depth or ….

You don’t understand the meaning of emergent behavior.

As to your first paragraph: forget SMD. Assume we can wave SMD away. The aggregation problems in assuming a representative agent, a representative firm and an aggregate capital stock are well known. I am not talking about the Cambridge capital controversy; I am talking about Franklin Fisher’s well-known results about capital aggregation. Here is the bottom line: the conditions under which capital can be aggregated are simply unrealistic. The same holds for consumers. Even if at a point in time we can back out aggregate utility functions and production functions that mimic the market demand curves, the functions will change when the underlying distributions change. In other words, the aggregate functions are not “deep parameters”; they are not invariant. Sorry, the DSGE endeavor is simply fraudulent.

As to agent-based modeling, I am skeptical. This is like trying to model thermodynamics from the behavior of particles. Eventually physics got there, only to verify that the “macro” thermodynamic laws were consistent with micro behavior. But physics had a correct model of particle behavior. Economics is far, far from where physics was 150 years ago. Most psychologists would laugh at economists’ models of human behavior, not to speak of neuroscientists. Even the micro model is deeply primitive. I am not even going to get into firm behavior.

• #65 by Luis Enrique on July 11, 2013 - 1:38 pm

srini,

yes I am quite possibly out of my depth.

however, I do know about aggregation problems for production functions – another potential route for attacking DSGE – but we are not discussing that.

I thought SMD was about the nature of excess demand curves in general equilibrium, and that this presented a problem for macro, which habitually assumes nice downward-sloping market demand curves. This is based on reading S. Abu Turab Rizvi (“The Sonnenschein-Mantel-Debreu Results After Thirty Years”) among others. However, as I understand it, whilst SMD says those market demand curves could in theory look like anything, it does not say downward-sloping market demand curves are “a rare special case which has no relevance for reality.”

I thought you were saying above that the indeterminacy of the slope of market demand curves in GE was an example of emergent behaviour and questioned that. Perhaps I misinterpreted you. Feel free to explain how I have misunderstood emergent behaviour.

My last para was not a “reading” of the paper I cited; it was an argument with an “if” and a “then”. Perhaps that paper supports my “if”, perhaps it does not.

ok, so you don’t like computable agent-based modelling, you don’t like the mainstream – what’s left? Old Cowles Foundation-style macro, estimating relationships between aggregates? Or maybe economics done by psychologists.

13. #66 by Luis Enrique on July 11, 2013 - 1:43 pm

or what, by the way?