Posts Tagged IS/LM

The Crisis & Economics, Part 7: A Case of Myopia?

This is the final part in my series on how the financial crisis is relevant for economics (here are parts 1, 2, 3, 4, 5 & 6). Each part explores an argument economists have made against the charge that the crisis exposed fundamental failings of their discipline, with the quality of the arguments increasing as the series goes on. This post discusses probably the strongest claim that economists can make about the crisis: they do understand it, and any previous failures were simply due to inattention or misapplication, rather than fundamental problems with the theory itself.

Argument #7: “Economists had the tools in place, but we overspecialised and systemic problems caught us off guard.”

Raghuram Rajan was probably the first to take this sort of line, arguing that overspecialisation prevented economists from using the tools they had to foresee and deal with the crisis. But while Rajan’s piece also made a number of other criticisms of economics, over time the discipline seems to have reasserted this argument more strongly: not too long ago, Paul Krugman argued that although “few economists saw the crisis coming…basic textbook macroeconomics has performed very well”. Similarly, Tim Harford claimed at an INET conference last year that the tools necessary to understand the crisis already existed in mainstream economics, and the problem was simply one of knowing when and how to use them. He compared financial crises to engineering disasters, which were understandable using current knowledge but happened nonetheless, due to negligence or oversight on the part of the engineers.

So how true is this claim? Certainly, a number of economic models exist for understanding things like panics, liquidity problems and moral hazard. The best known of these are the Diamond-Dybvig (DD) model of bank runs – which shows what happens when banks have liquid liabilities (such as demand deposits) which must be available at any time, but illiquid assets (such as loans) which are not fully convertible to cash on demand – and the Akerlof-Romer (AR) model of financial ‘looting’, which shows that deposit guarantees may create moral hazard as investors gamble other people’s money. If you combine tools like these, which help us understand the financial sector, with tools like IS/LM, which tell us how to escape a downturn once it happens, in theory you have a pretty solid set of tools for dealing with the recent crisis.
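To see the DD mechanism at work, here is a toy numerical sketch of its logic (all parameter values are my own illustrative assumptions, not figures from the original paper):

```python
# Toy illustration of the Diamond-Dybvig bank-run mechanism.
# Parameters are illustrative assumptions, not values from the paper.

def bank_outcome(withdraw_frac, deposits=100.0,
                 liquidation_value=0.8,   # recovered per unit of asset sold early
                 maturity_value=1.5):     # value per unit if held to maturity
    """Return (early withdrawers paid in full?, payoff per unit for patient depositors)."""
    early_claims = withdraw_frac * deposits
    assets_sold = early_claims / liquidation_value  # assets liquidated to meet claims
    if assets_sold >= deposits:          # the bank runs out of assets: it fails
        return False, 0.0
    remaining = deposits - assets_sold
    patient = (1 - withdraw_frac) * deposits
    return True, remaining * maturity_value / patient

# Few early withdrawals: everyone is paid and patient depositors do well.
print(bank_outcome(0.1))
# Panic: if most depositors expect a run, the bank fails and patient depositors lose.
print(bank_outcome(0.9))
```

The same balance sheet supports two equilibria: if few withdraw early, waiting pays; if enough depositors expect a run, withdrawing first becomes the rational response.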

The first objection I have to these models is that many of their insights could be considered trivial, or at least common sense. The DD model came to the conclusion that deposit insurance might be a helpful way to prevent bank runs, which is hardly a revelation considering it came 50 years after FDR and the general public figured out the same thing. The AR model came to the conclusion that deposit insurance and limited liability might create perverse incentives as banks gamble ‘other people’s money’, which again must have been obvious to the policymakers who put Glass-Steagall and other financial regulations in place. Perhaps this point is a little harsh, and I don’t want to overstate it: on the whole, these papers are asking important questions, and in the case of AR they answer them well. Nevertheless, there’s no point in economic theory if it can’t tell us things we didn’t already know. Even the idea that central banks should provide emergency liquidity to banks in trouble is quite obvious, and it predates modern economic theory by a good while.

However, this is not the most important point. The issue I have with these models is that in many of them everything interesting happens outside the model. In Krugman’s favoured IS/LM, a ‘crisis’ is represented by a simple shift in the IS curve, which in English means that a decline in production is caused by…a decline in production. Where this decline came from is presumably a matter left outside the model. Even the most sophisticated macroeconomic models often take a similar tack, merely describing what happens when the economy suffers from a shock, without exploring possible causes for the shock. Likewise, the DD model suggests bank runs happen because everyone panics, but what causes these panics is not explored: it is assumed depositors’ expectations are exogenous, whether fixed or following a stochastic (random) pattern. Yet studies such as Mishkin (1991) find that bank runs generally follow periods of stress elsewhere in the economy, a fact which DD simply cannot capture.
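To make the point concrete, here is a minimal linear IS/LM in code (the functional forms and parameter values are my own illustrative assumptions): a ‘crisis’ enters only as a fall in the IS intercept, and the model says nothing about where that fall comes from.

```python
# A minimal linear IS/LM, to make the "crisis = IS shift" point concrete.
# Functional forms and parameters are illustrative assumptions.
# IS:  Y = a - b*r   (goods-market equilibrium)
# LM:  r = c + d*Y   (money-market equilibrium)

def islm_equilibrium(a, b, c, d):
    """Solve Y = a - b*(c + d*Y) for output Y and the interest rate r."""
    Y = (a - b * c) / (1 + b * d)
    r = c + d * Y
    return Y, r

Y0, r0 = islm_equilibrium(a=1000, b=20, c=2, d=0.01)
# A 'crisis' in this framework is nothing but a fall in the IS intercept a;
# where that fall comes from is outside the model.
Y1, r1 = islm_equilibrium(a=900, b=20, c=2, d=0.01)
print(Y0, Y1)  # output falls, but the model is silent on why a fell
```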

Economic models are narrowly focused like this because they are generally designed to answer straightforward questions about causality: does the minimum wage cause unemployment; does expansionary fiscal policy cause growth; does a mismatch between illiquid assets and liquid liabilities cause bank runs. But the crisis was an endogenously generated process in which different aspects of the economy – the housing market, the financial sector, government policy – combined to create something bigger than the sum of its parts, and in which it is not possible to isolate a single cause. Consider: the collapse of Lehman Brothers may have triggered the worst of the crisis, but was it really to blame? The economy was already in a fragile place due to systemic trends that can’t necessarily be traced to a single law, institution or actor. Just as with the assassination of Franz Ferdinand and the outbreak of World War 1, we have to look beyond immediate triggers and focus on general conditions if we truly want to understand what happened.

To sum up, the economists above want to argue that they are only culpable insofar as they overspecialised and failed to focus on the right areas in this particular instance. However, the reason for this was not just personal myopia; it’s that their chosen methodology means they lack the tools to do so. A model of one aspect of the economy which takes the effect of other areas as exogenous will fail to detect potential positive feedback loops and emergent properties. A model which takes the crisis itself as an exogenous ‘shock’ is even worse, and in many ways is hardly a model of the crisis at all, since it offers no understanding of why crises might happen in the first place. Are there alternatives? I have previously written about how post-Keynesian and Marxist models offer more comprehensive understandings of the financial crisis and the antecedent decades; I shan’t repeat myself here. Other promising areas include network theory, evolutionary economics and Agent-Based Modelling. All of these share an emphasis on the system as a whole rather than on isolated mechanisms.


I see the crisis in economics as a shock (!!) which hits macroeconomics hard and reverberates throughout the discipline. Regardless of the pleas of some, such events can be seen coming, and they cannot be handwaved away as part of an overall upward trend. And even if individual economists are not in control of policy, key economists have substantial influence, not to mention the theories and ideas of economics as a whole. Recent developments in macroeconomics still leave a lot to be desired, while previously existing tools suffer from similar problems: a lack of holism; a wooden insistence on microfoundations; and an attempt to understand everything in terms of simplistic causal links, often relative to a frictionless baseline. Finally, although many areas of economics are not directly indicted by the crisis, many of them share key problems with macroeconomics, and as such the crisis should prompt at least a degree of introspection throughout the discipline.




Misinterpretations in Mainstream Economics

It is my opinion that major areas of neoclassical economics rest on misinterpretations of original texts. Though new ideas are regularly recognised as important and incorporated into the mainstream framework, this framework is fairly rigid: models must be microfounded, agents must be optimising, and – particularly in the case of undergraduate economics – the model must be representable as two intersecting curves. The result is that the concepts that certain thinkers were trying to elucidate get taken out of context, contorted, and misunderstood. There are many instances of this, but I will illustrate the problem with three major examples: John Maynard Keynes, John Von Neumann and William Phillips.

Keynes, in two lines

It is a common trope to suggest that John Hicks’ IS/LM interpretation of Keynes’ General Theory was wrong. It is also true, and this was acknowledged by Hicks himself over 40 years after his original article.

IS/LM, or something like it, was being developed independently of Keynes during the 1920s and 30s by Dennis Robertson, Hicks and others, who sought to understand interest rates and investment in terms of neoclassical equilibrium. Hence, Hicks tried to annex Keynes into this framework (they both, confusingly, called neoclassicals ‘classicals’). Keynes’ theory was reduced to two intersecting lines that looked a lot like demand-supply. The two schedules were derived from the equilibrium points of the demand and supply for money (LM), and the equilibrium points of the demand and supply for goods and services (IS). In order to reach ‘full employment’ equilibrium, the central bank could increase the money supply, or the government could expand fiscal policy. Unfortunately, such a glib interpretation of Keynes is flawed for a number of reasons:

First, Keynes did not believe that the central bank had control over the money supply:

…an investment decision (Prof. Ohlin’s investment ex-ante) may sometimes involve a temporary demand for money before it is carried out, quite distinct from the demand for active balances which will arise as a result of the investment activity whilst it is going on. This demand may arise in the following way.

Planned investment—i.e. investment ex-ante—may have to secure its “financial provision” before the investment takes place…There has, therefore, to be a technique to bridge this gap between the time when the decision to invest is taken and the time when the correlative investment and saving actually occur. This service may be provided either by the new issue market or by the banks;—which it is, makes no difference.

Since Hicks’ model relies on a ‘loanable funds’ theory of money, where the interest rate equates savings with investment and the central bank controls the money supply, it clearly doesn’t apply in Keynes’ world. An attempt to apply endogenous money to IS/LM will result in absurdities: an increase in loan-financed investment, part of the IS curve, will create an expansion in M, part of the LM curve. Likewise, M will adjust downwards as economic activity winds down. So the two curves cannot move independently, which violates a key assumption of this type of analysis.

Second, Keynes did not believe the interest rate had simple, linear effects on investment:

I see no reason to be in the slightest degree doubtful about the initiating causes of the slump….The leading characteristic was an extraordinary willingness to borrow money for the purposes of new real investment at very high rates of interest.


But over and above this it is an essential characteristic of the boom that investments which will in fact yield, say, 2 per cent. in conditions of full employment are made in the expectation of a yield of, say, 6 per cent., and are valued accordingly. When the disillusion comes, this expectation is replaced by a contrary “error of pessimism”, with the result that the investments, which would in fact yield 2 per cent. in conditions of full employment, are expected to yield less than nothing…

…A boom is a situation in which over-optimism triumphs over a rate of interest which, in a cooler light, would be seen to be excessive.

So, again, the simple, mechanistic adjustments in IS/LM are inaccurate. The magnitude of the interest rate will change not just the level, but also the type, of investment taking place. Higher rates increase speculation and destabilise the economy, whereas low rates encourage real capital formation. This key link between bubbles, the financial sector and the real economy was lost in IS/LM, and also in neoclassical economics as a whole.

Third – and this is something I have spoken about before – Hicks glossed over Keynes’ use of the concept of irreducible uncertainty, which was key to his theory. The result was a contradiction, something Hicks noted in the aforementioned ‘explanation’ of IS/LM. The demand for money was, for Keynes, a direct result of uncertainty, and in a time period sufficient to produce uncertainty (such as Keynes’ suggested 1 year), expectations would be constantly shifting. Since the demand for money, savings and investment all depended on expectations, the curves would move interdependently, undermining the analysis. On the other hand, in a time period short enough to hold expectations ‘constant’ and hence avoid this (Hicks suggested a week), there would be no uncertainty, no liquidity preference and therefore no LM curve.

Hicks’ attempt to shoehorn Keynes’ book into his pre-constructed framework led to oversimplifications and a contradiction, and obscured one of Keynes’ key insights: that permanently low long term interest rates are required to achieve full employment. The result is that Keynes has been reduced to ‘stimulus,’ whether fiscal or monetary, in downturns, and the reasons for the success of his policies post-WW2 are forgotten.

Phillips and his curve

Another key aspect – along with IS/LM – of the post-WW2 ‘Keynesian’ synthesis was the ‘Phillips Curve,’ an inverse relationship between inflation and unemployment observed by Phillips in 1958. Neoclassical economists reduced this to the suggestion that there was a simple trade-off between inflation and unemployment, and that policymakers could choose where to select on the Phillips Curve, depending on circumstances.

Predictably, this is not really what Phillips had in mind. What he observed was not ‘inflation and unemployment,’ but inflation and money wages. Furthermore, it was not a static trade off, but a dynamic process that occurred over the course of the business cycle. During the slump, society would observe high unemployment and low inflation; in the boom, low unemployment would accompany high inflation. This is why, if you look at the diagrams in his original paper, Phillips has numbered his points and joined them all together – he is interested in the time path of the economy, not just a simple mechanistic relationship. The basic correlation between wages and unemployment was just a starting point.

Contrary to what those who misinterpreted him believed, Phillips was not unaware of the influence of expectations and the trajectory of the economy on the variables he was discussing; in fact, it was an important pillar of his analysis:

There is also a clear tendency for the rate of change of money wage rates at any given level of unemployment to be above the average for that level of unemployment when unemployment is decreasing during the upswing of a trade cycle and to be below the average for that level of unemployment when unemployment is increasing during the downswing of a trade cycle…

…the rate of change of money wage rates can be explained by the level of unemployment and the rate of change of unemployment.
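Phillips’ ‘level plus rate of change’ relation can be sketched numerically. The functional form and numbers below are my own illustrative assumptions, not Phillips’ fitted equation; the point is only that such a relation traces a loop over the cycle rather than a single curve.

```python
import math

def wage_change(u, du):
    """Wage inflation (%) from the level of unemployment u (%) and its rate of change du.
    Illustrative functional form: a level effect plus a rate-of-change effect."""
    return 10.0 / u - 1.0 - 2.0 * du

# Unemployment cycling between 3% and 7% over an eight-period 'trade cycle':
path = []
for t in range(8):
    angle = 2 * math.pi * t / 8
    u = 5 + 2 * math.sin(angle)
    du = 2 * (2 * math.pi / 8) * math.cos(angle)   # rate of change of u
    path.append((round(u, 2), round(wage_change(u, du), 2)))

# The same unemployment level appears twice per cycle with different wage
# inflation - higher when unemployment is falling (the upswing) - so the
# points form a loop, as in Phillips' numbered-and-joined diagrams.
for u, w in path:
    print(u, w)
```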

Finally, whatever Phillips’ theoretical conclusions, it is clear he did not intend even a correctly interpreted version of his work to be the foundation of macroeconomics:

These conclusions are of course tentative. There is need for much more detailed research into the relations between unemployment, wage rates, prices and productivity.

Had neoclassical economists interpreted Phillips correctly, they would have seen that he thought dynamics and expectations were important (he was, after all, an engineer), and we wouldn’t have been driven back to the stone age with the supposed ‘revolution’ of the 1970s.

An irrational approach to Von Neumann

In microeconomics, the approach to ‘uncertainty’ (a misnomer) emphasises the trade-off between potential risks and their respective payoffs. Typically, you will see a set-up that looks something like the following (if you aren’t a mathematician, don’t be put off – it’s just arithmetic):

Candidate Probability Home Abroad
A 0.6 300k 200k
B 0.4 100k 200k

The question is whether a company will invest at home or abroad. There is an election coming up, and one candidate (B) is an evil socialist who will raise taxes, while the other one (A) is a capitalist hero who will lower them. Hence, the payoffs for the investment will differ drastically based on which candidate wins. Abroad, however, there is no election, and the payoff is certain in either case; the outcome of the domestic election is irrelevant.

The neoclassical ‘expected utility’ approach is to multiply the relative payoffs by the respective probability of them happening, to get the ‘expected’ or ‘average’ payoff of each action. So you get:

For investing abroad: £200k, regardless

For investing at home: (0.6 x £300k) + (0.4 x £100k) = £220k

Note: I am assuming the utility is simply equal to the payoff for simplicity. Changing the function can change the decision rule but the same problem – that what is rational for repeated decisions can seem irrational for one – will still apply.
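The same calculation in code, using the payoffs from the table above:

```python
# Expected-payoff calculation for the election example (utility = payoff).
p_A, p_B = 0.6, 0.4                      # probabilities of each candidate winning
home = {"A": 300_000, "B": 100_000}      # domestic payoffs under each outcome
abroad = {"A": 200_000, "B": 200_000}    # abroad is certain either way

ev_home = p_A * home["A"] + p_B * home["B"]        # expected payoff at home
ev_abroad = p_A * abroad["A"] + p_B * abroad["B"]  # 200k regardless of the election
print(ev_home, ev_abroad)
```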

So investing at home is preferred. Supposedly, this is the ‘rational’ way of calculating such payoffs. But a quick glance will reveal this approach to be questionable at best. Would a company make a one off investment with such uncertain returns? How would they secure funding? Surely they’d put off the investment until the election, or go with the abroad option, which is far more reliable?

So what caused neoclassical economists to rely on this incorrect definition of ‘rationality’? A misinterpretation, of course! One need look no further than Von Neumann’s original writings to see that he thought his analysis would apply only to repeated experiments:

Probability has often been visualized as a subjective concept more or less in the nature of estimation. Since we propose to use it in constructing an individual, numerical estimation of utility, the above view of probability would not serve our purpose. The simplest procedure is, therefore, to insist upon the alternative, perfectly well founded interpretation of probability as frequency in long runs.

Such an approach makes sense – if the payoffs have time to average out, then an agent will choose one which is, on average, the best. But in the short term it is not a rational strategy: agents will look for certainty; minimise losses; discount probabilities that are too low, no matter how high the potential payoff. This is indeed the behaviour people demonstrate in experiments, the results of which neoclassical economists regard as ‘paradoxes.’ A correct understanding of probability reveals that they are anything but.
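A quick simulation (purely illustrative, using the payoffs from the example above) makes the frequency point concrete: the average over many repetitions converges on the ‘expected’ £220k, but no averaging is available to a one-shot decision maker.

```python
import random
random.seed(0)

def one_shot():
    """A single domestic investment: 60% chance of 300k, 40% chance of 100k."""
    return 300_000 if random.random() < 0.6 else 100_000

# Over many repetitions the average converges on the 'expected' 220k...
draws = [one_shot() for _ in range(100_000)]
print(sum(draws) / len(draws))

# ...but any single decision faces a 40% chance of ending up 100k below the
# certain 200k available abroad - a risk that frequencies cannot smooth away.
```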

Getting it right

There are surely many more examples of misinterpretations leading to problems: Paul Krugman’s hatchet job on Hyman Minsky, which completely missed out endogenous money and hence the point, was a great example. The development economist Evsey Domar reportedly regretted creating his model, which was not supposed to be an explanation for long run growth but was used for it nonetheless. Similarly, Arthur Lewis lamented the misguided criticisms thrown at his model based on misreadings of misreadings, and naive attempts to emphasise the neoclassical section of his paper, which he deemed unimportant.

This is not to say we should blindly follow whatever a particularly great thinker had to say. However, indifference toward the ‘true message’ of someone’s work is bound to cause problems. By plucking various thinkers’ concepts out of context and fitting them together inside your own framework, you are bound to miss the point, or worse, contradict yourself. Often a particular thinker’s framework must be seen as a whole if one is truly to understand their perspective and its implications. Perhaps, had neoclassical economists been more careful about this, they wouldn’t have dropped key insights from the past.



Debunking Economics, Part VIII: Macroeconomics, or Applied Microeconomics?

Chapter 10 of Steve Keen’s Debunking Economics explores the reduction of macroeconomics to ‘applied microeconomics’: representative agents, the macroeconomic supply/demand diagram (IS/LM), Say’s Law and more. The chapter is aptly titled ‘Why They Didn’t See It Coming’ – the reason, of course, being that the very premises of their models assumed away major episodes of instability.

Say’s Law

Say’s Law is the proposition – first put forward by its namesake, Jean-Baptiste Say – that:

Every producer asks for money in exchange for his products, only for the purpose of employing that money again immediately in the purchase of another product.

In other words: money is neutral, and the economy operates as if people are directly bartering goods between one another. Whilst individual markets may not clear, there cannot be a net deficiency of demand in all markets, and employment is largely voluntary, save perhaps that induced by ‘frictions’ as markets adjust. Say’s Law is rarely referenced explicitly by modern neoclassical economics, but it still lives on at the heart of many models – for example, the ‘equilibrium’ in Dynamic Stochastic General Equilibrium (DSGE) models assumes that all markets clear.

Keen notes that Keynes’ own formulation and refutation of Say’s Law were clumsy and turgid. Instead, Keen opts for Marx’s critique, which was far more concise and lucid. Keynes actually included Marx’s critique in his 1933 draft of The General Theory, but eventually eliminated it, probably for political reasons.

Say’s Law relies on a simple claim: the structure of a market economy is Commodity-Money-Commodity (C-M-C), where people primarily desire commodities and only hold money for want of another commodity. But Marx pointed out that, under capitalism, there are a group of people who quite clearly do not fit this formulation. These people are called capitalists.

Capitalist production does not take the form of C-M-C, but of M-C-M: a capitalist will invest money in production in the hope of accumulating more. As Marx put it, “[the capitalist’s] aim is not to equalise his supply and demand, but to make the inequality between them as great as possible.” Say’s Law could be said to apply in a productionless economy, but capitalism is characterised by the value or quantity of produced goods and services exceeding the value or quantity of the inputs. Hence, there will always be a surplus of money needed to satisfy capitalist accumulation, and the economy will continually be characterised by excess demand for money, and hence insufficient demand for commodities.

Keen continues by noting the obvious accounting reality that, in order for the economy to expand, credit must fill this gap, and quotes both Schumpeter and Minsky saying the same. This, along with the logic of capital accumulation, is a major spanner in the works for Say’s Law. However, I will not explore the ‘credit gap’ any further here, as Keen goes into far more detail in later chapters.


IS/LM

IS/LM is a diagram that looks a lot like demand-supply, and proposes that the interest rate and level of output in an economy are determined by two schedules: the equilibria between the different levels of investment and saving, and the equilibria between the money supply and the desire to hold money (liquidity preference). It was originally proposed as an interpretation of Keynes’ General Theory by Hicks in his 1937 review of the book, entitled ‘Mr. Keynes and the “Classics”’.

There are many problems with IS/LM. The model was a complete misinterpretation of Keynes, and basically an attempt to pass off Hicks’ own model – which was developed independently of Keynes* – as Keynes’ model. Hicks himself pointed out many of the substantive problems in his 1980 ‘explanation’ (Keen suggests it is really an apology).

The major problems are uncertainty and changing expectations. Hicks’ formulation of IS/LM uses a period of about a week, during which it is reasonable to suppose that expectations are constant. But if expectations are constant and therefore not uncertain, there is no room for liquidity preference, which Keynes justified as “a barometer of the degree of our distrust of our own calculations and conventions concerning the future.” So we must extend the time period.

Keynes’ original intent for the time period of his analysis was the ‘Marshallian’ definition of a short period – about a year. The problem is that at this point equilibrium analysis falls apart. Both curves are partially derived from expectations, and once these start changing, the curves are constantly shifting. Not only this, but since they both depend on expectations, a movement in one will affect the other, and they can no longer move independently.**

Ultimately, IS/LM reduced Keynes to a call for fiscal stimulus in the ‘special case’ that the LM curve was flat or close to flat (demand for money is ‘very high’ or infinite). ‘Later Hicks’ argued that the model should really not be intended as anything other than a “classroom gadget” – we might consider it a heuristic assumption, later to be replaced by something else (it is, in fact, replaced by DSGE past the undergraduate level). Personally I’m not sure that a model with internal inconsistencies should be used as a heuristic (and neither is Keen), and to be honest students have enough trouble understanding IS/LM that I don’t even think it qualifies as a potent tool for communication. In any case, we certainly don’t want to be referring to it in policy discussions.

Macroeconomics after IS/LM

The neoclassical economists didn’t like IS/LM either, but that was because it was not built up from the point of view of optimising microeconomic agents. Keen catalogues the ‘Rational Expectations‘ overthrowing of IS/LM and the ‘Keynesians,’ making the obvious observation that the idea that people can, on average, predict the future is stupid and, ironically, completely fails to take into account Keynes’ concept of uncertainty. He notes that, while the broad thrust of the Lucas Critique is correct, it does not justify the idea that a policy change will be completely neutralised by changes in behaviour, and neither does it justify reductionism – microeconomic models are as ‘vulnerable’ to the critique as macroeconomic ones.

Keen then documents the first attempt to model macroeconomics based on the ‘revelations’ of what he calls the ‘rational expectations mafia’: Real Business Cycle models. Again, the author of the growth model on which they were built – Bob Solow – later repudiated them. In his words:

What emerged was not a good idea. The preferred model has a single representative consumer optimizing over infinite time with perfect foresight or rational expectations, in an environment that realizes the resulting plans more or less flawlessly through perfectly competitive forward-looking markets for goods and labor, and perfectly flexible prices and wages.  How could anyone expect a sensible short-to-medium-run macroeconomics to come out of that set-up?

This is obviously ridiculous. There have, of course, been developments since the core RBC model was invented – the New Keynesian DSGE models include elements such as sticky prices, bounded rationality, imperfect (though, as far as I know, not asymmetric) information and oligopolistic market structures. However, all of these preserve the neoclassical core of preference-driven individualism, assume equilibrium, and keep one or two representative agents (any more and many of the core assumptions fall apart, in a clear and ironic example of emergent properties). The models also suppose that the economy has ‘underlying’ tendencies towards stability, masked only by the pesky aforementioned real world ‘imperfections.’

Despite all these developments, neoclassical economists have continued to be led to absurd conclusions, such as the ideas that unemployment during the Great Depression was voluntary (Prescott), that the recession predated the collapse of the housing bubble (Fama), and that business cycles are caused by the Fed suddenly deviating from its previous mandate for no reason (Taylor, Sumner, Friedman). And, of course, none of them foresaw the crisis, and they can only model it with some serious post-hoc ad-hocery.

It’s worth noting that representative agents, in and of themselves, are not a problem – the problem is that neoclassicism must stick to a small number of them to preserve its assumptions, and for some reason refuses to use class as a distinction. Keen’s own models could be said to use representative agents, but the fact that he doesn’t build his model up from microfoundations means that he is far less hamstrung when adding new aspects and dynamics to the model. The idea that the macroeconomy cannot be studied separately from the microeconomy is deeply ascientific – the kind of thing that the real sciences learned to abandon long ago. It’s time economists caught up.

*Actually it was developed largely in opposition to Keynes – by Hicks, Dennis Robertson and others.

**Readers might notice a similarity between this ‘small scale versus large scale’ critique of IS/LM and Piero Sraffa’s argument against diminishing marginal returns.



Are Static Neoclassical Models Useless?

If somebody presented you with a static snapshot of weather patterns, it would be clear that the model was fairly useless; as it didn’t capture dynamics, it would have nothing to tell you about the weather. Similar problems apply to neoclassical models: once you attempt to incorporate dynamic events, they don’t just become ‘wrong’, they become completely irrelevant.

Comparative Advantage

I posted recently about how comparative advantage (CA) is irrelevant for developing countries, but I’d like to expand: it is completely irrelevant for arguments about protectionism. The problem is that once you take into account the effects of tariffs on the productivity of the industries they are aimed at, it has nothing to say – not just for developing countries, but for any industry whatsoever. CA assumes that every country has an innate productive capacity in each industry that does not change over time. If you froze the world, CA might be a persuasive argument for free trade, but in a dynamic economy it is completely irrelevant.


Demand-Supply

Say the price of a necessity goes up due to a supply shortage. Modelling this as a simple ‘price increase’ suggests that demand would go down. However, if people are expecting the supply shortage to continue or worsen, then isn’t it more probable that demand will go up? Mainstream economists might have an answer: the price increase can be modelled as a movement of the supply curve, whilst the new information about the supply shortage can be modelled as a movement of the demand curve:

(the demand curve shifts from D1 to D3, and the supply curve from S1 to S2)

Problem solved. Except this movement of the demand curve leads to a higher price, which in turn would cause people to alter their expectations of the shortage, leading to another movement, and so forth. This may be a highly specific example, but it touches on a central Sraffian criticism of these models, which is that the curves cannot move independently; a change in one creates ripple effects that violate ceteris paribus. Thus, taking a picture of the state of the curves at any one time tells you about as much as a photo of a moving train tells you about its velocity.
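The feedback can be iterated explicitly. In this sketch the curves are linear, the expectations rule is made up, and for simplicity only the demand side is iterated – all my own assumptions, not a calibrated model. Each price rise shifts the demand curve again, so the ‘static’ diagram never captures where the market actually goes:

```python
# Iterating the expectations feedback: a price rise shifts demand out,
# which raises the price, which shifts demand out again.

def clearing_price(demand_shift, a=10.0, b=1.0, c=2.0, d=1.0):
    """Qd = a + demand_shift - b*P,  Qs = -c + d*P;  solve Qd = Qs for P."""
    return (a + demand_shift + c) / (b + d)

prev_p = clearing_price(0.0)     # price before the supply scare
prices = [prev_p]
shift = 2.0                      # news of the shortage shifts demand out
k = 0.5                          # assumed strength of the expectations feedback
for _ in range(6):
    p = clearing_price(shift)
    shift += k * (p - prev_p)    # rising prices stoke expectations of shortage
    prev_p = p
    prices.append(round(p, 3))

# Each round's demand shift induces another; whether the process settles
# depends entirely on the assumed feedback strength k.
print(prices)
```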


IS/LM

Both ‘curves’ are partially derived from expectations – one from expectations of returns on investments, and one from expectations of future needs for liquidity. Therefore, a criticism similar to that of Demand-Supply applies – movement of one curve alters expectations and so affects the other. This creates a feedback loop that simply cannot be captured by two intersecting curves. At any one moment, the diagram might be said to be ‘right’ (putting aside other objections), but this doesn’t mean it is useful.

I expect economists won’t appreciate a whistle-stop tour of their models that claims to have debunked them, but at the same time I expect they’d agree that the above ‘weather’ example would be so obviously flawed that it would not need to be refuted formally. In order to avoid special pleading, economists will have to argue that the economy is at or close to equilibrium, rather than a dynamic system. I do hope nobody claims this after 2008 (or the recurrent crises for centuries before that).
