The World Bank’s Chief Economist Says What the Problem Is

I don’t often read what Paul Mason writes, but a recent essay of his points to an interesting draft paper by Paul Romer, The Trouble With Macroeconomics. Romer is an academic economist and sometime entrepreneur who is now Chief Economist of the World Bank.

To me, there is an interesting part and an uninteresting part to the paper. The uninteresting part is the main theme. He takes his cue from Lee Smolin’s 2007 book The Trouble With Physics, which I read in 2008.

Let’s do the interesting part first. It is all about causality. Romer says that much of macroeconomic modelling is based on assumptions which are increasingly hidden, and that, unfortunately, the sophistication of the hiding lends unwarranted plausibility to the models. He starts by asking how distinguished macroeconomists can get the world so wrong. For example, many have said that monetary policy ultimately has little effect on the rate of inflation. He uses a clear example, the “Volcker Deflation”, named after the then-Chairman of the Fed, to show that monetary policy in that case had a clear causal 500-basis-point influence on the rate of inflation. So how can people say it doesn’t?

The way that assumptions of causality are hidden is, generally, that general equations are proposed and then fitted to data, but the number of (possibly) independent variables in the models is way higher than can be mathematically solved for. Should I rather say “waaaaaay higher”? For example, seven equations, and fifty variables?

He details some specific ways in which this happens, with Real Business Cycle (RBC) models and their extensions using ideas from Dynamic Stochastic General Equilibrium (DSGE) theory. Big words, but the examples are somewhat easier than the words – simple and plausible. He demonstrates some subtle ways in which causal assumptions are hidden.

I imagine that it is well-known amongst economists that you can’t solve for even one of fifty variables in seven equations without fixing the values of most of them. I mean, they must have learnt that in their basic undergraduate maths courses. I take the power of Romer’s critique to lie in his laying bare how futile the approach can be, by showing not only how modern-day model assumptions can gain prima facie plausibility when hidden well, but also how easily they can be tweaked to yield a desired result. I found myself intrigued and entertained, and very glad I’d read the paper. I’m guessing you will be too.
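To make the underdetermination point concrete, here is a toy sketch of my own (nothing like Romer’s actual models – the matrix, the data, and the choice of which variables to fix are all made up): a random linear system of seven equations in fifty unknowns fits the same “data” perfectly under two different sets of assumptions about the fixed variables, while disagreeing about the value of the variable we claim to have solved for.

```python
# Toy illustration of an underdetermined system: 7 equations, 50 unknowns.
# A modeller must pin down most variables by assumption, and different
# assumptions yield different "results" for the variable of interest.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(7, 50))   # 7 linear equations in 50 variables
b = rng.normal(size=7)         # the "observed data"

def solve_with_assumptions(assumed_values):
    """Fix 43 of the 50 variables by assumption, then solve the
    remaining square 7-variable system exactly."""
    fixed_idx = np.arange(7, 50)            # variables fixed by assumption
    free_idx = np.arange(0, 7)              # variables we "solve for"
    b_adj = b - A[:, fixed_idx] @ assumed_values
    return np.linalg.solve(A[:, free_idx], b_adj)

# Two different sets of assumptions, both consistent with the same data:
x_a = solve_with_assumptions(np.zeros(43))
x_b = solve_with_assumptions(np.full(43, 1.0))

# Both choices satisfy all seven equations exactly, yet they disagree
# about the first "solved" variable.
print(x_a[0], x_b[0])
```

The point of the sketch is that nothing in the data distinguishes the two answers: the choice of assumed values, not the equations, determines the result.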

So let me move to the “uninteresting” part.

Smolin’s book on modern-day particle physics is largely a critique of a dogma, string theory, and how it is not helping physics advance in a way Smolin (and lots of others) would like. The main technical problem, as he puts it, is that string theory is a construction which is resilient (some might prefer to say “impervious”) to judgement through available data. String theorists will tell you that the experiments that could say whether their main contentions are right or wrong are well beyond current experimental capability, and that string theory adapts straightforwardly to any current experimental results. Naysayers (of whom I know a few, casually) will say that this makes the theory useless for finding out anything about the world. They might point to Karl Popper’s suggestion that true science and the pursuit of knowledge is about making claims that can be falsified, and then testing them out.

Popper, though, ignored the plain fact that even experimental physicists spend 90% or more of their work time performing mathematical calculations, and good mathematical claims can’t be falsified (otherwise they’re not good). Indeed, that’s the purpose of (good) math. His falsification test goes no way to explaining how good science can be so heavily dependent on a form of knowledge whose very claim to efficacy is that it cannot be falsified.

String theory in principle could be like that; the non-falsifiable bit that underlies the rest. But it doesn’t seem to be so (at least, not after reading Smolin). In contrast to math in general, many particle physicists pay no attention to string theory and do not suffer. Smolin characterises the social features of string theory and its adherents, points out they form a closed group which cross-validates its own advances, and accommodates criticism by absorbing it internally. And that it cannot tell him much of what he wants to know about the constituents of material stuff and the world we live in.

The problem with summarising Smolin’s thesis here is that my attempt is necessarily brief and necessarily superficial. The wider problem with Smolin’s observations as critique is that they accommodate pretty much all areas of intellectual endeavour (even anti-intellectual endeavour). Similarly, much of Romer’s paper concerns how respected economists interact socially to accommodate intellectual disagreement. And, similarly, people do much the same all over.

Here are some examples from one of my areas. Thirty years ago, my work was most useful in the burgeoning field of Artificial Intelligence, whose name was coined by the great John McCarthy. Bert Dreyfus was the nemesis of most AI people, because of his critique What Computers Can’t Do, now called What Computers Still Can’t Do (here’s a review by John Haugeland). Another critic was John Searle, whose Chinese Room example showed that, whatever computers were doing with symbols, they were not understanding in the sense in which AI people claimed computers were on their way to artificial understanding.

Dreyfus’s original critique addressed early work in AI, for example Newell and Simon’s General Problem Solver (a very rudimentary propositional-logic manipulator – of course, it was sixty years ago, in 1957, the very first one!). Simon claimed that within ten years a computer would be world champion in chess and would prove a substantial and novel mathematical theorem, and that most theories in psychology would take the form of computer programs (according to ).

Dreyfus laid out those claims – and others used, inter alia, to obtain US government funding for the research – and rubbished them. Of course he was right. The chess goal took forty years (1997), not ten, and was largely dependent on inventive use of copious amounts of memory and stored games, as well as other techniques foreign to GPS – and almost no logic, and most certainly nothing that Dreyfus had pointed out that computers could not do. Computers have proved novel theorems, but in very specific areas of algebra, not exactly in the mainstream of the main journals (some work in the well-regarded group at Argonne in the 1980s and 1990s may be found at ). And we are totally not into psychology as computer programs, and long may that last. But Dreyfus was seduced into predicting what could and could not be done. He said: chess, no; Go, also no. And of course he’s now wrong, too.

I attended a talk by Dreyfus to a group of AI people at Stanford in the 1980s, and witnessed what seemed to me like an intellectual mobbing. It was very unenlightening, despite the capabilities of those present. Socially, it was eye-opening, as well as embarrassing (to me). It’s not just economists who engage in semi-public brouhahas.

Dreyfus is right that computers couldn’t do those things he said they couldn’t do. But wrong on his specific predictions about games. His AI critics were right that computers were going to do some of those things anyway. Searle is right that computers can’t understand in the way in which he means it. His AI critics are right that computers are going to be doing things that many of us are happy to call “understanding” (maybe the meaning of the word will change?). Smolin is right that string theorists will carry on obliviously, and Romer that economists will carry on hypothesising their way out of unsolvable modelling problems. All makes for good gossip, don’t it?
