In my talk with John Cochrane last week, he took some positions with which I disagree, but in the interest of time I did not argue. One of those positions (I will get to the other one in a different post, but it is related) is that mathematical modeling in economics is very constructive. This issue got batted around recently on Twitter, which I found out by reading the latter part of Noah Smith’s post. Smith writes,
literary treatments of economic phenomena are usually very vague. This can cause them to function as Rorschach tests; each person can interpret Keynes or Minsky or Hayek to mean something a little closer to their desired conclusion. Mathematical modeling precludes that.
But mathematical modeling is also vague. It is vague in how the model connects to the real world. Martin Bronfenbrenner once talked about a missing “applicability theorem.” To what real-world measurements does your theory apply? The answer often rests on claims that are as vague and tenuous as any literary paper. For example, the famous Solow growth model is disconnected from the real world to the extent that the real world does not present us with K and L, which stand for capital and labor in the model. There are too many varieties of capital and labor. Piketty’s famous “r>g” (return on capital greater than economic growth) becomes hopelessly tangled in measurement controversies, particularly concerning r.
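To make the point concrete, here is the textbook Cobb-Douglas version of the Solow model (a standard presentation, not notation taken from Solow's or Piketty's own papers):

Y = K^{\alpha} (A L)^{1-\alpha}, \qquad \dot{K} = sY - \delta K

The model runs on a single homogeneous capital stock K and a single homogeneous labor input L. The missing applicability theorem is the step that tells you which real-world measurements count as K and L, and the national accounts do not hand you a unique answer.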
The bottom line is that clear writing in economics is rare. It is rare in articles without equations. It is rare in articles with equations.
On the anti-math side, one argument is that the world is too complex to be reduced to equations. Equations “leave out” too much. Smith writes,
On the other hand, the cost of mathematical modeling is that it’s often mathematical tractability, rather than realism, that drives the modeling choices. In other words, bank runs don’t happen exactly like Diamond & Dybvig wrote; they just chose assumptions that made the math relatively easy to solve. This is why models like the ones that won the Nobel this year shouldn’t be regarded as the way financial crises actually work, but simply as ways that they might work. Nailing down exactly how they do work is a task that will be very difficult and take a very long time.
This is true as well. But it is equally true of economic analysis that does not contain equations. Regardless of whether you use prose or equations, you have to simplify.
Using math can help you avoid making claims that are internally inconsistent. For example, you cannot claim that a perfectly competitive firm in long-run equilibrium is producing at a point where marginal cost is greater than the minimum of average cost. For a perfectly competitive firm, price equals marginal cost, and in the long run price equals average cost. A little calculus shows that marginal cost only equals average cost at the minimum of average cost.
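For readers who want the calculus spelled out, here is a standard sketch (my notation, not anything beyond the textbook result): write total cost as C(q), marginal cost as MC(q) = C'(q), and average cost as AC(q) = C(q)/q. Then

\frac{d}{dq} AC(q) = \frac{q\,C'(q) - C(q)}{q^{2}} = \frac{MC(q) - AC(q)}{q}

so average cost falls where MC < AC, rises where MC > AC, and is flat exactly where MC = AC, which is therefore the minimum of AC. Combine that with the competitive conditions P = MC (profit maximization) and P = AC (zero long-run profit), and the firm must be producing at the minimum of average cost, so the inconsistent claim cannot survive the algebra.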
Overall, what Cochrane likes about math is exactly what I don’t like. That is, when someone comes up with an elegant model, economists flock to using that model. Cochrane sees this piling on as adding to the insights of the seminal paper, proving its value. I see it as playing the game of maximizing the chance of getting your paper published, a game for which this strategy works really well. The inevitable consequence is the accumulation of a literature that is ridiculous. The Overlapping Generations Model is a particularly salient example, constantly re-used without a credible applicability theorem.
Maybe even if you took out the math, economists would still behave like lemmings, lavishing attention on one cleverly expressed idea and overlooking its shortcomings. But my guess is that the academic conversation in economics would be much improved without the math.
A few years ago, National Affairs published an essay in which I put together my thoughts on the state of economic methodology. My conclusion:
Young economists who employ pluralistic methods to study problems are admired rather than marginalized, as they were in 1980. But economists who question the wisdom of interventionist economic policies seem headed toward the fringes of the profession.
In this respect, the barriers to effective theory in economics are different and perhaps more worrisome than was the case in 1980. The contemporary state of economic theory reflects a broader crisis in the social sciences and a deepening cleavage between the college campus and the rest of society.
The role of institutions is largely invisible to models. One can interpret "institutions" as "technology" in Solowian growth, but insofar as access to better institutions is partly an ideological decision rather than a capital investment in innovation, "better technology" is not merely a matter of investment. Advocates of prosperity must push back against anti-capitalist ideologies and advocate explicitly for property rights, rule of law, and economic freedom. Insofar as the Chinese SEZs were modeled on Hong Kong and Singapore, those models had been available for replication much earlier, but it took a decision by Chinese leaders to pilot the SEZs. Lee Kuan Yew to the rescue!
Of course, the SEZs then led to China's massive growth. What model would or could have predicted that growth (unless institutional changes such as SEZs were assumed in the models as exogenous factors, distinct from the mechanics of the model)?
Similarly, insofar as experimentation with new jurisdictions and zones is an active field globally, growth models will be blind to the potential impact of such new jurisdictions.
Insofar as moving from global poverty to prosperity necessarily involves institutional innovation, and insofar as mathematical models completely miss institutional innovation (at least at present, and please point me to any models that do incorporate institutional innovation), mainstream economics based on mathematical modeling is missing the most important moral and practical issue that economics holds for humanity.
Romer, of course, deserves considerable credit for advocating for charter cities. He basically saw this. It is a pity that his example has not been more widely followed among economists. Here is a piece he wrote that is much better than his well-known TED talk:
https://www.cgdev.org/publication/technologies-rules-and-progress-case-charter-cities
"Using math can help you avoid making claims that are internally inconsistent. For example, you cannot claim that a perfectly competitive firm in long-run equilibrium is producing at a point where marginal cost is greater than the minimum of average cost. For a perfectly competitive firm, price equals marginal cost, and in the long run price equals average cost. A little calculus shows that marginal cost only equals average cost at the minimum of average cost."
I feel like you missed an opportunity here to point out how internal consistency is almost entirely beside the point: at best necessary for a good model, but nowhere near sufficient. One could start with how "a perfectly competitive firm" is something that does not exist, nor can exist, in the real world, and is in fact a construct to make the math easy. One could then point out that demonstrating the model is correct, by actually checking whether all these firms that can't exist are charging the same price that also equals their lowest average cost, is likewise impossible, since defining cost in the real world is terribly difficult. (Ask an accountant about activity-based costing sometime.)
So we have a mathematical model about the behavior of firms that can't exist and how they price goods that we can't test. Yes, the model is internally consistent, but only because we have defined
'perfectly competitive firms' such that they behave the way the model suggests they should. In other words, the model is a mathematical tautology, telling us 2 + 2 = 4 because 4 = 2 + 2.
What have we gained? A false sense of insight, and a false sense that we should be able to look at the world and say "aha! These firms are acting contrary to the model! We should do something!"