Links to Consider, 7/3
Tyler Cowen on Meta-Science; Noah Smith on economic pessimism; Kling and Mingardi on human evolution and agency; Caplan's "ideological Turing test" tested
In just about any scientific literature, there is an undercurrent of tacit knowledge which is not very directly expressed in any of the published pieces. That knowledge may cover the following issues, among many others:
1. How the rules of the conversation operate, and how a body of literature on a question coheres.
2. Why certain papers and methods are not taken seriously any more.
…Many of the people who claim the mantle of science might cite published papers, but in fact they have little or no understanding of science as a conversation and a body of literature.
One phenomenon that interests me about economics is that ideas get taken out of the context in which they were developed. For example, there are many uses of the “overlapping generations model,” but does anyone remember what the motivation was for developing the model in the first place?
I posed the following prompt to ChatGPT4:
As a historian of economic thought, explain the motivation behind Samuelson's "exact consumption loans" or overlapping generations model
which responded in part
One key aspect of Samuelson's model is the introduction of consumption loans. In the traditional economic models, individuals are assumed to consume only out of their own current income. However, Samuelson recognized that individuals might have different income profiles throughout their lives and that borrowing and lending can facilitate consumption smoothing over time. By incorporating consumption loans, he was able to examine the intertemporal allocation of resources and the effects of interest rates on consumption and saving decisions.
Subsequently, economists have used this model in a variety of ways, not so much because it fit the problem but because it was “there” in the literature. The most ridiculous (in my view) is treating it as a money demand function.
In macroeconometrics, why did “vector autoregression” become legitimate? Why did “calibration” become legitimate? Why did input-output analysis fall out of favor and get replaced by the production function? Why did multiple regression fall out of favor in macroeconomics but become standard for explaining differences in growth across countries?
I read Tyler as saying that there is important information lost about a theory when you do not examine the process that led up to it.
If there’s one economic measure that has been unprecedentedly bad in the years since the pandemic, it’s real wages. Since the end of 2020, real hourly compensation has fallen by more than it has in America’s entire postwar history. Not even in the inflation of the 1970s or the Great Recession of the late 2000s and early 2010s did compensation fall so much
If this were a GDP factory, then this would mean that workers are getting a smaller share of GDP. That would show up as a sizable increase in the profit share.
But I think that part of it is a shift in the mix of workers toward less-skilled, lower-paid workers. That shift drives down average nominal GDP per worker, possibly explaining the measured fall in productivity and real wages.
Alberto Mingardi and I discussed Michael Tomasello’s The Evolution of Agency. The energy picks up during the question period, about half way through.
C.O. Brand and others put Bryan Caplan’s “ideological Turing test” to a scientific test.
An argument has “passed” the Ideological Turing Test if it is rated as highly, if not higher, than the average arguments provided by proponents of that view, rated by those proponents. This ‘relative’ criteria takes into account the agreement within proponents of the same argument, as a baseline to compare opponent ratings. We developed this measure with three separate, often polarising topics; Brexit, Covid-19 vaccinations, and veganism. We found that, when asked to give three reasons for and against their position, participants unsurprisingly agree with reasons provided by their own ideological proponents far more than those provided by their ideological opponents. On the whole, participants from both sides, across all topics, were equally “bad” at passing the relative criteria, however there was variation in the pass-rate between topics. Only around 54% could pass within the topic of Covid-19 vaccinations, whereas around 71% passed in the topic of veganism, with Brexit achieving around a 64% pass rate for both sides.
One interesting result:
Against our pre-registered predictions, we found no evidence that passing ITT is predicted by self-reported time spent actively researching the topic
This is actually consistent with the motivated-reasoning literature. Delving deeper into a topic does not make you more objective about it.
Pointer from Tyler Cowen.
“Subsequently, economists have used this model in a variety of ways, not so much because it fit the problem but because it was “there” in the literature.”
I would guess that unless you are at the very top of the hierarchy, it is somewhere between difficult and impossible to get anyone to take the time to learn and understand a model they aren’t already familiar with. Hence all the questionable applications of models everyone was forced to work through in first-year macro.
Arnold
Responding to Tyler’s comments on science.
I’ve read somewhere (forgot where) that ‘science’ really isn’t a valid idea.
Why?
There is biology, physics, chemistry, geology, etc. But what is “science”?
I do remember Clerk Maxwell writing that reality may be composed like a series of magazine articles, not as one continuous story like a book.
‘Science’ as a word or idea seems merely to borrow status from technology, mathematics, etc., and put it to use for other purposes.
As Hayek and others explained, social researchers have physics envy. To their profound detriment.
Thanks
Clay