Robert Wright interviews me. The central topic is my Fantasy Intellectual Teams idea. I am more animated than usual. I don’t think that either of us was audience-focused; it was more like a conversation between the two of us, to which you can listen in.
Bob and I share a concern about the rise of tribalism. FITs is a rather exotic idea to try to combat it, but in the podcast I explain the rationale for it and why the scoring system I set up is clever (in my opinion). Listen carefully and take notes.
I agree with the Substack team that Society has a Trust Problem. I say that people decide what to believe by deciding who to believe.
As an aside, I would say that Scott Alexander’s attempt to find an evolutionary explanation for motivated reasoning is on the wrong track, because it treats reasoning as an individual problem and not a social one. My hypothesis for motivated reasoning goes like this.
We come to admire someone. One person might admire Rogan. Another person might admire Fauci.
We wish to be admired by those that we admire. If you admire Rogan, you imagine how good it would feel to have Rogan call you a good guy.
So when Fauci says something you don’t think Rogan would like, you are motivated to view it in a negative light, to give it a negative spin. And when Rogan says something, you are motivated to give it the most favorable interpretation, to give it a positive spin. Even if the issue is one on which the most credible experts (like Zvi Mowshowitz or Emily Oster) happen to side with Fauci.
Hence, motivated reasoning. It arises from our desire to imagine ourselves admired by the people we admire.
In the conversation with Wright, I use the Rogan/Fauci contrast to make a different point, about the heuristics that we use in choosing what to believe. I say that concerning the virus, the 20th century heuristic was to believe the authority figure, the person with the credential, the person with the institutional affiliation. Fauci. The 21st century heuristic is to believe the entertainer, the “edgy” person, the person who can attract a following. Rogan. I say that I believe neither, and instead the scoring system for Fantasy Intellectual Teams was an attempt to come up with a better heuristic. The scoring system raises the status of Emily Oster and Zvi Mowshowitz.
Tribalism is such that many people use the heuristic of believing someone based on political leaning. This is not a good heuristic.
I explain how pundits can be judged on how well they: demonstrate understanding of both sides of arguments; evaluate research critically even when the results support their side; and formulate opinions in terms of well-defined outcomes, stating beliefs about likely outcomes in probabilistic terms.
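A minimal sketch of how a rubric along those lines might be tallied. The category names and point values here are illustrative assumptions, not the actual FITs scoring rules:

```python
# Hypothetical sketch of a FITs-style scoring tally.
# Category names and point values are illustrative, not the real FITs rules.

SCORING_CATEGORIES = {
    "steelmanning": 3,      # demonstrates understanding of the other side's argument
    "self_critique": 2,     # critically evaluates research that supports one's own side
    "bet": 3,               # states a belief about a well-defined outcome in probabilistic terms
}

def score_pundit(events):
    """Sum points for a list of observed scoring events.

    `events` is a list of category names, one per qualifying
    statement found in the pundit's essays or podcasts.
    Unrecognized categories score zero.
    """
    return sum(SCORING_CATEGORIES.get(e, 0) for e in events)

# Example: a pundit who steelmanned twice and made one probabilistic bet.
total = score_pundit(["steelmanning", "steelmanning", "bet"])  # 9 points
```

The point of weighting categories rather than counting raw "likes" is that the rubric rewards specific intellectual behaviors, not popularity.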
Near the end, Bob gives a perspective on Substack that I had not considered. I like to think of Substack as the anti-Twitter. But Bob argues that Substack is actually parasitic on Twitter. Substack gave some of the most popular tweeters a way to monetize their followings. And a lot of Substack traffic comes from Twitter.
Six months after I suspended the FITs project, some potential investors have expressed interest in it. So why am I spurning them?
Part of it is that I don’t have the mentality of a fund-raising entrepreneur. I feel guilty taking someone else’s money to put into a project that I am afraid could easily fail.
Another (better?) excuse is that for me personally, the interesting part of the project is over. I wanted to come up with a good set of scoring categories for intellectual writing. With version 2 of the project, I was happy with the scoring categories that we used.
What remains to be done is not so interesting and not so much in my skill set. The next step would be to hire and train people to score essays and podcasts. And one would need to hire people to carry out a marketing and public relations campaign that gets Fantasy Intellectual Teams a lot of mindshare.
Excellent points about motivated reasoning. Thanks for explaining why you've reduced your FITs idea.
"What remains to be done is not so interesting and not so much in my skill set. The next step would be to hire and train people to score essays and podcasts."
I don't think so. The Beta-1 (May) version of FITs has the key non-scalable problem of "who scores?"
I have lots of brainstorm ideas, which I'll add here in the coming days. But the key one is that some group of 8 - 12 "owners" should be in a club, and draft intellectuals, and score them themselves. They have to promote the ones on their team AND score whether those promoted by the other owners are worthy.
I'm now thinking your great one-time scoring system is too detailed - sort of like how Apple took the three-button Xerox Alto mouse (I played on some at Stanford!) and made it a one-button (sometimes double-click) mouse.
Twitter & FB use the too-crude "like". But it's easy. And scalable.
We (FI club owner-promoters of teams) need an easy way to score other owners' intellectual posts. Usually two scorers, who must be close enough, or else a third is called in?
We need more revised rules, and more Beta tests for club-scoring. The "investors" could provide money prizes (starting at $1,000?) plus bragging rights to get a LOT of effort from owners in clubs to score the essays, posts, podcasts (please more transcripts).
FITs is a GREAT idea. Maybe you're getting too old, too jaded, too busy IRL to live so much with URLs.
Arnold Kling's "hypothesis for motivated reasoning" is unnecessarily roundabout, and its second step is implausible: any desire a 'person in the street' might have to be admired by a remote public figure whom she admires normally isn't strong enough to motivate her reasoning.
An alternative -- and, I think, more plausible -- hypothesis for motivated reasoning goes like this:
1. I (who am nobody) admire public figure X. You (who are nobody) admire public figure Y.
2. It bothers me when Y criticizes X, because:
2a. I admire X.
2b. I want X to have high status.
2c. I want to bask in X's high status.
2d. I want to be right, and so I want you to be wrong. You and Y being right would lessen me.
2e. If I am tribal, and if my tribe, too, admires X, then by extension you and Y being right would also lessen my tribe.
None of this requires me to want remote public figure X to admire me.