On February 21, about twenty of our paid subscribers joined me in a Zoom conversation with Kurt Gray, director of the Deepest Beliefs Lab. Here are a few of my notes from the discussion.
Contra Jonathan Haidt, Gray believes that harm-avoidance is the essence of moral reasoning. That is, if someone says that “X is wrong,” you can be pretty sure that the person will say that “X causes harm.” People can differ over the amount of harm that X causes, and then arrive at different moral conclusions. But if they agree that X causes a lot of harm, then they will agree that X is wrong.
In the book The Mind Club, co-authored with the late Daniel Wegner, a moral situation is one in which an agent causes harm to a patient. An agent is someone perceived as being able to think and plan. A patient is someone perceived as being able to feel, especially to feel fear and pain. A human being can be both an agent and a patient, but in a moral situation we tend to separate the two. The example I use is that we consider Derek Chauvin the agent and George Floyd the patient.
People in positions of power tend to be viewed as agents in this framework. This relates to Martin Gurri’s “revolt of the public.” Because elites no longer control information, the public perceives the “little guy” as the patient victimized by the agent. Institutions lose support, because we cannot see the leaders of institutions as well-meaning and fallible—instead we see them as evil.
Gray believes that to bridge political divides, people need to hear stories from the other side. Gray co-authored a paper that says,
In moral and political disagreements, everyday people treat subjective experiences as truer than objective facts.
In particular, if you tell a story about how you avoided harm by doing something, this will at least get someone on the other side to see your position as rational from your point of view. For example, if you are against gun control, you might tell a story of how carrying a gun enabled you to avoid harm.
Another way to see the other side as rational and human is to see a list of positions on which you agree. It turns out that most people expect people on the other side to be worse than they really are. They might think that 15 percent of people in the other political party are ok with child pornography, when in fact it is less than 1 percent. So by showing a Democrat a list of things that nearly all Republicans agree are wrong (murder, slavery, etc.), you can enable Democrats to see Republicans as human. Of course, partisan spokesmen try to do the opposite: to instill the belief that the other side has no moral beliefs whatsoever.
Gray sees Twitter as useful for academics as a way of disseminating their work and discovering the work of others. But it causes harm in the general population because its algorithm is tuned to increase engagement, and engagement goes up when people are outraged.