Re: "We tend to have an asymmetric view: I think for myself. People who disagree with me are dupes of others. But pretty much everyone takes this stance, which means that it can’t be right. The true story is that all of us depend on 'complex chains of trust and testimony.'”
An educated person is more prone than the person in the street to think herself the personification of reason. Precisely because she is more skilled and practiced at argument, she is abler at camouflaging bias or interest or status as truth, even to herself.
And perhaps the person in the street is more likely to be aware that "deciding whom to trust" is a crux of politics.
Robert Trivers wrote something very similar in The Folly of Fools:
“Although the critical evidence is lacking for adult humans, smarter ones, as we saw in monkeys and apes, are expected to practice more deception, not less, and more skillfully. By theory, they are also expected to be more self-deceived than the less gifted. This creates special dangers - high intellectual ability combined with high self-deception - for example, a malevolent person who is good at being malevolent. It is easy for the intellectually gifted to argue otherwise, that their special talents will save them from the failings lesser mortals are prone to, but by evidence and logic, we expect the opposite. Until shown otherwise, we should assume that the intellectually gifted are often especially prone to deceit and self-deception, including in many of the academic disciplines they produce."
I agree. The first few chapters were pretty good, where he described his own research. Then he strayed into topics that were not his areas of expertise, particularly on economics and incentives.
Why does a "truly" educated person tend to believe they understand others better than they understand themselves? Why can't one be wrong in that belief? Maybe others don't come to their beliefs based on who to believe as much as a "thoughtful" person believes they do.
This is a timely essay for me. I have been bothered a good bit by the formulation of “we decide what to believe by deciding who to believe” but couldn’t put my finger on why till last night. It isn’t the part about the long chains of trust and testimony, but that people don’t really have beliefs over most subjects. That is to say, I don’t think people actually hold beliefs on a variety of subjects the way they hold beliefs about whether their car is still in the driveway or whether it is currently raining; their so called but not real beliefs don’t change their behavior in a way that predicts when their beliefs change.
I think it might be more true to say people believe in particular people and follow what they say to say they believe. An analogy would be that my daughters don’t have a car, but they follow me into the car and thus wind up at the same destination. If I change destinations so do they, often without noticing or caring much, so long as we arrive at the same place.
This makes everyone sound so unmoored, like they’ve no conception of the good, of their own, or no ability to judge for themselves what advances it or injures it. Perhaps in that case it hardly matters what or whom they “believe”.
"Pundits and aides vouched for President Biden’s cognitive health, until they didn’t."
All that vast intelligence and long, long, very long, education and they can't figure out their great leader has dementia. Virulent! Unfair! Those people clearly chorused Let's Go Brandon! Most amusing.
So I asked my browser AI "How should I decide whether to believe something?"
It produced:
"Deciding What to Believe
When deciding whether to believe something, consider the following factors:
Evidence: Is there credible, verifiable evidence to support the claim? Look for multiple sources, including primary sources whenever possible.
Consistency: Does the claim align with established facts, scientific consensus, and logical reasoning?
Expert Opinion: Are experts in the relevant field endorsing or rejecting the claim? Consider their credentials and areas of expertise.
Motivation: Are there potential biases or motivations behind the claim, such as financial gain, political agenda, or personal interest?
Context: Consider the context in which the claim is being made. Is it part of a larger narrative or agenda?
Internal Consistency: Does the claim make logical sense within itself? Are there any contradictions or inconsistencies?
Peer Review: Has the claim been subject to rigorous peer review and scrutiny by experts in the field?
Practical Steps
Take your time: Don’t rush to believe or dismiss something without careful consideration.
Verify information: Cross-check claims with multiple sources, including primary sources whenever possible.
Evaluate sources: Assess the credibility and reliability of sources, considering factors like expertise, bias, and motivation.
Consider alternative explanations: Look for alternative explanations or hypotheses that might better explain the phenomenon or data.
Be open-minded: Remain open to new information and perspectives, but also be cautious of confirmation bias.
Reflect on your own biases: Recognize your own biases and strive to set them aside when evaluating information."
Not too bad. I especially like "Take your time: Don’t rush to believe or dismiss something without careful consideration." How often do we really need to make such decisions?
And isn't the evidence always going to be more important than the source? Maintaining curiosity seems like a low-risk strategy.
re/ #5: Whenever I’m in a discussion, I ask others and myself, “What would it take to change your mind?” If a person flounders here, it’s worth exploring whether we’re going to have a fruitful discussion. Or are we exchanging religious views.
I believe an important mode is that of trusting statements that follow from certain frameworks of logical thought that one has independently verified. In particular, by studying probability, Bayesian logic, some formal logic/rhetoric, and game theory, I have verified the foundational statements and proofs. I try to evaluate statements -- by whomever -- based on how they follow these frameworks and reflect the constraints these impose. In some cases, people I would distrust based on past performance will on occasion use such a framework even if only that it helps them in their specific case -- in which case, hard as it is, I may agree . Example 1 : Krugman, though he is a master at taking a desired conclusion, backing it up through the logic tree to some more innocuous sounding premises, then pretends to reason forward until 'et voila'). Example 2: Scott Alexander, I generally trust -- he generally is applying a Bayesian framework. But I can analyse the arguments and often end up disagreeing with him, but the disagreement can be traced to axiomatic inputs. So, "what to believe" is not always "who to believe".
You could go meta and say that this just means trusting "some dead guys" on process or it is covered by a subset of the above rules you list, or, ok, after long-winded analysis it just means I then get to decide whose "axioms" to trust. Yet, I think it is a deeper than that -- in that there is some degree of independent verification of truth possible that can be a significant mode of looking at the world (the other heuristics (?) above needed for functioning due to limited CPU, no question). Yes, I cannot take Fermat's theorem as true based on independent verification, but most arguments on the politics/social_issues seldom get beyond requiring basic Type I/Type II analysis, sample error bounds, symmetry arguments, Arrow's theorems or knowledge of Bayesian imposed constraints, all of which can be taught at high school level. And the "axioms" are also usually quite simple, and can be yes/no'd based on actual experience (lot of the trans claims and education research reasoning).
Likely being an engineer/self selected circle makes me biased, but I do not think making decisions based on a framework of reasoning from first principles is that rare.
Re: "We tend to have an asymmetric view: I think for myself. People who disagree with me are dupes of others. But pretty much everyone takes this stance, which means that it can’t be right. The true story is that all of us depend on 'complex chains of trust and testimony.'”
An educated person is more prone than the person in the street to think herself the personification of reason. Precisely because she is more skilled and practiced at argument, she is abler at camouflaging bias or interest or status as truth, even to herself.
And perhaps the person in the street is more likely to be aware that "deciding whom to trust" is a crux of politics.
Robert Trivers wrote something very similar in The Folly of Fools:
“Although the critical evidence is lacking for adult humans, smarter ones, as we saw in monkeys and apes, are expected to practice more deception, not less, and more skillfully. By theory, they are also expected to be more self-deceived than the less gifted. This creates special dangers - high intellectual ability combined with high self-deception - for example, a malevolent person who is good at being malevolent. It is easy for the intellectually gifted to argue otherwise, that their special talents will save them from the failings lesser mortals are prone to, but by evidence and logic, we expect the opposite. Until shown otherwise, we should assume that the intellectually gifted are often especially prone to deceit and self-deception, including in many of the academic disciplines they produce."
Ironically, the book is in many places good evidence for the truth of that passage--for he, a brilliant man, writes a lot of ridiculous things in it.
I agree. The first few chapters were pretty good, where he described his own research. Then he strayed into topics that were not his areas of expertise, particularly on economics and incentives.
Why does a "truly" educated person tend to believe they understand others better than they understand themselves? Why can't one be wrong in that belief? Maybe others don't come to their beliefs based on who to believe as much as a "thoughtful" person believes they do.
This is a timely essay for me. I have been bothered a good bit by the formulation “we decide what to believe by deciding who to believe” but couldn’t put my finger on why till last night. It isn’t the part about the long chains of trust and testimony; it’s that people don’t really have beliefs about most subjects. That is to say, I don’t think people actually hold beliefs on a variety of subjects the way they hold beliefs about whether their car is still in the driveway or whether it is currently raining; these so-called beliefs aren’t real ones, because when they change, behavior doesn’t change in any predictable way.
I think it might be truer to say that people believe in particular people and go along with whatever those people say to believe. An analogy: my daughters don’t have a car, but they follow me into the car and thus wind up at the same destination. If I change destinations, so do they, often without noticing or caring much, so long as we arrive at the same place.
"We should trust people who do not dismiss contrary evidence"
My hack: Trust people who GIVE contrary evidence against their own tribe. People who care enough about the truth to risk their own status.
Evidence against interest... Hitchens, Russell...
https://jayhancock.substack.com/p/telling-truth-from-lies-with-one
This makes everyone sound so unmoored, like they’ve no conception of the good, of their own, or no ability to judge for themselves what advances it or injures it. Perhaps in that case it hardly matters what or whom they “believe”.
Great piece today! It's print and thumbtack on the office wall worthy. Thanks much.
"Pundits and aides vouched for President Biden’s cognitive health, until they didn’t."
All that vast intelligence and long, long, very long education, and they can't figure out that their great leader has dementia. Virulent! Unfair! Those people clearly chorused "Let's Go Brandon!" Most amusing.
So I asked my browser AI "How should I decide whether to believe something?"
It produced:
"Deciding What to Believe
When deciding whether to believe something, consider the following factors:
Evidence: Is there credible, verifiable evidence to support the claim? Look for multiple sources, including primary sources whenever possible.
Consistency: Does the claim align with established facts, scientific consensus, and logical reasoning?
Expert Opinion: Are experts in the relevant field endorsing or rejecting the claim? Consider their credentials and areas of expertise.
Motivation: Are there potential biases or motivations behind the claim, such as financial gain, political agenda, or personal interest?
Context: Consider the context in which the claim is being made. Is it part of a larger narrative or agenda?
Internal Consistency: Does the claim make logical sense within itself? Are there any contradictions or inconsistencies?
Peer Review: Has the claim been subject to rigorous peer review and scrutiny by experts in the field?
Practical Steps
Take your time: Don’t rush to believe or dismiss something without careful consideration.
Verify information: Cross-check claims with multiple sources, including primary sources whenever possible.
Evaluate sources: Assess the credibility and reliability of sources, considering factors like expertise, bias, and motivation.
Consider alternative explanations: Look for alternative explanations or hypotheses that might better explain the phenomenon or data.
Be open-minded: Remain open to new information and perspectives, but also be cautious of confirmation bias.
Reflect on your own biases: Recognize your own biases and strive to set them aside when evaluating information."
Not too bad. I especially like "Take your time: Don’t rush to believe or dismiss something without careful consideration." How often do we really need to make such decisions?
And isn't the evidence always going to be more important than the source? Maintaining curiosity seems like a low-risk strategy.
Pondering such questions, I find the "ethics of belief" entry in the Stanford encyclopedia of philosophy particularly engrossing: https://plato.stanford.edu/entries/ethics-belief/
How about: "Facts are not observed; they are established."
An excellent post!
re/ #5: Whenever I’m in a discussion, I ask others and myself, “What would it take to change your mind?” If a person flounders here, it’s worth exploring whether we’re going to have a fruitful discussion, or whether we’re just exchanging religious views.
I believe an important mode is trusting statements that follow from frameworks of logical thought one has independently verified. In particular, by studying probability, Bayesian reasoning, some formal logic and rhetoric, and game theory, I have verified the foundational statements and proofs. I try to evaluate statements -- by whomever -- based on how well they follow these frameworks and respect the constraints they impose. In some cases, people I would distrust based on past performance will on occasion use such a framework, even if only because it helps them in that specific case, and then, hard as it is, I may agree. Example 1: Krugman, though he is a master at taking a desired conclusion, backing it up through the logic tree to some more innocuous-sounding premises, and then pretending to reason forward until 'et voilà.' Example 2: Scott Alexander, whom I generally trust -- he is generally applying a Bayesian framework. I can analyse the arguments and often end up disagreeing with him, but the disagreement can be traced to axiomatic inputs. So "what to believe" is not always "who to believe".
You could go meta and say that this just means trusting "some dead guys" on process, or that it is covered by a subset of the rules you list above, or that after long-winded analysis it just means I get to decide whose "axioms" to trust. Yet I think it is deeper than that: some degree of independent verification of truth is possible and can be a significant mode of looking at the world (the other heuristics above are needed for everyday functioning because of limited CPU, no question). Yes, I cannot take Fermat's Last Theorem as true based on independent verification, but most arguments on politics and social issues seldom require more than basic Type I/Type II analysis, sample error bounds, symmetry arguments, Arrow's theorem, or knowledge of the constraints Bayesian reasoning imposes, all of which can be taught at a high school level. And the "axioms" are also usually quite simple and can be yes/no'd based on actual experience (a lot of the trans claims and education-research reasoning, for instance).
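To make "constraints Bayesian reasoning imposes" concrete, here is a minimal sketch of the kind of base-rate check meant, with purely illustrative numbers (1% prevalence, 90% sensitivity, 5% false-positive rate; none of these figures come from the discussion above):

```latex
% Minimal base-rate check via Bayes' rule.
% Assumed, purely illustrative numbers: prevalence 1%,
% sensitivity 90%, false-positive rate 5%.
\[
P(\text{condition}\mid{+})
  = \frac{P({+}\mid\text{condition})\,P(\text{condition})}
         {P({+}\mid\text{condition})\,P(\text{condition})
          + P({+}\mid\neg\text{condition})\,P(\neg\text{condition})}
  = \frac{0.9 \times 0.01}{0.9 \times 0.01 + 0.05 \times 0.99}
  \approx 0.15
\]
```

Even a claim from a source one trusts, along the lines of "the test came back positive, so it is almost certain," runs into this constraint: with a low enough base rate, most positives are false positives, whoever is reporting them.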
Likely being an engineer in a self-selected circle makes me biased, but I do not think making decisions based on a framework of reasoning from first principles is that rare.