
But again, we get back to trust. It's really hard not to strengthen your prior when you catch those who disagree lying to you or misrepresenting the facts, because in daily life you will run into a large number of people who will lie to you precisely because they are wrong. "Ah, I shouldn't strengthen my belief, because those liars have always been lying, so I have learned nothing" may be the correct approach -- but I don't know anybody who does this. It's only when you think those who disagree with you are arguing in good faith, and have just made _mistakes_ (which you can point out to them and, if you are correct, they will acknowledge), that you can decide that nothing more was learned here. Otherwise, 'I have evidence that the people who disagree with me do not care about the truth. I do. Therefore I am more likely to be correct than they are' is really hard to resist.


That's a good point, and one that is awkward to set up in a Bayesian framework: beliefs are contingent on other beliefs, and they all get updated at the same time. So for a two-belief system, imagine group A tells me that global warming is a big deal because of their model evidence, and then also tells me that polar bears are dying out because of global warming. If it then comes to light that they were lying or mistaken about the polar bear thing, that is also going to lower my probability that they are correct/honest about the model evidence. So the more you think a person or group is lying about certain things, the more you think they are lying about everything; seemingly unrelated evidence will make you downgrade the value of other evidence you had previously accepted, and thus update negatively on whatever probabilities were adjusted by that related evidence.
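As a toy illustration of that linkage, here is a minimal Bayes sketch; all the numbers (the 0.8/0.2 likelihoods, the honesty probabilities) are hypothetical and chosen only to show the direction of the effect. Catching the source out on the polar-bear claim lowers P(honest), which retroactively weakens the model-evidence claim you had already accepted.

```python
# Toy Bayes sketch (hypothetical numbers): catching a source lying about
# one claim lowers P(honest), which weakens its other claims too.

def p_warming_given_claim(p_warming, p_honest):
    """P(warming is a big deal | the source claims strong model evidence).

    An honest source mostly makes the claim only when it is true
    (probability 0.8 vs 0.2); a dishonest one makes it either way (0.8).
    """
    like_if_warming = p_honest * 0.8 + (1 - p_honest) * 0.8
    like_if_not = p_honest * 0.2 + (1 - p_honest) * 0.8
    numerator = p_warming * like_if_warming
    return numerator / (numerator + (1 - p_warming) * like_if_not)

print(p_warming_given_claim(0.5, p_honest=0.9))  # ~0.75 before the lie surfaces
print(p_warming_given_claim(0.5, p_honest=0.4))  # ~0.59 after trust drops
```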


In fact, come to think of it, there might not actually be an equilibrium where you stop updating your priors. If, say, you update all your beliefs based on how new evidence around one belief affects the probability that other evidence is strong, it is entirely possible you'd never quite stop. I suspect you would, but it might take a long time to settle from a single bit of evidence, and getting more than one bit of evidence a day might keep you really busy.

I suppose that is probably a good model for why so many people don't update beliefs much, or think through the implications of new evidence; just stopping at an arbitrary point has the strong advantage that you don't think yourself into a hole.
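A minimal sketch of that settling process, with an invented functional form and invented numbers: two credences where each one sets how much weight the other's evidence gets, iterated until the updates stop mattering. Even this tiny system takes on the order of a dozen rounds to settle after a single piece of evidence.

```python
# Toy fixed-point sketch (invented numbers and functional form): each
# credence sets how much weight the other's evidence gets, so one update
# keeps triggering another until the changes become negligible.

def step(c_a, c_b):
    evidence_a, evidence_b = 3.0, 0.5        # raw likelihood ratios
    lr_a = 1 + (evidence_a - 1) * c_b        # A's evidence discounted by doubt about B
    lr_b = 1 + (evidence_b - 1) * c_a        # and vice versa
    odds_a, odds_b = lr_a, lr_b              # flat 50/50 priors, so prior odds = 1
    return odds_a / (1 + odds_a), odds_b / (1 + odds_b)

c_a, c_b = 0.5, 0.5
for i in range(100):
    new_a, new_b = step(c_a, c_b)
    if abs(new_a - c_a) < 1e-6 and abs(new_b - c_b) < 1e-6:
        print(f"settled after {i} rounds at {new_a:.3f}, {new_b:.3f}")
        break
    c_a, c_b = new_a, new_b
```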


This is one reason why I think that focusing on 'internet misinformation' -- saying that the falseness is a property of the information -- is precisely the wrong approach. I can live very well with people who make mistakes and get things wrong from time to time. (Good thing, too, because that is all of us.) I can even live with fools, and all those who seemingly missed the meeting when common sense was being handed out. You make allowances. But liars need to be identified so we can all stop trusting what they say. Of course, since government is a prolific source of lies, they will never want to go along with this.


Exactly. There is also the problem that people always want to give wide allowances to their allies and friends, and get really touchy when others point out that those allies and friends are being really sloppy or misleading in their claims, or that they were wrong about specific things. We are the best at lying to ourselves, but our friends are a close second, especially "allies" we don't actually know personally. Pushing back on our own in-group is hard and unpopular, but equally or even more important than pushing back on the out-group. I think the American left especially fell prey to that over the past 10 years, and of course historically has adopted the strategy of "no enemies to the left," which... well, Orwell's Homage to Catalonia describes how well that works out.


But in practice the people with models do not talk about polar bears.

Now I listen to Pielke and nudge down my estimate of the optimal tax on net CO2 emissions, but that is also falling as batteries improve.


This seems more like non-information. A speech by Greta is just a zero. There is no reason to think she is lying (but even if she knew your estimate of the seriousness of ACC was correct and was exaggerating anyway, that should still not make you reduce your estimate of the seriousness).


I think Scott's definition of bias is fine, and works well from a statistical viewpoint. But rather than redefining bias, we can avoid Arnold's wordsmithing by making our goal to maximize the accuracy of our predictions rather than to minimize our bias. Introducing some bias through an ideology can increase our accuracy because it reduces our variance, e.g. our tendency to overfit our mental model to whatever events we happened to observe by chance. The bias-variance tradeoff from statistics tells us this is a good idea, provided our prior does not introduce so much bias that it swamps the benefit obtained from the reduction in variance. I'll expand on this below.

In statistics/machine learning/predictive modeling, there is a very famous "bias-variance tradeoff." Models allow us to use data x to predict an outcome y by using a function: yhat = f(x). We learn the function f by examining past pairs [(x1,y1),...,(xn,yn)]=[X,Y], i.e. the training data. The bias of our model is the expected difference between its predictions and the truth. The bias-variance tradeoff says that if we want to reduce the mean squared error of our predictions, we have two choices - either reduce the function's bias or reduce its variance. The value of machine learning is that it allows us to fit very complex models that are unbiased (in the statistical sense), such as deep learning models with billions of parameters. The model is less biased than, say, a linear regression, because it is flexible enough to fit all sorts of complex patterns, and so on average it should be correct. But because the function is so complex, the variance could be so high that it is worthless. So in machine learning, we look for ways to reduce the variance through various regularization methods. In Bayesian statistics, we use an informative prior for regularization.
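For reference, the standard decomposition behind that tradeoff, written in the comment's notation (the learned model is f; g stands for the true relationship, a symbol the comment leaves implicit; expectations are over random training sets [X,Y], and sigma^2 is the irreducible noise in y):

```latex
\mathbb{E}\big[(y - f(x))^2\big]
  = \underbrace{\big(\mathbb{E}[f(x)] - g(x)\big)^2}_{\text{bias}^2}
  + \underbrace{\mathbb{E}\big[\big(f(x) - \mathbb{E}[f(x)]\big)^2\big]}_{\text{variance}}
  + \underbrace{\sigma^2}_{\text{irreducible noise}}
```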

The bias and variance of the model are defined by viewing the training data, [X,Y], as the random variable. When we say on average the model should be correct, we mean that if we train our model on a random sample, we have no reason to expect the model's predictions to be too high vs too low. Some samples would cause our prediction to be too high, whereas others would cause it to be too low. A model with high variance will make wildly different predictions depending on what training data it happens to use. But the key point is that the training data is random, and we want to design a method that allows us to make a good prediction taking this uncertainty into account.

Bringing this back to the real world, the training data in our minds is our life experience. When we encounter a data point, like the number of heating-degree days in New York being 10% below normal, we can use this to make a prediction about global warming by filtering it through our mental model. Of course, our data point didn't have to be this one. We also could have learned the same statistic in Seattle, or the same statistic from a different point in time. If we come into this situation with a naive, unbiased blank-slate model, we would overfit to that data point. And so, depending on which data point we observe, we could draw wildly different conclusions. In other words, our model has too high a variance. On the other hand, we can come in with a strong prior based on, say, our scientific knowledge. Our model is now biased, because whichever data point we happen to observe will not affect our conclusion as much. But this also means it has lower variance. And we can see the benefit of that - we will not be fooled by noise into reaching incorrect conclusions.
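A minimal simulation of that point, with all numbers hypothetical: estimate a quantity from one noisy observation, either taking the observation at face value (the blank-slate model) or shrinking it toward a prior guess. The shrunken estimate is biased, because the prior guess is deliberately a bit off, yet its mean squared error comes out far lower.

```python
# Minimal simulation (hypothetical numbers): shrinking a noisy observation
# toward a prior adds bias but cuts variance, and the mean squared error
# of the "biased" estimate ends up lower.

import random

random.seed(0)
TRUE_VALUE = 1.0    # the quantity we are trying to estimate
NOISE_SD = 2.0      # a single observation is very noisy
PRIOR_MEAN = 0.0    # our prior guess, deliberately a bit off
SHRINK = 0.7        # weight on the prior (0 = blank slate, 1 = ignore the data)

def mean_squared_errors(n_trials=100_000):
    naive_mse, shrunk_mse = 0.0, 0.0
    for _ in range(n_trials):
        x = random.gauss(TRUE_VALUE, NOISE_SD)            # one noisy data point
        naive_mse += (x - TRUE_VALUE) ** 2                 # blank-slate estimate
        shrunk = SHRINK * PRIOR_MEAN + (1 - SHRINK) * x    # prior-weighted estimate
        shrunk_mse += (shrunk - TRUE_VALUE) ** 2
    return naive_mse / n_trials, shrunk_mse / n_trials

print(mean_squared_errors())  # roughly (4.0, 0.85)
```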


In 2001, I was more concerned with 9/11 than global warming (as it was called). However, I took note of the IPCC alarmist computer models, 7 of which were published with their predictions of global temps increasing at an alarming rate. A key issue was how CO2 and other greenhouse gases affect the atmospheric temp, as well as what the percentage of each gas is.

CO2 going up from some 250 to 400 ppm is a fact that I accept. Most alarmists don't have a good idea of how much of the air is composed of CO2, nor do they know that CO2 is making plants grow faster.

The power of science is in predictions which come true. Yet, as I look at the global warming hysteria being renamed climate change (somewhat luckily consistent with the UN IPCC), all can see that the temperatures predicted by the models are NOT accurate, nor even very close. And of course, no climate scientists have good theories on why there were Ice Ages or why those ages ended (with global warming!). Models of the future that can't explain the past seem particularly untrustworthy to me.

Scientific theories can only be proven false, thru making predictions which fail -- they can never be proven true. (Including "I think, therefore I am").

Is the globe getting warmer? Is it a crisis? Should warming be a top worry?

Why are the climate alarmists so wrong in their predictions?

Arnold is correct that most folk choose what to believe based mostly on WHO to believe. But I'm certainly hesitant to believe people, or orgs, who have lied in the recent past, as well as orgs who refuse to admit mistakes when they say some untruth and the truth then becomes known. (Maybe they made a mistake, maybe they deliberately lied.)

CO2 climate change is NOT a crisis if the alarmists are also opposing nuclear power -- the minimum-CO2 energy source (over its lifetime, including mining).

It's not a crisis if the "climate conferences" are in person, emitting unnecessary CO2, rather than held by some virtual means (Zoom, Skype, WebEx...).

Since the actions of the alarmists contradict their words, I look for other motivations they have -- including their own biases or priors, like an anti-capitalism, or anti-American, or anti-modernism bias.

Recently I read that pollution control has hugely reduced some ship pollution (sulphur emissions), which was causing the formation of high-altitude clouds, the ones most active in reflecting sunlight before it reaches Earth, and thus cooling. Sea surface temperatures are now more measurably increasing.

It's certainly possible that pollution was cooling the Earth as CO2 was warming it, so there was a balance. Which might now be rapidly changing -- not so much because of more CO2, but because of much less pollution.

So I'm biased against alarmism, and my prior is to Not Worry (worry level 3 of 10), but this new factoid (not yet verified) makes me more likely to worry more (4 of 10? or maybe 3.5?).

It slightly strengthens my support for the proposal that all govt parking lots should be covered with solar panels, with clear, transparent costs of installation, operation, electricity generation, maintenance, and end-of-life disposal.

But confirmation bias seems actually more prevalent in Not Seeing facts & analysis which argue against your current opinion. This is different from increasing the strength of your opinion based on new facts which others claim show your opinion is less correct.

Note that water vapor (clouds) is the single biggest factor involved in global warming, and we don't have good models of cloud cover. Some sites admit these facts about water vapor, including one from the UN; most avoid mentioning it, spending most of their time on CO2 instead.

I don't believe the sites that don't talk about water vapor and the huge uncertainty.

Am I biased against proven liars? Liars often tell the truth, but I don't have the time to check.


The question is not how much to worry. It is how much cost should we be willing to incur today to avoid predicted future harms. That is something that should change +/- with new information, most of which is coming in negative, mainly because the costs of avoiding CO2 emissions are falling so rapidly.

It's like the "How serious is COVID?" Wrong question. Right question: what are the costs and benefits of closing the XYZ school?


Yes. If it’s true that prior pollution was helping to cool, and less pollution means faster warming, we should be willing to spend more to reduce expected future warming harm, like more, and wider, fire breaks in Texas or CA, to reduce the costs of wildfires.

Similarly, as solar costs less, there should be more solar panels made, though with end-of-useful-life care included.


I've started becoming biased against people who refer to Bayesian reasoning.


That's sad -- almost everybody interested in finding out the truth will refer to Bayesian reasoning.

Though, because of this, many liars and manipulators will also refer to Bayesian reasoning (or rationalism, or effective altruism; or anti-racism, or anti-sexism, or anti-capitalism...).

(Or, maybe this was just a joke and I'm being too serious!)


I was mostly joking, but I do find it annoying when people use a highly specific mathematical concept to describe themselves but don't actually provide any of the inputs and probability weightings. It comes across as trying to make one's thinking seem superior to others', when I suspect it's really just gut feeling. A hidden form of status seeking.


Yea, that is often annoying.

I think you'd find about 7% of my comments include some specific guesstimate about a known unknown truth, which is maybe 600% more than the median commenter, with the mean being even less.


Let's say you had a prior in 1859 surrounding slavery. The Harper's Ferry Raid happens. Both sides seem to have concluded that it strengthened their priors. The abolitionists believed that it showed the brutality of slavery and the blacks' desire to rise up. The slavers concluded that the abolitionists did indeed want to take their slaves by violence if necessary, and of course that what was done at Harper's Ferry was necessary to prevent that.

In a way the two sides just talk past each other because the baseline assumptions about the nature of slavery are different.

If you have certain priors about Israel/Palestine then Oct 7th can easily slot into them.

What the Palestinians did is some version of Harper's Ferry or the Haitian slave uprising (which genocided all the whites), and the brutality just shows the desperation of an oppressed people. I think references to violent de-colonization movements that succeeded are made.

Or you can take my race realist prior that the Palestinians are doomed to have a failed state via genetics and history and lash out at the Jews whom they wrongly blame for those factors. Oct 7th then slots in much the same way the George Floyd riots did in America just infinitely worse.

But it's not clear that Oct 7th on its own drives a particular worldview. It can only be interpreted in light of underlying assumptions about why it happened.


Consider a corrupt judge. The judge could be bribed by someone else, or have his own motives to want an outcome different from that which the law requires, but either way, the cart is put before the horse, and the result comes first and the """explanation""" rationalizing it as best as possible from the available facts comes second. This is bias in the form of having a "results-oriented interpretive framework". The judge can update all his former priors on who did what when to be perfectly accurate, but it doesn't matter, because he's not updating the result, which is inflexible, and only arranging a case for that interpretation from a space of socially acceptable arguments which is very flexible.

People-my-team-like-are-always-right-no-matter-what bias works just like this. It's not that they can't update factual priors just fine. It's that they are going to refuse to update the conviction that, however bad those facts might happen to seem, the proper interpretation of them must always be fully exonerative, as actions which were entirely justified by the circumstances.


The devil is in the conditional priors here. Priors don't just include your probabilities for hypotheses by themselves, they include probabilities for how likely one hypothesis is given another.

So it's perfectly consistent with Bayesian methods to increase your probability that Palestinians can live at peace with Israel, *if* one followed some tortuous pattern of reasoning like "October 7 is evidence that Palestinians are even more badly treated than I thought, which means that once they achieve better conditions they will be content." Or something of that sort.

Of course no reasonable person would have that sort of conditional prior. So perhaps we should say that confirmation bias is a matter of having unreasonable conditional priors.
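A small numerical sketch of that point, with invented numbers: two observers share the same prior but hold different conditional priors, i.e. different likelihoods P(event | hypothesis), so the same event moves them in opposite directions.

```python
# Sketch of conditional priors (invented numbers): same prior, same event,
# opposite updates, because the likelihoods P(event | hypothesis) differ.

def posterior(prior, p_event_if_true, p_event_if_false):
    """Bayes' rule for a binary hypothesis after observing one event."""
    numerator = prior * p_event_if_true
    return numerator / (numerator + (1 - prior) * p_event_if_false)

prior = 0.5  # both observers start with the same credence in the hypothesis

# Observer 1: the event is what desperation looks like, so it fits the
# hypothesis reasonably well.
print(posterior(prior, p_event_if_true=0.3, p_event_if_false=0.1))   # ~0.75

# Observer 2: the event would be very unlikely if the hypothesis were true.
print(posterior(prior, p_event_if_true=0.05, p_event_if_false=0.4))  # ~0.11
```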


The most common symptom of bladder cancer is blood in urine. It is also true that people with blood in urine almost always don't have cancer. Doesn't mean it shouldn't be investigated. Mayo says it should. Note we do colonoscopies with no symptoms and the procedure has significant risks.

FWIW, I've had blood in urine and did nothing.
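A toy base-rate calculation shows how both of the comment's claims can hold at once; all numbers here are hypothetical illustrations, not medical figures. The symptom can be the disease's most common presentation while the posterior probability stays low, yet still high enough to justify investigating.

```python
# Toy base-rate calculation (hypothetical numbers, not medical figures):
# the posterior probability of cancer given the symptom stays low, but not
# low enough to ignore.

def p_cancer_given_blood(base_rate, p_blood_if_cancer, p_blood_if_healthy):
    numerator = base_rate * p_blood_if_cancer
    return numerator / (numerator + (1 - base_rate) * p_blood_if_healthy)

# Assumed: 0.5% base rate, 80% of cases show blood in urine, and 5% of
# people without cancer have blood in urine from some other cause.
print(p_cancer_given_blood(0.005, 0.80, 0.05))  # ~0.07, i.e. about 7%
```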


POLITICAL PRECEPT.

All that thou doest is right; but, friend, don't carry this precept
On too far, - be content, all that is right to effect.
It is enough to true zeal, if what is existing be perfect;
False zeal always would find finished perfection at once.

- Friedrich Schiller


In the doctor case, do we know whether it was his strong belief that the test indicated a serious condition, or his (possibly flawed) cost-benefit analysis of additional tests? For example, he does not take account of your time and discomfort, nor the cost of the test.


While there might be a theoretical difference, in practice, most commonly we see a lawyerly treatment of evidence: first we collect all possible evidence and accept and promote that which conforms to our agenda and then we critique all that does not conform, finding ways to bend it to our agenda.

In US history one example stands out for both its brazenness and its negative consequences, and that is James Madison's investigation into the history of the confederacy form of government. See: https://founders.archives.gov/documents/Madison/01-09-02-0001

Whilst seditiously scheming to overthrow the Articles of Confederation, Madison engaged Jefferson to purchase historical works on the governance of confederations. Jefferson dutifully sent him two trunks of books. Madison compiled his observations on these books into a short Notes on Ancient and Modern Confederacies: a list of confederations with an outline of their organization, a section on their federal authority, and a section for each entitled "Vices of the Constitution." The motivated reasoning could not be more brazen, and thus we read of the Swiss Confederation:

"Vices of the Constitution

1. disparity in size of Cantons

2. different principles of Governmt. in difft. Cantons

3. intolerance in Religion

4. weakness of the Union. The Common bailages wch. served as a Cement, sometimes become occasions of quarrels. Dictre. de Suisse.

In a treaty in 1683, with Victor Amadaeus of Savoy, it is stipulated, that he shall interpose as Mediator in disputes between the Cantons, and if necessary use force agst. the party refusing to submit to the sentence. Dictre. de Suisse—a striking proof of the want of authority in the whole over its parts."

So, material for lobbying for a strong central government was the only use Madison had for any of these books. We might thus aspire to reason and fair treatment of evidence, but motivated reasoning has been the American Way for a long time.


Great post! Thank you. I should review Bayesian probability. I forget the fundamentals on that.
