
The other problem is when people treat something as objectively probable but get the calculation completely wrong. Most often they confuse their ignorance with the odds. Consider: You're a student living in residence at university who likes to come home for weekends. But your parents are divorced, and while each of them lives an hour's train ride from the university, one lives north and one lives south of town. You decide that it would be cold-blooded to make a schedule alternating visits, but you also find it hard to decide between them. You conclude that you should let fate decide -- you will head to the train station and catch whatever train arrives first, north or south. Since your schedule is completely chaotic and you have no idea when you will arrive at the train station, you figure you will get a 50/50 probability of seeing Mom or Dad. You implement this strategy. You discover you are seeing Dad a lot and your mother hardly at all. What went wrong?

What went wrong was concluding that because your ignorance about when you would arrive at the train station was total, the probability of either result must be 50/50. But the probability is not determined by your ignorance -- it's determined by the train schedule. If trains run once an hour, northbound on the hour and southbound at ten past, you are going to be on the northbound train 5/6 of the time. A more complicated schedule with more trains is harder to calculate the odds for, but at no point can you substitute your ignorance of when you leave university on Fridays plus your ignorance of the train schedule for a 50/50 split. Yet many people who argue about things like election results are doing precisely that. They find some way to calculate something with great precision and think this means they understand the odds better. But buying a wristwatch, and suddenly having a much more precise estimate of when you are going to leave university on Friday, isn't going to get you to your mother's house more often. Nor will going to a psychiatrist to discover the hidden resentment of your mother that subconsciously determines when you arrive at the train station. I think a good bit of forecasting is about things that don't matter but that we don't know don't matter, without getting into whether or not the reason is subjective.
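A quick Monte Carlo sketch of that arithmetic (using the hypothetical schedule above -- northbound on the hour, southbound at ten past -- and a uniformly random arrival time):

```python
import random

def first_train(arrival_minute):
    """Return which direction's train departs next, given an arrival time within the hour.
    Hypothetical schedule: northbound leaves at :00, southbound at :10."""
    return "south" if arrival_minute < 10 else "north"

trials = 100_000
north = sum(first_train(random.uniform(0, 60)) == "north" for _ in range(trials))
print(f"share of trips heading north: {north / trials:.3f}")  # about 0.833, i.e. 5/6
```

The 50/50 intuition never enters; the split falls straight out of the schedule.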

If you read the latest from ACX on the probabilities for and against the lab leak hypothesis -- https://substack.com/inbox/post/142476535 -- I think you will find that a lot of the arguing is about the odds of things that aren't relevant, but alas we don't know which ones.


40% chance you’re right.


Even "objective" probability suffers from the reference class problem - why is the next coin flip similar enough to some set of other flips to include in the aggregate, but other flips (by magicians, say) probably aren't?

My model is that all probability is subjective - it's about one's knowledge/prediction of the universe, not about the universe itself. In actual reality, anything that happens has probability 100%, and anything that doesn't has probability 0%. Intermediate numbers only come in when we don't know whether a thing happened/will happen or not.

That said, I have no objection to common use of 'objective probability' as a descriptor for things that our brains classify as repeatable and similar enough that our subjective aggregate predictions (50% heads, but not saying which flips specifically) seem to work out well.


You forgot risk neutral probability used in finance :)


A man flips a coin ten times. It lands heads every time.

The statistician says "the probability that it will be heads on the next flip is 50%".

The gangster says, "that coin must be rigged."

Who's right?
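One way to make the disagreement concrete is Bayes' rule. A sketch with made-up numbers (assume the gangster starts from a 1% prior that the coin is double-headed):

```python
# Posterior probability that the coin always lands heads, after seeing ten heads.
# The 1% prior is purely illustrative.
prior_rigged = 0.01
p_ten_heads_if_rigged = 1.0
p_ten_heads_if_fair = 0.5 ** 10   # roughly 0.001

posterior_rigged = prior_rigged * p_ten_heads_if_rigged / (
    prior_rigged * p_ten_heads_if_rigged
    + (1 - prior_rigged) * p_ten_heads_if_fair
)
print(f"P(rigged | ten heads) = {posterior_rigged:.2f}")  # about 0.91
```

With that prior the gangster has a point; with a prior of one in a million, the statistician's 50% for the next flip is still very nearly right. The answer turns on the prior, which is where the subjectivity comes in.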

Anything worth trying to predict is going to involve uncertainty. It's not something that will be done over and over again until you can predict the future just by measuring the past, with no variance in the outcome.

Right now I'm trying to predict how other companies will price their products next year. It's a one-off event with limited historical data to go on. Is my opinion "subjective"? I guess by your definition. But my company pays me a lot to have that opinion because they think it is better than other people's subjective opinions, and I could give you a lot of math and reasoning for my subjective opinion.


Much of the contention/confusion in public discourse might be avoided simply by specifying (as Arnold does here) the kind of probability one has in mind; for example, "subjective probability," "conceptual probability," "empirical probability." In the case of empirical probability, one should also specify the evidence (experimental, historical, etc).


Probability is unavoidably subjective because it is about how much you know (conversely, how much you don't). It is the most exact representation of your opinion about something you do not have complete knowledge of. You should aspire to have the most accurate opinion about things, so as to make the best possible probability assignment. But subjectivity is unavoidable. I throw a die, the result is three, and I look at it: I give probability 1 to the three and zero to the rest. You give probability 1/6 to all the numbers. We are both right in our assignments. If you think of probability theory as the most developed system of knowledge representation, everything makes sense.


"My position on the issue is that sometimes we use probability to mean objective probability. ... And sometimes we use probability to mean subjective probability ... As long as we are clear on which definition we are using, it’s all fine."

That last sentence could apply to so many academic/philosophical disagreements.


The good news is that while single events may not be repeatable, events in general are, and we can keep track of subjective probability accuracy.

So if you say there’s a .0001% chance of Biden winning, and he wins — true, there’s no way of proving you were wrong. But score not just that one prediction, but all your predictions over time, like they do on Manifold. Now that’s much closer to the coin flip scenario (and anyone who gives Biden those odds will have a terrible score).
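A minimal sketch of that kind of scoring (a Brier score over made-up forecasts and outcomes):

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between stated probabilities and outcomes (1 = happened, 0 = didn't).
    Lower is better; always answering 50% scores exactly 0.25."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Two hypothetical track records over the same five events (all numbers invented).
overconfident = [0.000001, 0.9, 0.8, 0.3, 0.6]
calibrated    = [0.6,      0.7, 0.7, 0.4, 0.6]
happened      = [1,        1,   1,   0,   1]

print(brier_score(overconfident, happened))  # ~0.26 -- the near-zero call on an event that happened is costly
print(brier_score(calibrated, happened))     # ~0.13
```

One lucky or unlucky call proves little, but over many scored predictions the habitually overconfident forecaster falls behind.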


I more or less agree with AK, but I add that we are often kidding ourselves when we put numbers to what he calls "subjective probability".

Our minds and our language have a rich toolkit for dealing with different flavours of uncertainty. We only need numbers when we come to make bets.


One useful innovation for gauging one-off events is the betting market, e.g. the Iowa Electronic Markets. For any prospective event, these markets pool or "crowd source" thousands of probabilities into a single consensus number. That number might or might not be correct, but at least it's market clearing.


Isn't betting on sports subjective probability?


You need more Bayesian thinking here. Priors matter. Frequentists rely on large numbers to discern patterns, but life very often presents us with unique events, and it takes a Bayesian mindset to deal with that. I've written more about it here: https://lancelotfinn.substack.com/p/the-grand-coherence-chapter-2-how


Probability is best thought of as your own level of uncertainty about an actual event. Before the coin toss, your probability that the next flip is heads is 50%. Once the coin has been flipped and you step on it, the outcome is fixed -- it is 100% whatever it is -- but you don't know what the result is. If you bet on the flipped but unseen, unknown coin, you should still use 50%, because that's the best measure of your knowledge.

Decision analysis uses probability this way, with Bayes' theorem as the key step in updating your initial, subjective prior probability with new info. Everybody uses probability in every decision they make, estimating that what they do will very likely have the result they want, though of course we all find our 100% estimates are occasionally wrong, like typos when typing.
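A minimal sketch of the decision-analysis use, with assumed stakes (a $10 even-money bet on the flipped-but-unseen coin):

```python
def expected_value(p_win, payoff_win, payoff_lose):
    """Expected payoff of a bet, evaluated with the bettor's own subjective probability."""
    return p_win * payoff_win + (1 - p_win) * payoff_lose

# The coin has already landed, so the outcome is fixed -- but you haven't looked,
# so the probability that describes *your* knowledge is still 0.5.
print(expected_value(0.5, +10, -10))   # 0.0 -> even money is a fair bet for you
print(expected_value(0.5, +12, -10))   # 1.0 -> at better odds the bet is worth taking
```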

AI uses tons of probability, and the frequency of words appearing together, to create a chatbot that answers in a way a human probably would. And every month, the probability that a new commenter, like Guest User, is actually a bot goes up. As does the probability that I, or anyone, am using a bot to help write my comments. (I'm not, yet.)

AI will, like humans, fail to accurately predict the future, but its predictive accuracy will slowly increase and is likely already better than most humans' at choosing stocks to invest in.


You’re not wrong. But in fact there is a way to resolve it. Sorta/mostly.

It’s the name of Bryan Caplan’s Substack!

Of course it doesn't change anything for the single event, but betting repeatedly over a bunch of different events gives you a decent sense of whose subjective probability assessments are more accurate than others'.
