The other problem is when people think that something is objectively probable but get the calculation completely wrong. Most often they confuse their ignorance with the odds. Consider: You're a student living at university who likes to come home for weekends. But your parents are divorced, and while each of them lives one hour's train ride away from the university, one lives north and one lives south of town. You decide that it would be cold-blooded to make a schedule alternating visits, but also find it hard to decide between them. You conclude that what you should do is let fate decide -- you will head to the train station and catch whatever train arrives first, north or south. Since your schedule is completely chaotic, and you have no idea when you will ever arrive at the train station, you will get a 50/50 probability of seeing Mom or Dad. You implement this strategy. You discover you are seeing Dad a lot and your mother hardly at all. What went wrong?

What went wrong was your concluding that because your ignorance was total about what time you would arrive at the train station, that would imply a 50/50 probability of getting either result. But the probability is not determined by your ignorance of it -- it's determined by the train schedule. And if trains run once an hour, on the hour going north and at 10 after the hour going south, you are going to be on the northbound train 5/6ths of the time. A more complicated train schedule with more trains is going to be harder to calculate the odds for, but at no time can you substitute your ignorance of when you leave university on Fridays plus your ignorance of train schedules for a 50/50 split. But many people who argue about things like election results are doing precisely that. They find some way to calculate something with great precision, and think that this should let them understand the odds better. But buying a wristwatch and suddenly having a much more precise way to estimate when you are going to leave university on Friday isn't going to get you to your mother's house more often. Nor will going to a psychiatrist to discover the hidden reasons you have for resenting your mother, subconsciously determining when you arrive at the train station. I think that a good bit of forecasting is all about things that don't matter but that we don't know don't matter, without getting into whether or not the reason is subjective.
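A quick simulation makes the schedule arithmetic concrete (a sketch, using the hourly "on the hour north, ten past south" schedule from the example):

```python
import random

def friday_visit(arrival_minute):
    """Return which train departs first for an arrival at the given
    minute within the hour: north leaves at :00, south at :10."""
    if 0 < arrival_minute <= 10:
        return "south"  # the :10 southbound departs before the next :00
    return "north"      # otherwise the next departure is the northbound

# A "completely chaotic" schedule: arrival time uniform over the hour.
random.seed(0)
trips = [friday_visit(random.uniform(0, 60)) for _ in range(100_000)]
print(trips.count("north") / len(trips))  # ≈ 5/6, not 1/2
```

Total ignorance of your arrival time gives a uniform distribution over the hour, and it is the uniform distribution plus the timetable, not the ignorance itself, that fixes the odds at 5/6.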

That post from ACX made it pretty clear that even attempting formal analysis of probabilities of unique events is highly dependent on the subjective definition of the probabilities of the precursor events, as well as being subject to all the usual problems of probabilities like distinguishing between dependent and independent events.

I didn't get that. If you arrive anytime from eleven minutes after the hour until one minute before the hour, you will go to Dad's. You never gave mom a 50/50 fair shake.

Mom is just happy there was ever a ten-minute window when she had a chance. Mom will never stop hoping; that is her tragedy.

Maybe you could even step over and get a haircut during that ten-minute window.

It's framing, the starting point for the probability. What you say is true but it could just as easily be reversed. If one doesn't know that, it is 50-50.

Take another example. A six-sided die has two of one number and none of another. You don't know which is which but are given the chance to bet on either of those two numbers (and only those numbers). Are your chances zero, 1/6, or 2/6?
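One way to unpack the bet: before you learn which number is doubled, each bet is an even mixture of a 2/6 world and a 0/6 world. A small enumeration (using 3 and 5 as the two mystery numbers, purely for illustration):

```python
from fractions import Fraction

def win_chance(bet, doubled, absent):
    """Chance of rolling `bet` on a die where `doubled` appears twice
    and `absent` not at all (the other numbers appear once each)."""
    faces = [1, 2, 3, 4, 5, 6]
    faces.remove(absent)
    faces.append(doubled)
    return Fraction(faces.count(bet), 6)

# Average over the two equally likely arrangements of the die:
for bet in (3, 5):
    chance = (Fraction(1, 2) * win_chance(bet, 3, 5)
              + Fraction(1, 2) * win_chance(bet, 5, 3))
    print(bet, chance)  # each bet wins with overall probability 1/6
```

So the answer depends on what the probability is about: 2/6 or 0 describes the die once you know it; 1/6 describes your state of knowledge before you do.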

I'm going to need to consult with Marilyn Vos Savant or Martin Gardner before I take that confusing bet!

As to the framing, her point was that the student thought he had set up the parameters of the scheme so as to make it random. He was not guessing about the next day's weather.

Even "objective" probability suffers from the reference class problem - why is the next coin flip similar enough to some set of other flips to include in the aggregate, but other flips (by magicians, say) probably aren't?

My model is that all probability is subjective - it's about one's knowledge/prediction of the universe, not about the universe itself. Actual reality is 100% likely that anything which happens, happens, and 0% anything that doesn't. Intermediate numbers only come in when we don't know whether a thing happened/will-happen or not.

That said, I have no objection to common use of 'objective probability' as a descriptor for things that our brains classify as repeatable and similar enough that our subjective aggregate predictions (50% heads, but not saying which flips specifically) seem to work out well.

A pop-sci book I'm reading takes the view that the probabilities of the wavefunction in quantum physics are not a description of reality, nor yet a description, per Heisenberg, of the "status of our knowledge" -- as though the facts are there, pointed to by the wavefunction, but not yet fully known by us -- but rather merely a description of "what expectations we ought to have about the outcomes of observations or measurements."

The curious thing about this view (by no means universal, I understand) is that "it happened" (we saw/measured it) is not thought to have greater claim to reality (definition unclear) than what did not.

A man flips a coin ten times. It lands heads every time.

The statistician says "the probability that it will be heads on the next flip is 50%".

The gangster says, "that coin must be rigged."

Who's right?

Anything worth trying to predict is going to involve uncertainty. It's not something that is going to be done over and over and over again until you can just predict the future by measuring the past and there is no variance in the outcome.

Right now I'm trying to predict how other companies will price their products next year. It's a one-off event with limited historical data to go off of. Is my opinion "subjective"? I guess by your definition. But my company pays me a lot to have that opinion because they think it is better than other people's subjective opinions, and I could give you a lot of math and reasoning for my subjective opinion.

Who's right depends on the circumstances. If I took a coin from my wallet, flipped it ten times and got all heads, I would be very surprised by my lucky run, but would not conclude that the coin is rigged -- even if I were a gangster. If a random co-traveler on a night train offered to play heads-or-tails with me using his coin, though, and I flipped it ten times to try it out and got all heads, I would reasonably be suspicious that he gave me a rigged coin. In other words, I would be less surprised by a co-traveler who offers to play heads-or-tails being a confidence man (event C), than I would be by flipping 10/10 with a fair coin. A Bayesian would formulate this as "your prior probability P(C) > 1/1024", but this is merely a rewording which adds no information. Being surprised is a subjective feeling in the same sense as any perception is subjective. "Probability" either denotes one's estimate of how surprised one would reasonably be by one event compared to another, or it denotes a technical term in one of the scientific theories [in the sense of Russo (2004)] that have been constructed to explain phainomena in this area; frequentist probability is one such theory. I specify "reasonably surprised" because of things like the birthday paradox, where common gut estimates fail; this is completely analogous to trompe-l'oeil and is resolved by careful observation and reasoning. Perhaps one could call the first meaning above (one's estimate of how surprised one would be) "subjective probability" and the technical terms "objective probabilities", but I think this is not helpful to understanding anything and merely confusing (as are nearly all uses of the subjective/objective distinction).
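The prior-dependence is easy to make explicit. A minimal sketch of the Bayesian rewording (assuming, for simplicity, that a rigged coin always lands heads):

```python
def posterior_rigged(prior, n_heads=10):
    """P(rigged | n heads in a row), assuming a rigged coin always
    lands heads and a fair coin lands heads half the time."""
    p_heads_if_fair = 0.5 ** n_heads     # 1/1024 for ten heads
    num = prior * 1.0                    # rigged coin: heads for sure
    return num / (num + (1 - prior) * p_heads_if_fair)

# A coin from my own wallet vs. a stranger's coin on a night train:
print(posterior_rigged(prior=1e-6))   # ≈ 0.001: almost surely just luck
print(posterior_rigged(prior=0.01))   # ≈ 0.91: probably a confidence man
```

The arithmetic adds nothing to the verbal account: all the work is done by the prior, i.e. by how surprised one would reasonably be to meet a confidence man in the first place.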

All scientific theories of probability necessarily abstract from reality, so it is as pointless to complain about the statistician who gives you the result of an "exercise" performed within the framework of frequentist theory as it is to tell a geometer demonstrating a proof from Euclid on a whiteboard that the proof does not work because his sharpie cannot draw infinitely thin lines. The geometer is right about the proof, the complainer is right about the geometer's sharpie, but his complaint is stupid. Of course if the geometer claimed that his sharpie could, indeed, draw infinitely thin lines, or that line thickness was irrelevant to the result of a geometrical calculation, he would be stupid too.

Regarding probabilities of one-off events such as the outcome of a given presidential election or corona originating from a lab leak or not, one can sensibly talk about how surprised one would be after the event compared to how surprised one would be after flipping coins or rolling dice. In other words, people have gut estimates of how surprised they would be by learning where corona really originated from, or who won the election in November. Whether these gut estimates are any good, and whether, if they happened to be good, this was because they were reasonable or because of a lucky cancellation of errors and ignorances, is another matter entirely. To use Laura's train example above, it would be reasonable for me to be as surprised by going to Dad the first time I tried the scheme as I would be by flipping heads on a fair coin, because of my ignorance of the train schedule, but if I found myself going to Dad 40 times out of 50 in a year of Friday visits, it would not be reasonable for me to be as surprised by this as by flipping 40 heads out of 50: while I may be ignorant of the train schedule, I know that it is fixed rather than shifting around at random, and it would be reasonable for me to consider the possibility that the train schedule might be throwing me off before concluding that I had a one-in-a-hundred-thousand "lucky" run. On the other hand, if I lived in a country where trains don't run on a schedule, I might consider the possibility that I subconsciously slow down if I see a Mom-bound train coming to the station as I am walking towards it, or that trains return by a different route (because trains don't generally accumulate in one place indefinitely). All these considerations fall under the heading of correspondence rules between scientific theories of probability and reality and cannot be formalized.
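The "one-in-a-hundred-thousand" figure for 40 Dad-trips out of 50 under a genuinely 50/50 scheme checks out:

```python
from math import comb

# Chance of at least 40 heads in 50 flips of a fair coin.
p = sum(comb(50, k) for k in range(40, 51)) / 2 ** 50
print(p)  # ≈ 1.2e-5, i.e. roughly one in a hundred thousand
```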

In reference to Laura's example above, subjective isn't a synonym for ignorant, or throwing darts at a board. Your company isn't just paying you for your opinion. If they just wanted a random guess they could save a lot of money (I hope) by just throwing dice to make the price determination. In theory the research and analysis should make your opinion better than a completely random decision.

Much of the contention/confusion in public discourse might be avoided simply by specifying (as Arnold does here) the kind of probability one has in mind; for example, "subjective probability," "conceptual probability," "empirical probability." In the case of empirical probability, one should also specify the evidence (experimental, historical, etc).

Human intellectual history has just a few occasions in which someone produces an incredible breakthrough by means of an insight: realizing that things that appear different, and which are called by different names, are actually manifestations of the same underlying phenomenon.

On the other hand, there seem to be countless occasions in which otherwise very smart people get stuck for ages in arguing about the 'proper' definition of a single word which they are trying to use to describe different things, when most of the confusion could have been cleared up by just accepting the differences and agreeing on the linguistic convention to use different or modified words to name them.

"Frequentist" probability vs "prediction uncertainties aggregation" probability (i.e., betting odds) are just different things and should be called by different names.

Euclid used a different word to denote a geometrical point, σημειον, than the word Greek philosophers had been using to denote a point in their discussions, στιγμα, probably to avoid pulling in all the philosophical cruft that had accumulated around the latter. However, today we are not often confused about the meaning of the word "point", as the context makes it clear whether it is being used as a technical term of the scientific theory of geometry, or in an everyday sense. Perhaps the confusion around "probability" arises because the status of frequentism, Bayesianism etc. as scientific theories (models) rather than accurate descriptions of reality is much less clear in our minds than the status of plane geometry as the former rather than the latter.

I think the problem is words. Human instincts for using (and abusing) language to argue with each other (and over the 'right' definitions of those words themselves) did not evolve to help with rational dialectical discourse useful for discovering objective truths, but to help win at playing various kinds of social games. One sees this especially in "the law" all the time, because the power to change the accepted meaning of the words in "the rules" is real power on the same level as making or repealing rules altogether.

By some miracle humanity has occasionally been able to drag itself out of the entropic quicksand and harness these abilities and discipline their use by dumping some of the distorting psychological baggage, I think mainly by inventing new words or modifiers to make precise distinctions, or getting away from words entirely and using symbols and increasingly formalistic rules for their operation and manipulation, and in general becoming aware of and consciously attempting to avoid the typical human language problems. Think of the long history of transitioning from primitive instincts of "arrangements of words useful for 'persuasion' or at least getting other human beings to go along with what you want them to think and do," to "valid procedural stackings of formal logic applied to simplified and artificial concepts and axioms."

There is a kind of recurrent theme running through thousands of years of human intellectual history in which some more generalized version of, "Shut up and calculate" and "Nullius in verba" (i.e., "ditch or distrust the words whenever possible") was the only good 'answer' to getting unstuck from the mire of inherently hopeless human verbal argumentation.

Every time people start arguing about things in terms of words, it's like it opens the gates to hell and lets all the epistemically-distorting demons out to corrupt the quest for truth because language itself is just far too enabling of all that jockeying and social-game playing and the temptation to get drawn into those games is just instinctively compelling, especially for people with strong rhetorical skills.

Words allow word games, word games allow social games, and social games are epistemic contaminators, and like addictive drugs, an opportunity for pleasurable self-poisoning that an otherwise useful brain can't resist doing to itself.

Another guess as to how this tendency was sometimes overcome could be that civilizations evolved institutions where the status game could be played more successfully by impressing or persuading one's reference social group by means other than words, for example, by accomplishment in formal symbolic manipulations, or by success in some material, real-world achievements.

On the other hand, there's no reason why institutions can't push in the opposite direction back into the abyss where people are incentivized to use words for pure, truth-eroding game playing. I hope we never invent one of these "social media platforms" that might create such a state of affairs.

I agree with almost everything you wrote above, except for the part about getting away from words using symbols etc. Broadly speaking, we can never get away from words. Reality is infinitely rich. In order to think and talk about it, finite beings such as we are must use symbols that refer to parts of it, and words are one common kind of communicable symbol (our brains, as well as animals', also use non-verbal symbols internally, but those are not directly communicable). There is nothing specific to words as distinct from other kinds of communicable symbols which singles them out as uniquely liable to damage by social games. Any symbols widely used for communication are liable to it, as the well-known phenomenon of the euphemism treadmill converting precise medical terms into common expletives demonstrates. Symbols used in restricted contexts are less liable to damage by virtue of isolation and restriction, not because of some special quality they possess in themselves. Degeneration of symbols is a moral problem, and moral problems can never be completely solved by technical means. The remedy for it is ultimately moral too: self-discipline and institutions which encourage and reward it while discouraging and punishing violators.

Very well said. Yes, restricted context and isolation is a better way to express the idea. I like the way you put it as a permanent moral imperative of every generation to fight against ineradicable degenerative tendencies. 正名 forever.

Thank you. I vacillated over whether to refer to Confucius' Great Learning and the rectification of names in my previous comment.

One thing I want to add which seems important to me is that we are only able to use words productively by harnessing the motivational power of the very social games which (if not held in check) damage them. It is thus a double bind. And considering that we can only fight against the tendencies by using both words and said motivational power, it is a triple bind. It is a challenge worthy of the civilized man.

Probability is unavoidably subjective because it is about how much you know (conversely, how much you are ignorant of). It is the most exact representation of your opinion on something you do not have complete knowledge about. You should aspire to have the most accurate opinion about things, so as to make the best possible probability attribution. But subjectivity is unavoidable. I throw a die, the result is three and I look at it; I give probability 1 to the three, and zero to the rest. You give probability 1/6 to all the numbers. We are both right in our attributions. If you think of probability theory as the most developed system of knowledge representation, everything makes sense.

That was entertaining and easily digested. I wasn't able to view his video so I will guess he built a device for flipping coins that always delivers heads.

His view is that probability is a relationship in logic only.

"Probability is not real ... It doesn’t exist separate from the mind that entertains it."

I may be mistaken, but it seems to me that in order to demonstrate that there is no reality to the probability of a coin landing heads or tails being 1/2, he built a device rather than just saying "you never specified 'fair toss'"?

Whereas, the notion of a coin flip is precisely something useful because no one has to entertain any thoughts about it - there are none to entertain.

"My position on the issue is that sometimes we use probability to mean objective probability. ... And sometimes we use probability to mean subjective probability ... As long as we are clear on which definition we are using, it’s all fine."

That last sentence could apply to so many academic/philosophical disagreements.

The good news is that while single events may not be repeatable, events in general are, and we can keep track of subjective probability accuracy.

So if you say there’s a .0001% chance of Biden winning, and he wins — true, there’s no way of proving you were wrong. But score not just that one prediction, but all your predictions over time, like they do on Manifold. Now that’s much closer to the coin flip scenario (and anyone who gives Biden those odds will have a terrible score).

You cook ten new recipes, predicting that there is a 90% chance each will be wonderful. The first is appalling, the rest good. The nine successes do not show that the first prediction was correct. Outcomes of independent, non-repeatable events…

They don’t show that “the first prediction is correct” — what they show is that you are perfectly calibrated at assigning probabilities to your dishes.

This is not just an academic discussion. Knowing if the probability is actually 90% could be important if you are a chef, starting a restaurant, etc.
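The scoring idea can be made concrete with a toy scorecard; the numbers below just restate the recipe example. Calibration asks whether events assigned 90% come true about 90% of the time; a proper score such as the Brier score rewards both calibration and sharpness:

```python
# Ten dishes, each predicted 90% likely to be wonderful;
# one flop and nine successes, as in the recipe example.
predictions = [0.9] * 10
outcomes = [0] + [1] * 9

# Calibration: among events given ~90%, what fraction came true?
observed = sum(outcomes) / len(outcomes)
print(observed)  # 0.9 -- matches the stated 90%

# Brier score: mean squared error (0 is perfect; always
# predicting 0.5 scores 0.25 no matter what happens).
brier = sum((p - o) ** 2 for p, o in zip(predictions, outcomes)) / len(outcomes)
print(brier)  # ≈ 0.09
```

No single outcome vindicates or refutes a 90% claim, but a long track record scored this way can.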

One useful innovation for gauging one-off events is the betting market, e.g. the Iowa Electronic Markets. For any prospective event, these markets pool or "crowd-source" thousands of subjective probabilities into a single consensus number. That number might or might not be correct, but at least it's market-clearing.

You need more Bayesian thinking here. Priors matter. Frequentists rely on large numbers to discern patterns, but life very often presents us with unique events, and it takes a Bayesian mindset to deal with that. I've written more about it here: https://lancelotfinn.substack.com/p/the-grand-coherence-chapter-2-how

The empirical updating of Bayesian priors is only logically valid when collecting sufficient numbers of new observations to update the frequencies in the statistical distributions of one's patterns.

You can't predict without a good pattern, and you can't notice good patterns without lots of data. If you don't have patterns based on lots of data, what happens next is not just random but worse, because it has an unknown distribution. You may not be able to predict which side the fair coin will land on next, but you can know the odds are 50/50, and know the odds of X many heads in the next Y flips. But if I hand you an unfair coin with an unknown internal weight distribution, then you not only can't tell the next flip, you can't say anything at all yet about the next hundred flips, until you start collecting lots of data to learn the frequencies making up the pattern.
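The "collect lots of data" step has a standard Bayesian form: put a Beta prior on the unknown heads probability and update it flip by flip. A minimal sketch (the uniform Beta(1, 1) prior is an assumption, standing in for total ignorance of the weighting):

```python
def update(alpha, beta, flips):
    """Conjugate update of a Beta(alpha, beta) prior on the heads
    probability, given a list of flips (1 = heads, 0 = tails)."""
    heads = sum(flips)
    return alpha + heads, beta + len(flips) - heads

alpha, beta = 1, 1  # uniform prior: no idea what the weight is
alpha, beta = update(alpha, beta, [1, 1, 1, 0, 1, 1, 1, 1, 0, 1])
print(alpha / (alpha + beta))  # posterior mean = 0.75 after 8 heads in 10
```

With only ten flips the posterior is still wide; only a large sample narrows it, which is the frequentist point restated in Bayesian terms.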

Again, this is all really arguing about the right use of descriptive language, but here's an example. In multivariable mathematics you cannot do certain operations and produce sensible results. You can't add 5 apples to 3 oranges and get 8 "appleoranges". You can make new dimensionless numbers like "apples + oranges", but that can't be expressed in terms of "apple" units.

Likewise, when one is mixing models based on frequentist patterns - for which it is appropriate to talk in terms of "probability" - with big question marks of unknown unknowns, then one shouldn't end up with a result that is also expressed in terms of "probability", which is promoting a model above its empirically justified rank.

Fine analysis, yet with a failure to get to the core issue: how to make better decisions under uncertainty. All discussions and evaluations of decision making involve the choices available and unknowns, both known unknowns and unknown unknowns.

Tho it’s certainly true that quantifying a guesstimate so as to combine it with some frequentist stats and other guesstimates allows promotion of models far above their empirically justified ranks. Most experts do so, often. Including in areas they have little info about.

Probability is best thought of as your own level of uncertainty about an actual event. Before the coin toss, your probability of heads on the next flip is 50%. If you step on the flipped coin, it becomes a result, 100%. But you don’t know what the result is. If you bet on the flipped but unseen, unknown coin, you should use 50%, because that’s the best measure of your knowledge.

Decision analysis uses probability this way, with Bayes theorem the key step in updating your initial, subjective, prior probability, with new info. Everybody uses probability in every decision they make, estimating that what they do will very likely have the result they want, tho of course we all find our 100% estimates are occasionally wrong, like typos when typing.

AI is using tons of probability, and frequency of words together, to create a chat bot which answers in a way that a human probably would. And every month, the probability that a new commenter, like Guest User, is actually a bot, that probability is going up. As is the probability that I, or any, am using a bot to help write my comments. (I’m not, yet.)

AI will, as do humans, fail to accurately predict the future, but their accuracy in predictions will be slowly increasing and is likely already better than most humans at choosing stocks to invest in now.

You’re not wrong. But in fact there is a way to resolve it. Sorta/mostly.

It’s the name of Bryan Caplan’s Substack!

Of course it doesn’t change anything for the single event, but betting on it repeatedly over a bunch of different events gives you a decent sense of whose subjective probability assessments are more accurate than others.

If you read the latest from ACX on probabilities for and against the lab leak hypothesis -- https://substack.com/inbox/post/142476535 -- I think that you will find that a lot of the arguing is about the odds of things that aren't relevant, but alas we don't know which ones.

I read this piece -- https://johnhorgan.org/cross-check/bayes-theorem-and-bullshit -- which is respectful of Bayes's insight and the math, but less so of people's choice of premise. Kind of gets at the same thing as your clever example?

And yet if there is only one trip, the reasoning is perfectly sound.

I don't get how the train example relates to election prediction.

Sorry I didn't state it clearly enough.

No, you didn’t. What is the actual bet? “On either of those two”

You stated it clearly, I just can’t wrap my head around it. I want to multiply but that pesky zero ruined that.

40% chance you’re right.

You forgot risk neutral probability used in finance :)

A man flips a coin ten times. It lands heads every time.

The statistician says "the probability that it will be heads on the next flip is 50%".

The gangster says, "that coin must be rigged."

Whose right?

Anything worth trying to predict is going to involve uncertainty. It's not something that is going to be done over and over and over again until you can just predict the future by measuring the past and there is no variance in the outcome.

Right now I'm trying to predict how other companies will price their products next year. It's a one off event with limited historical data to go off of. Is my opinion "subjective"? I guess by your definition. But my company pays me a lot of have that opinion because they think it is better than other peoples subjective opinions, and I could give you a lot of math and reasoning for my subjective opinion.

Who's right depends on the circumstances. If I took a coin from my wallet, flipped it ten times and got all heads, I would be very surprised by my lucky run, but would not conclude that the coin is rigged -- even if I were a gangster. If a random co-traveler on a night train offered to play heads-or-tails with his coin, though, and I flipped it ten times to try it out and got all heads, I would reasonably suspect that he gave me a rigged coin. In other words, I would be less surprised by a co-traveler who offers to play heads-or-tails being a confidence man (event C) than I would be by flipping 10/10 with a fair coin. A Bayesian would formulate this as "your prior probability P(C) > 1/1024", but this is merely a rewording which adds no information. Being surprised is a subjective feeling in the same sense as any perception is subjective. "Probability" either denotes one's estimate of how surprised one would reasonably be by one event compared to another, or it denotes a technical term in one of the scientific theories [in the sense of Russo (2004)] that have been constructed to explain phainomena in this area; frequentist probability is one such theory. I specify "reasonably surprised" because of things like the birthday paradox, where common gut estimates fail; this is completely analogous to trompe-l'oeil and is resolved by careful observation and reasoning. Perhaps one could call the first meaning above (one's estimate of how surprised one would be) "subjective probability" and the technical terms "objective probabilities", but I think this is not helpful to understanding anything and merely confusing (as are nearly all uses of the subjective/objective distinction).
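The Bayesian rewording can be made concrete with a small sketch. The 1% prior below is purely illustrative, and for simplicity the rigged coin is assumed to always land heads:

```python
# Posterior probability that the co-traveler's coin is rigged, after
# observing n_heads heads in a row. The prior is an illustrative assumption.
def posterior_rigged(prior_rigged, n_heads, p_heads_if_rigged=1.0):
    like_rigged = p_heads_if_rigged ** n_heads   # P(data | rigged)
    like_fair = 0.5 ** n_heads                   # P(data | fair) = 1/1024 for 10 flips
    num = like_rigged * prior_rigged
    return num / (num + like_fair * (1 - prior_rigged))

# A prior P(C) of just 1% already swamps the 1/1024 chance of a fair run:
print(posterior_rigged(0.01, 10))  # ≈ 0.912
```

The point of the sketch is only that once P(C) exceeds 1/1024, the "he's a con man" explanation dominates, which is exactly the prior-probability rewording of the surprise comparison.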

All scientific theories of probability necessarily abstract from reality, so it is as pointless to complain about the statistician who gives you the result of an "exercise" performed within the framework of frequentist theory as it is to tell a geometer demonstrating a proof from Euclid on a whiteboard that the proof does not work because his sharpie cannot draw infinitely thin lines. The geometer is right about the proof, the complainer is right about the geometer's sharpie, but his complaint is stupid. Of course if the geometer claimed that his sharpie could, indeed, draw infinitely thin lines, or that line thickness was irrelevant to the result of a geometrical calculation, he would be stupid too.

Regarding probabilities of one-off events such as the outcome of a given presidential election or corona originating from a lab leak or not, one can sensibly talk about how surprised one would be after the event compared to how surprised one would be after flipping coins or rolling dice. In other words, people have gut estimates of how surprised they would be by learning where corona really originated from, or who won the election in November. Whether these gut estimates are any good, and whether, if they happened to be good, this was because they were reasonable or because of a lucky cancellation of errors and ignorances, is another matter entirely. To use Laura's train example above, it would be reasonable for me to be as surprised by going to Dad the first time I tried the scheme as I would be by flipping heads on a fair coin, because of my ignorance of the train schedule, but if I found myself going to Dad 40 times out of 50 in a year of Friday visits, it would not be reasonable for me to be as surprised by this as by flipping 40 heads out of 50: while I may be ignorant of the train schedule, I know that it is fixed rather than shifting around at random, and it would be reasonable for me to consider the possibility that the train schedule might be throwing me off before concluding that I had a one-in-a-hundred-thousand "lucky" run. On the other hand, if I lived in a country where trains don't run on a schedule, I might consider the possibility that I subconsciously slow down if I see a Mom-bound train coming to the station as I am walking towards it, or that trains return by a different route (because trains don't generally accumulate in one place indefinitely). All these considerations fall under the heading of correspondence rules between scientific theories of probability and reality and cannot be formalized.
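Laura's train example lends itself to a quick simulation. The schedule below (hourly trains, northbound on the hour, southbound at ten past) follows her post; the uniformly random arrival time is my assumption standing in for her "completely chaotic" schedule:

```python
import random

# Northbound (Dad) trains leave on the hour, southbound (Mom) at ten past.
# You arrive at a uniformly random minute and board whichever comes first.
random.seed(1)

def destination(arrival_minute):
    # Arriving in the first ten minutes of the hour catches the :10 southbound;
    # otherwise the next train to come is the following hour's northbound.
    return "Mom" if arrival_minute <= 10 else "Dad"

trials = 100_000
dad = sum(destination(random.uniform(0, 60)) == "Dad" for _ in range(trials))
print(dad / trials)  # close to 5/6, despite total ignorance of arrival times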

In reference to Laura's example above, subjective isn't a synonym for ignorant, or throwing darts at a board. Your company isn't just paying you for your opinion. If they just wanted a random guess they could save a lot of money (I hope) by just throwing dice to make the price determination. In theory the research and analysis should make your opinion better than a completely random decision.

"Anything worth trying to predict is going to involve uncertainty."

If there is no uncertainty, it isn't a prediction.

Much of the contention/confusion in public discourse might be avoided simply by specifying (as Arnold does here) the kind of probability one has in mind; for example, "subjective probability," "conceptual probability," "empirical probability." In the case of empirical probability, one should also specify the evidence (experimental, historical, etc).

Human intellectual history has just a few occasions in which someone produces an incredible breakthrough by means of an insight: realizing that things that appear different, and are called by different names, are actually manifestations of the same underlying phenomenon.

On the other hand, there seem to be countless occasions in which otherwise very smart people get stuck for ages in arguing about the 'proper' definition of a single word which they are trying to use to describe different things, when most of the confusion could have been cleared up by just accepting the differences and agreeing on the linguistic convention to use different or modified words to name them.

"Frequentist" probability vs "prediction uncertainties aggregation" probability (i.e., betting odds) are just different things and should be called by different names.

Euclid used a different word to denote a geometrical point, σημειον, than the word Greek philosophers have been using to denote a point in their discussions, στιγμα, probably to avoid pulling in all the philosophical cruft that had accumulated around the latter. However, today we are not often confused about the meaning of the word "point", as the context makes it clear whether it is being used as a technical term of the scientific theory of geometry, or in an everyday sense. Perhaps the confusion around "probability" arises because the status of frequentism, Bayesianism etc. as scientific theories (models) rather than accurate descriptions of reality is much less clear in our minds than the status of plane geometry as the former rather than the latter.

I think the problem is words. Human instincts for using (and abusing) language to argue with each other (and over the 'right' definitions of those words themselves) did not evolve to help with rational dialectical discourse useful for discovering objective truths, but to help win at playing various kinds of social games. One sees this especially in "the law" all the time, because the power to change the accepted meaning of the words in "the rules" is real power on the same level as making or repealing rules altogether.

By some miracle humanity has occasionally been able to drag itself out of the entropic quicksand and harness these abilities and discipline their use by dumping some of the distorting psychological baggage, I think mainly by inventing new words or modifiers to make precise distinctions, or getting away from words entirely and using symbols and increasingly formalistic rules for their operation and manipulation, and in general becoming aware of and consciously attempting to avoid the typical human language problems. Think of the long history of transitioning from primitive instincts of "arrangements of words useful for 'persuasion' or at least getting other human beings to go along with what you want them to think and do," to "valid procedural stackings of formal logic applied to simplified and artificial concepts and axioms."

There is a kind of recurrent theme running through thousands of years of human intellectual history in which some more generalized version of, "Shut up and calculate" and "Nullius in verba" (i.e., "ditch or distrust the words whenever possible") was the only good 'answer' to getting unstuck from the mire of inherently hopeless human verbal argumentation.

Every time people start arguing about things in terms of words, it's like it opens the gates to hell and lets all the epistemically-distorting demons out to corrupt the quest for truth because language itself is just far too enabling of all that jockeying and social-game playing and the temptation to get drawn into those games is just instinctively compelling, especially for people with strong rhetorical skills.

Words allow word games, word games allow social games, and social games are epistemic contaminators, and like addictive drugs, an opportunity for pleasurable self-poisoning that an otherwise useful brain can't resist doing to itself.

Another guess of how this tendency was sometimes overcome could be that civilizations evolved institutions where the status game could be played more successfully by means of impressing or persuading one's reference social group by means other than words, for example, by accomplishment in formal symbolic manipulations, or by success in some material, real-world achievements.

On the other hand, there's no reason why institutions can't push in the opposite direction back into the abyss where people are incentivized to use words for pure, truth-eroding game playing. I hope we never invent one of these "social media platforms" that might create such a state of affairs.

I agree with almost everything you wrote above, except for the part about getting away from words using symbols etc. Broadly speaking, we can never get away from words. Reality is infinitely rich. In order to think and talk about it, finite beings such as we are must use symbols that refer to parts of it, and words are one common kind of communicable symbol (our brains, as well as animals', also use non-verbal symbols internally, but those are not directly communicable). There is nothing specific to words as distinct from other kinds of communicable symbols which singles them out as uniquely liable to damage by social games. Any symbols widely used for communication are liable to it, as the well-known phenomenon of the euphemism treadmill converting precise medical terms into common expletives demonstrates. Symbols used in restricted contexts are less liable to damage by virtue of isolation and restriction, not because of some special quality they possess in themselves. Degeneration of symbols is a moral problem, and moral problems can never be completely solved by technical means. The remedy for it is ultimately moral too: self-discipline and institutions which encourage and reward it while discouraging and punishing violators.

Very well said. Yes, restricted context and isolation is a better way to express the idea. I like the way you put it as a permanent moral imperative of every generation to fight against ineradicable degenerative tendencies. 正名 forever.

Thank you. I vacillated whether to refer to Confucius' Great Learning and the rectification of names in my previous comment.

One thing I want to add which seems important to me is that we are only able to use words productively by harnessing the motivational power of the very social games which (if not held in check) damage them. It is thus a double bind. And considering that we can only fight against the tendencies by using both words and said motivational power, it is a triple bind. It is a challenge worthy of the civilized man.

Probability is unavoidably subjective because it is about how much you know (conversely, how much you don't). It is the most exact representation of your opinion on something you do not have a complete opinion about. You should aspire to have the most accurate opinion about things, so as to make the best possible probability attribution. But subjectivity is unavoidable. I throw a die, the result is three and I look at it: I give probability 1 to the three, and zero to the rest. You give probability 1/6 to all the numbers. We are both right in our attributions. If you think of probability theory as the most developed system of knowledge representation, everything makes sense.

This seems relevant - https://wmbriggs.substack.com/p/probability-is-not-frequentist-nor

That was entertaining and easily digested. I wasn't able to view his video so I will guess he built a device for flipping coins that always delivers heads.

His view is that probability is a relationship in logic only.

"Probability is not real ... It doesn’t exist separate from the mind that entertains it."

I may be mistaken, but it seems to me that in order to demonstrate that there is no reality to the probability of a coin landing heads or tails being 1/2, he built a device rather than just saying "you never specified 'fair toss'"?

Whereas, the notion of a coin flip is precisely something useful because no one has to entertain any thoughts about it - there are none to entertain.

"My position on the issue is that sometimes we use probability to mean objective probability. ... And sometimes we use probability to mean subjective probability ... As long as we are clear on which definition we are using, it’s all fine."

That last sentence could apply to so many academic/philosophical disagreements.

The good news is that while single events may not be repeatable, events in general are, and we can keep track of subjective probability accuracy.

So if you say there’s a .0001% chance of Biden winning, and he wins — true, there’s no way of proving you were wrong. But score not just that one prediction, but all your predictions over time, like they do on Manifold. Now that’s much closer to the coin flip scenario (and anyone who gives Biden those odds will have a terrible score.)

You cook ten new recipes, predicting that there is a 90% chance each will be wonderful. The first is appalling, the rest good. The nine successes do not show that the first prediction was correct. Outcomes of independent non-repeatable events…

They don’t show that “the first prediction is correct” — what they show is that you are perfectly calibrated at assigning probabilities to your dishes.

This is not just an academic discussion. Knowing if the probability is actually 90% could be important if you are a chef, starting a restaurant, etc.
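Scoring a whole track record, as sites like Manifold do, can be sketched with a logarithmic scoring rule; the recipe numbers below just rehearse the example from this thread:

```python
import math

# Log score: the average of -log(probability you assigned to what happened).
# Lower is better; confidently wrong calls are punished hard.
def log_score(preds, outcomes):
    return -sum(math.log(p if o else 1 - p)
                for p, o in zip(preds, outcomes)) / len(preds)

# Ten recipes, all predicted 90% wonderful; one flops (0), nine succeed (1):
print(log_score([0.9] * 10, [0] + [1] * 9))    # ≈ 0.33
# Same outcomes, but with overconfident 99.99% predictions -- far worse:
print(log_score([0.9999] * 10, [0] + [1] * 9)) # ≈ 0.92
```

A single prediction can't be proven wrong, but over many predictions the well-calibrated 90% forecaster beats the overconfident one, which is the sense in which subjective probabilities can still be kept honest.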

I more or less agree with AK, but I add that we are often kidding ourselves when we put numbers to what he calls "subjective probability".

Our minds and our language have a rich toolkit for dealing with different flavours of uncertainty. We only need numbers when we come to make bets.

One useful innovation for gauging one-off events is the betting market, e.g. the Iowa Electronic Markets. For any prospective event, these markets pool or "crowd source" thousands of subjective probabilities into a single consensus number. That number might or might not be correct, but at least it's market clearing.

Isn't betting on sports subjective probability?

You need more Bayesian thinking here. Priors matter. Frequentists rely on large numbers to discern patterns, but life very often presents us with unique events, and it takes a Bayesian mindset to deal with that. I've written more about it here: https://lancelotfinn.substack.com/p/the-grand-coherence-chapter-2-how

The empirical updating of Bayesian priors is only logically valid when collecting sufficient numbers of new observations to update the frequencies in the statistical distributions of one's patterns.

You can't predict without a good pattern, and you can't notice good patterns without lots of data. If you don't have patterns based on lots of data, what happens next is not just random but worse, because it has an unknown distribution. You may not be able to predict which side the fair coin will land on next, but you can know the odds are 50/50, and know the odds of X many heads in the next Y flips. But if I hand you an unfair coin with an unknown internal weight distribution, then you not only can't tell the next flip, you can't say anything at all yet about the next hundred flips, until you start collecting lots of data to learn the frequencies making up the pattern.
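The unknown-coin point can be illustrated by watching a simple frequency estimate narrow as flips accumulate. The hidden weight of 0.7 and the normal-approximation interval are my assumptions for the demonstration:

```python
import random

# Sketch: with a coin of unknown weight, the empirical frequency plus a
# simple 95% normal-approximation interval narrows only as data accumulates.
random.seed(0)
true_p = 0.7  # hidden weight; the observer doesn't know this
flips = [random.random() < true_p for _ in range(1000)]

def estimate(n):
    p_hat = sum(flips[:n]) / n
    half_width = 1.96 * (p_hat * (1 - p_hat) / n) ** 0.5
    return p_hat, half_width

for n in (10, 100, 1000):
    p_hat, hw = estimate(n)
    print(f"{n:>5} flips: {p_hat:.2f} +/- {hw:.2f}")
```

After ten flips the interval is still wide; after a thousand it has shrunk by roughly a factor of ten, which is the frequentist cost of turning an unknown distribution into a usable pattern.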

Again, this is all really arguing about the right use of descriptive language, but here's an example. In multivariable mathematics you cannot do certain operations and produce sensible results. You can't add 5 apples to 3 oranges and get 8 "appleoranges". You can make new dimensionless numbers like "apples + oranges" but that can't be expressed in terms of "apple" units.

Likewise, when one is mixing models based on frequentist patterns - for which it is appropriate to talk in terms of "probability" - with big question marks of unknown unknowns, then one shouldn't end up with a result that is also expressed in terms of "probability", which is promoting a model above its empirically justified rank.

Fine analysis, yet with a failure to get to the core issue: how to make better decisions under uncertainty. All discussions and evaluations of decision making involve choices available and unknowns, both known unknowns and unknown unknowns.

Tho it’s certainly true that quantifying a guesstimate so as to combine it with some frequentist stats and other guesstimates allows promotion of models far above their empirically justified ranks. Most experts do so, often, including in areas they have little info about.

Probability is best thought of as your own level of uncertainty about an actual event. Before the coin toss, your probability of heads on the next flip is 50%. If you step on the flipped coin, it becomes a result, 100%. But you don’t know what the result is. If you bet on the flipped but unseen, unknown coin, you should use 50%, because that’s the best measure of your knowledge.

Decision analysis uses probability this way, with Bayes theorem the key step in updating your initial, subjective, prior probability, with new info. Everybody uses probability in every decision they make, estimating that what they do will very likely have the result they want, tho of course we all find our 100% estimates are occasionally wrong, like typos when typing.

AI is using tons of probability, and frequency of words together, to create a chat bot which answers in a way that a human probably would. And every month, the probability that a new commenter, like Guest User, is actually a bot is going up. As is the probability that I, or anyone, am using a bot to help write my comments. (I’m not, yet.)

AI will, as do humans, fail to accurately predict the future, but their accuracy in predictions will be slowly increasing and is likely already better than most humans at choosing stocks to invest in now.

You’re not wrong. But in fact there is a way to resolve it. Sorta/mostly.

It’s the name of Bryan Caplan’s Substack!

Of course it doesn’t change anything for the single event, but betting on it repeatedly over a bunch of different events gives you a decent sense of whose subjective probability assessments are more accurate than others.