17 Comments

This is a good comparison of the classical and Bayesian probability conclusions, and of the common practice of omitting any mention of how a Bayesian conclusion depends on a subjective prior, so that the conclusion appears more objective than it actually is. There is a potential remedy for this problem: academic institutions and publishers could, and should, require that write-ups declare all Bayesian priors up front and explain how they were derived, or else refrain from asserting any Bayesian conclusions.
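
To make that concrete, here is a minimal sketch (my own made-up numbers, not from the post) of why the prior has to be declared: the same data give noticeably different posterior answers under two different Beta priors for a vote share.

```python
# Rough sketch (made-up numbers): the same poll data under two different Beta priors
# give noticeably different posterior answers to "is support above 50%?".
from scipy.stats import beta

successes, failures = 27, 23  # hypothetical sample: 27 of 50 respondents favor the candidate

priors = {
    "flat prior Beta(1, 1)": (1, 1),
    "skeptical prior Beta(20, 30)": (20, 30),  # encodes a prior belief that support is around 40%
}

for name, (a, b) in priors.items():
    posterior = beta(a + successes, b + failures)  # conjugate Beta-Binomial update
    print(f"{name}: P(support > 0.5) = {1 - posterior.cdf(0.5):.2f}")
```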


Can you give examples? I've read hundreds of peer reviewed papers that use Bayesian statistics, and I've never seen a case where someone used an informative prior without declaring it. So the good news is your suggestion is already the norm.

On the other hand, I've read hundreds of frequentist papers that omit mention of the dependency of their frequentist conclusion on a subjective likelihood so that the conclusion appears more objective than it actually is.

I think the stats world has largely moved past these debates. Both methods have strengths and weaknesses, and good researchers know when they can benefit from one approach over the other. But the benefit of the frequentist approach is rarely about pretending your model is not subjective.


If all the fields that use statistics prioritized _doing_ reliable research, then students would learn to make research reliable. They do not. They emphasize getting publishable results. Hence the replication scandal.

Petroleum geologists must get reliable results, or their companies will lose money. Their ability to use statistics properly very likely outshines that of most academics.


I didn't realize the word parameter was used so differently in these domains. I am more familiar with the sense of "thing we defined for our model". I guess it must be added to the list of words that can be their own opposites.

As for Bayesianism: I always get hung up on how people assign that initial probability. My brain just doesn't work that way. What is the probability my neighbor is going to knock on my door today? I haven't the slightest idea! Do you want me to make something up?

It's a technique that seems open only to people who have some kind of unusually good grasp of their own mental processes, who have in some sense been keeping score on that all their lives. Very smart people, in other words.
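
For what it's worth, when I try the mechanics on my door-knock example with entirely invented numbers, two very different made-up starting guesses end up in roughly the same place once there is a year of observations to update on:

```python
# Entirely invented numbers: two very different guesses for P(neighbor knocks on a given day),
# both updated on the same (hypothetical) year of observations.
from scipy.stats import beta

knock_days, quiet_days = 20, 345  # suppose the neighbor knocked on 20 of 365 days

guesses = {
    "coin-flip guess Beta(5, 5)": (5, 5),       # "half the days? I really have no idea"
    "almost-never guess Beta(1, 20)": (1, 20),  # "surely hardly ever"
}

for name, (a, b) in guesses.items():
    posterior = beta(a + knock_days, b + quiet_days)  # conjugate Beta-Binomial update
    print(f"{name}: posterior mean = {posterior.mean():.3f}")
```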


Can you recommend good probability & statistics book(s) for one competent in math up to calculus?


I have always believed that more people went to the voting booths in Florida in 2000 intending to vote for Gore. The outlier of Buchanan votes in the one area with the bad ballot design, together with the design itself, is pretty damned convincing that Gore lost votes numbering in the thousands. However, that is just me making an assertion based on exactly that one data point. No one was looking for all the instances where a bad ballot design cost one or the other candidate votes, and it is easy to deliberately design a ballot to disfavor one candidate over the other; it is probably done on a regular basis in politics.

My real objection to the way statistics is used is that it is being used most of the time to perpetuate a fraud. Exit polling is an excellent example: the media don't sample every single precinct, so the selection of the precincts, the selection of the types of people who do the sampling in the selected precincts, the types of voters who get approached to reveal their vote, etc., are the sorts of things that don't actually get assessed in a confidence interval. And yet every single poll of this type trumpets the confidence interval relentlessly, even though it probably should not even be calculated, given its uselessness in assessing the result itself.
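
To put a number on that complaint, here is a toy simulation (entirely invented numbers): if one candidate's voters are even modestly less willing to talk to the exit pollster, the poll's 95 percent confidence interval misses the true vote share most of the time, because the interval only accounts for random sampling error.

```python
# Toy simulation, invented numbers: a 95% confidence interval only accounts for random
# sampling error. Add a modest response bias (candidate A's voters answer pollsters less
# often) and the interval misses the true vote share far more than 5% of the time.
import numpy as np

rng = np.random.default_rng(0)
true_share = 0.50                  # true fraction of voters for candidate A
respond_A, respond_B = 0.45, 0.55  # willingness to answer the exit pollster
n_polls, n_respondents = 2000, 1000

# Probability that a poll respondent is an A voter, after differential response
p_observed = true_share * respond_A / (true_share * respond_A + (1 - true_share) * respond_B)

misses = 0
for _ in range(n_polls):
    sample_share = rng.binomial(n_respondents, p_observed) / n_respondents
    half_width = 1.96 * np.sqrt(sample_share * (1 - sample_share) / n_respondents)
    if abs(sample_share - true_share) > half_width:
        misses += 1

print(f"95% CI missed the true share in {misses / n_polls:.0%} of simulated polls")
```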


Anyone with good intentions could have forecast that that particular ballot format in the Florida Bush v. Gore contest would favor Bush by making it easy for some Gore voters to mistakenly vote for Buchanan instead. It is feasible to derive a reasonable probability statistic from the results of that election regarding the impact of that biased ballot format, and my understanding is that we are justified in concluding that Gore would probably have been President if that ballot had been formatted normally and properly.


Well, no, you can't derive a reasonable probability based on that ballot: no one took a look at all the ballot designs in that Florida presidential election. You only find that one bad design because it caused an aberrant number of votes for Buchanan, an easy signal to see. If the bad design had instead caused that number of Gore voters to vote for Bush by mistake, you wouldn't have an easy signal to notice; it would take a lot of work to find all the bad ballot designs.

Also, one last point: I think that ballot design was a complete accident. The people who designed it were all Gore supporters; they just screwed up.


Of course, you are correct that the probable impact of that bad ballot design on the final outcome will be more or less discernible depending on the particular context. Nevertheless, even without being able to discern statistically that the overall outcome was probably changed, we can see that it is a bad ballot format: it makes it easy for voters for some candidates to mistakenly vote for another candidate, while not making such a mis-vote easy for the first candidate on the ballot. We cannot know what the person or people responsible for that ballot design were thinking, but we can see how it was biased.


It is amazing to me, in a "you had one job" way, how often the people whose job it is to design something for the understanding of the user or the public so thoroughly fail. Down here it is highway signage, especially but by no means only in cases where construction is ongoing. The sign makers and placers are presumably not the road builders; their expertise should be in signs! So how does it keep happening that, e.g., you place the sign for this or that exit *over* the lane that will ensure you miss that exit?

Or, once the designer experiences it themselves, how does it never get changed?


Your comment reminds me of two things. One was while volunteering at a polling place: we had to school, or tried to school with varying degrees of diligence throughout the day, the voters on some anomaly in how the presidential candidates' names appeared, because it had been brought to our attention by enough people calling us over in confusion. I do wish I could remember exactly, but it was on the order of: presidential candidate A's name and presidential candidate B's name appeared on different "pages" on the screen. That sounds crazy that it would have been allowed, yet I believe it was so; it was mentioned by enough people. Perhaps a third-party candidate had made it onto the ballot at the last hour and had to be squeezed in, throwing somebody onto the next page. I don't recall which year this was.

Obviously anyone of sound enough mind to be permitted to vote ought to persist in finding the name of their candidate without difficulty, knowing it must be there. But there is always a learning curve with new voting technology; many voters don't speak English, so the names, or the D by the name, are the important thing; and others come from countries where, say, plebiscites are usual. Put it all together, and something like seeing one *prominent* name beneath "President of the United States" probably earned that candidate a *few* votes on the margin.

The other occurred to me in reference to the type of people who are approached by exit pollsters. Having worked early voting: there are eager voters and less-eager voters. The less-eager voters will vote on the last day of early voting or on election day itself. The eager voters enjoy voting, come early, sometimes clutching the mail-in ballot they requested but changed their mind about filling out once they remembered they enjoy voting in person, especially for a national election, and make chitchat. The voting process is a fun little diversion.

I recall one final day of early voting (which was conducted for almost three weeks!) when a whole series of guys from the engineering and tech companies in the vicinity came to vote around the lunch hour. You could tell because they all had those nice warm but streamlined jackets with a discreet company logo. Like a hundred of them all in a row. Clearly they had been alerted that it was the last day of early voting and they didn't want to hassle with election day, when it is always feared that there will be long lines. (In my experience election day is kinda slow, because this fear of election day has been spread and so many people vote early.)

They made no chitchat and voted quickly and were out. I guessed that they pretty much all voted Republican, in this very blue city. But overall there is not necessarily a difference in eagerness by party - both parties furnish plenty of hyper-political eager voters.

If you think about the difference between these types of voters, and if you've ever stood anywhere with a clipboard having to approach people and catch their eye, it seems obvious that the approachable will be overrepresented versus the circumspect.


How would you estimate population size? One of the things that frustrates me with most (all that I have seen) COVID studies is that they use diagnosed cases as the total population of COVID cases. This is clearly an undercount (it misses all the cases in my family). Yet people still use the findings. To me, these are useless studies when extrapolated to the general population. Limited to severe cases, i.e. those most likely to be diagnosed, they have some use. I see errors in the denominator often.
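
A small arithmetic sketch (made-up numbers, just to illustrate the denominator problem): the same number of deaths looks five times worse when you divide by diagnosed cases instead of by total infections.

```python
# Made-up numbers: the same 100 deaths look five times deadlier when the denominator
# is diagnosed cases rather than total infections.
deaths = 100
diagnosed_cases = 10_000   # what a study can count
true_infections = 50_000   # what a population-level rate actually needs

case_fatality_rate = deaths / diagnosed_cases       # rate per diagnosed case
infection_fatality_rate = deaths / true_infections  # rate per infection

print(f"per diagnosed case: {case_fatality_rate:.1%}")
print(f"per infection:      {infection_fatality_rate:.1%}")
```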


"For example, suppose an exit poll showed Bush winning, and that Gore winning Florida is just outside of the 95 percent confidence interval for our sample. We cannot say that there is a 95 percent chance that Bush won. Bush either won or he did not and only an omniscient being knows. What we know is that our sample was large enough that if Gore really won and we took 100 samples of similar size, probably no more than 5 of those samples would show Bush winning by as much as we found in our sample. We can make a probability statement about our sample results, based on the sample size, but not about the actual Florida outcome."

I don't think this is correct as stated? What you are describing is the p-value for the null hypothesis "Gore won the election". But the situation you described is one where you have the sample mean, from which you estimate the population mean. You could then compute the standard deviation of the sampling distribution *assuming* that the population mean is equal to the sample mean.

But this doesn't tell you the p-value as far as I can tell. To compute a p-value, you would need to know the sampling distribution for the hypothesis "Gore won the election". But this is a composite hypothesis composed of many different individual hypotheses, each with its own sampling distribution (e.g. "Gore won 51% of the vote", "Gore won 52% of the vote", etc.). So you need some distribution over all the different ways that Gore could win, but this is quickly becoming Bayesian.
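
For what it's worth, the usual frequentist workaround is to evaluate the composite null at its least favorable member, Gore at exactly 50 percent of the two-party vote, since any null with Gore strictly above 50 percent makes the observed result even less likely. A rough sketch with invented poll numbers:

```python
# Rough sketch, invented poll numbers: p-value for the composite null "Gore won",
# evaluated at its least favorable member (Gore at exactly 50% of the two-party vote).
import numpy as np

rng = np.random.default_rng(0)
n_respondents = 1500
bush_share_in_sample = 0.53  # hypothetical exit-poll result

# Sampling distribution of Bush's sample share if Gore had exactly 50%
sims = rng.binomial(n_respondents, 0.5, size=100_000) / n_respondents
p_value = np.mean(sims >= bush_share_in_sample)

print(f"P(a sample this pro-Bush or more | Gore at exactly 50%) = {p_value:.4f}")
# Any null with Gore strictly above 50% makes this probability even smaller,
# so the boundary case bounds the p-value for the whole composite "Gore won".
```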


I’ve never been able to get Bayesian formalism to work on real world data. I’ve tried several times, and I always get bogged down with definitions and conditionals and stuff. Even things like Stan are hard to apply to real world data, or at least the kind of real world data I work with (business analytics/data science).

Frequentist “classical” statistics is good enough for me, so maybe I am a “classical statistician”?


Classical statistics uses priors too: any estimated model assigns a degenerate prior to the parameters of the variables it omits; the statistician assumes they are zero.

Is Bayesian statistics truly harder to teach? I am not so sure. You would avoid a lot of confusion. Confidence intervals, p-values, etc. are such weird things. A posterior distribution is not.
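
As a small illustration (made-up data): for a simple proportion, the 95 percent confidence interval and the 95 percent credible interval under a flat prior come out nearly identical; the difference is in what you are allowed to say about each of them.

```python
# Made-up data: a 95% confidence interval vs. a 95% credible interval (flat Beta(1, 1)
# prior) for a proportion. Numerically they nearly coincide; the interpretations differ.
import numpy as np
from scipy.stats import beta

successes, n = 270, 500
p_hat = successes / n

# Frequentist (Wald) 95% confidence interval
se = np.sqrt(p_hat * (1 - p_hat) / n)
conf_int = (p_hat - 1.96 * se, p_hat + 1.96 * se)

# Bayesian 95% credible interval under a flat prior
posterior = beta(1 + successes, 1 + n - successes)
cred_int = (posterior.ppf(0.025), posterior.ppf(0.975))

print(f"confidence interval: ({conf_int[0]:.3f}, {conf_int[1]:.3f})")
print(f"credible interval:   ({cred_int[0]:.3f}, {cred_int[1]:.3f})")
```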


There can be a long list of potential influences on the outcome being evaluated, some of which may not be accounted for among the numeric inputs, and some of which may be unmeasurable or unknown. An effort should be made to identify them and speculate reasonably on how they could change the results, insofar as it is feasible to do so. In contrast, the Bayesian priors utilized, and their impact on the conclusion, are known. So when there are Bayesian priors, they should always be explicitly identified up front and explained.


There is always tension between reality and our inner representation of reality. Numbers are not real; they are part of our minds. They are on “our side” (consciousness, res cogitans), not on the “other side” (matter, res extensa).

Because they are so exact, people tend to think that numbers are real, while obviously they cannot be. (No, not even the real numbers are real :-))
