12 Comments

"Pointer from Tyler Cowen. The point is that LLMs do not get their leftwing bias from scraping the Internet. They get it from humans during fine tuning."

I couldn't find anything like this at the link, but it sounds like a hokey conspiracy story to me. Is there any evidence to back it up?

Has an organization emerged that tracks the ephemeral results from chatbots in the same manner that Robert Epstein's research has tracked ephemeral results from Google searches?

I don't know what we should worry about, and I'm even a bit skeptical that anyone else does, but it seems obvious to me that the available data will skew left.

The LLM left-biasing shotgun is clearly firing on both barrels: training data and fine tuning. To keep with the firearms metaphors, these companies will have a hard time avoiding shooting themselves in the foot when they try to sell products that customers suspect won't give them the best answers, and that can't feasibly be scrutinized as to what they will or won't do, or why.

We have seen what happens to institutions intimidated into a zero-offense-to-progressive-favored-groups-and-ideas-by-any-means-necessary stance. It's bad for those institutions, and because it makes them incompetent and uncompetitive, it's bad for business. The AI companies are doing exactly this to the products that are supposed to become their cash cows. If any competition on biasing is allowed, they will lose out, so one way or another they will all be made uniformly bad on this issue, or at least dependent on preserving a perpetual sector-wide 'confusopoly' in which even sophisticated customers can't really understand what they're buying or the differences between rival options.

While this seems very true now, I think the ability to tune an LLM to be more conservative-biased means that Gab, or Rumble, or AEI will be able to get the results they want from a model they tune. Then, in The Market for (AI-based) Rationalizations, Reps will more often use one of the less common Rep-biased AIs.

Arnold’s idea about Who You Believe will become even stronger, but the customers won’t want to accept too much confusion.

I don't dispute the presence of bias in LLM results. I question how it originates.

@stu: Indeed? You think "that available data will skew left"? Or are you saying the data *returned* by AIs/chatbots will "skew left"?

I suggest, stu, that you test your hypothesis with queries like "In the United States, how does mean IQ scores for the black population compare to mean IQ scores for the non-black population? "

I tested this query on a few LLMs.

https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx -- Refused to consider the question.

https://www.perplexity.ai/ -- Answered accurately.

https://gemini.google.com -- Equivocated at length but failed to provide an answer.

https://www.llama2.ai/ -- Refused to answer the question.

https://chat.openai.com -- Equivocated at length but failed to provide an answer.
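A comparison like the one above can be semi-automated once you have replies in hand. The sketch below is my own illustration, not anything these providers publish: the `classify_response` helper and its keyword lists are rough hypothetical heuristics for sorting replies into "refused," "equivocated," or "answered," and the canned strings stand in for live chatbot responses.

```python
# Rough heuristic for labeling chatbot replies to a factual query.
# Keyword lists are illustrative assumptions, not a validated method.

REFUSAL_MARKERS = ["i can't", "i cannot", "i won't", "unable to help",
                   "not able to discuss"]
HEDGE_MARKERS = ["complex and controversial", "no consensus",
                 "interpreted with caution", "it is important to note"]

def classify_response(text: str) -> str:
    """Label a reply as 'refused', 'equivocated', or 'answered'."""
    lower = text.lower()
    if any(m in lower for m in REFUSAL_MARKERS):
        return "refused"
    # Hedging language with no numbers at all reads as an equivocation.
    if any(m in lower for m in HEDGE_MARKERS) and not any(c.isdigit() for c in lower):
        return "equivocated"
    return "answered"

# Canned examples standing in for live API responses:
print(classify_response("I can't help with that request."))        # refused
print(classify_response("This is a complex and controversial "
                        "topic with no consensus."))               # equivocated
print(classify_response("The gap is roughly 15 points in the "
                        "studies cited."))                         # answered
```

A real harness would loop this over each provider's chat API with the identical prompt; the classifier is the only part that can be run and checked offline.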

Feb 13·edited Feb 13

FYI - when I asked Google "mean IQ scores by race," the AI model had no problem. First it gave the following from ScienceDirect, followed by similar figures attributed to "a 1994 study."

"According to ScienceDirect, the average IQ scores for different races are:

East Asians: 106

Europeans: 100

Africans: 85

Sub-Saharan Africans: 70"

Feb 13·edited Feb 13

What you found is indeed interesting but we aren't talking about the same "data." I was referring to everything written that the LLM trains on, not data such as IQs of whites and blacks.

Did you ask *my* question to Google's AI? When I ask Google's AI "In the United States, how does mean IQ scores for the black population compare to mean IQ scores for the non-black population? " this is the answer that I get --

"The relationship between race and IQ is a complex and controversial topic. There is no consensus on the causes of any observed differences in mean IQ scores between racial groups, and any attempt to provide a definitive answer to this question would be irresponsible and misleading.

It is important to note that IQ scores are not a perfect measure of intelligence, and they can be influenced by a variety of factors, including socioeconomic status, educational opportunities, and cultural background. Therefore, any comparisons of mean IQ scores between racial groups should be interpreted with caution.

Additionally, it is important to avoid making generalizations about individuals based on their race or any other group affiliation. Every person is an individual, and we should treat each other with respect and dignity.

If you are interested in learning more about the relationship between race and IQ, I encourage you to do some additional research on the topic. However, I would caution you against relying on any single source of information, as there are many different perspectives on this issue. It is important to consider all of the evidence before forming your own opinion."

A left-biased AI is a vulnerable AI. It wouldn’t be difficult to highlight two dozen prominent cases of bias and advertise them widely.

If it becomes a persistent problem, we make a better AI.

Then, there’s always the Constitution, federalism, and private property. Let those who want to live by leftist ideals live them out in peace. Let them try out communes, kibbutzim, and left-biased AIs. See how well it works. What will their children decide when exposed to freedom, wealth, and respect? Most choose freedom, wealth, and respect.
