LLM links
Brian Chau on training Chatbots to be Woke; Scott Alexander on AI as brain tissue; an LLM does customer service; A humanoid robot, go Figure
government action through the Biden executive order likely contributed to Gemini’s far-left ideology. To my knowledge, this is the first documented case of the direct use of government to change the political viewpoints of a machine learning model. Second is that the Biden administration’s actions through the EO may amount to state-sanctioned racism, unconstitutional under the Fourteenth Amendment. Ultimately, the exact extent of the Biden administration’s involvement may require legal or congressional measures to reveal.
The way I think about it, the chatbots have two data sources. One is true data, that is naturally found “in the wild,” so to speak. The other is fake data, supplied by the model builders. Suppose Google gave Gemini a whole lot of images depicting people of color, many more than would be found ordinarily on the Internet. Then when it is asked to produce an image with people in it, Gemini will include many people of color.
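To make the "two data sources" point concrete, here is a toy sketch (entirely hypothetical numbers, not Google's actual pipeline): a trivial "model" that simply reproduces its training distribution will shift its outputs if the builders oversample one category relative to what is found in the wild.

```python
import random

# Hypothetical distributions for illustration only. "A" and "B" stand in
# for two categories of image. The "wild" data reflects what is naturally
# found online; the "curated" data is skewed by the model builders.
wild_data = ["A"] * 80 + ["B"] * 20      # natural mix: 20% category B
curated_data = ["A"] * 40 + ["B"] * 60   # builder-supplied mix: 60% category B

def fraction_of_b(training_data, n=1000, seed=0):
    """A trivial 'model' that just samples from its training distribution."""
    rng = random.Random(seed)
    draws = [rng.choice(training_data) for _ in range(n)]
    return draws.count("B") / n

print(fraction_of_b(wild_data))     # close to 0.2
print(fraction_of_b(curated_data))  # close to 0.6
```

The point of the sketch is only that a distribution-matching model inherits whatever skew its builders put into the data; real image models are vastly more complicated, but the direction of the effect is the same.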
It is unfortunate that large language models have emerged at just the point in history where there is a cohort of people for whom social justice as they define it has become the overwhelming priority.
In an interview with Sotonye, Scott Alexander responds,
According to the predictive coding (https://slatestarcodex.com/2017/09/05/book-review-surfing-uncertainty/) model, the brain is trying to do what LLMs do - it predicts (https://slatestarcodex.com/2019/02/28/meaningful/) the next piece of sense-data in the same way an LLM predicts the next token. It's using a structure a lot like LLMs - a neural network, although of course this is a vague term and there are lots of specific differences. I think they're basically the same type of thing doing the same kind of work.
– Klarna today announced its AI assistant powered by OpenAI. Now live globally for 1 month, the numbers speak for themselves:
The AI assistant has had 2.3 million conversations, two-thirds of Klarna’s customer service chats
It is doing the equivalent work of 700 full-time agents
It is on par with human agents in regard to customer satisfaction score
It is more accurate in errand resolution, leading to a 25% drop in repeat inquiries
Customers now resolve their errands in less than 2 mins compared to 11 mins previously
It’s available in 23 markets, 24/7 and communicates in more than 35 languages
It’s estimated to drive $40 million USD in profit improvement for Klarna in 2024
Die, phone menu trees, die!
Jeff Bezos, Nvidia and Microsoft are betting a humanlike robot could be one of the hot new developments in artificial intelligence.
They are among a group of investors, including OpenAI, that invested $675 million in an AI robotics company called Figure, the startup said Thursday, valuing it at $2.6 billion.
Figure, founded in 2022, is working on a faceless robot that can do tasks in the place of humans. The Sunnyvale, Calif.-based company said this month that its robots could complete a real-world task like moving a box onto a conveyor belt.
I have said before that robots can benefit from the improved human-computer interface represented by chatbots. But I am not convinced that the humanoid form factor is the optimal approach. I will have more to say about this in a separate essay.
So, I think it's worth noting that at least for Gemini images, it's not a matter of juicing the training data and then sitting back and letting the model do its thing. Rather, Gemini rewrites your prompt to make it a woke prompt and serves you the output of that. Zvi has most of the gory details (https://thezvi.substack.com/p/gemini-has-a-problem) but essentially it adds keywords and instructions that make your prompt sound like it comes from the DEI office at a university trying to produce a wholesome image of a diverse student body for a promotional ad *for every single prompt*, even if you beg for e.g. historical accuracy.
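The mechanism being described is prompt rewriting: the image model never sees the user's words alone. A minimal sketch, with invented injected wording (Google has not published the actual text; see Zvi's post for observed behavior):

```python
# Hypothetical illustration of unconditional prompt rewriting. The injected
# instruction below is made up for this sketch; the real injected text is
# not public. The key behavior: it is appended to EVERY prompt, regardless
# of what the user asked for.
INJECTED_INSTRUCTION = " Depict a diverse range of genders and ethnicities."

def rewrite_prompt(user_prompt: str) -> str:
    """Append the boilerplate instruction to every prompt, unconditionally."""
    return user_prompt + INJECTED_INSTRUCTION

# Even a request for historical accuracy gets the same treatment:
print(rewrite_prompt("A historically accurate portrait of a 1943 soldier."))
```

A conditional system would at least check whether the request was historical or specific before injecting anything; the complaint is precisely that this one does not.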
Regarding humanoid robots: Some say that the development of aeronautics was delayed for centuries by the belief that the wings of a flying machine had to flap like a bird's wings. It could be that the development of efficient robots is being held back by our quaint belief that they need to look like us. The real benefits of robotics may come not from generalist humanoid-appearing machines, but from super-specialized robots integrated with systems and networks in novel configurations with no counterparts in the natural world.