LLM links, 2/27
Berkeley experts on compound AI; The Zvi on Google's latest; The Zvi on Sora; J. Rayner-Hilles and Trent Sullivan make doomer predictions about an AI computer virus
We define a Compound AI System as a system that tackles AI tasks using multiple interacting components, including multiple calls to models, retrievers, or external tools. In contrast, an AI Model is simply a statistical model, e.g., a Transformer that predicts the next token in text.
Our observation is that even though AI models are continually getting better, and there is no clear end in sight to their scaling, more and more state-of-the-art results are obtained using compound systems.
It is surprising how far LLMs have gotten using statistical pattern-matching alone. Incorporating other methods seems like it would be a profitable path to try.
Pointer from Alexander Kruel
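The Berkeley definition above can be illustrated with a minimal sketch: a retriever, a model call, and an external tool composed into one system. Everything here is a hypothetical stand-in (toy functions, not any real library's API), just to show the "multiple interacting components" shape.

```python
# A minimal sketch of a "compound AI system": multiple interacting
# components rather than a single model call. All functions here are
# hypothetical stand-ins, not any real library's API.

def retrieve(query, corpus):
    """Toy retriever: return documents sharing a word with the query."""
    words = set(query.lower().split())
    return [doc for doc in corpus if words & set(doc.lower().split())]

def call_model(prompt):
    """Stand-in for an LLM call; a real system would query a model here."""
    return f"Answer based on: {prompt}"

def calculator_tool(expression):
    """An external tool the system can invoke for exact arithmetic."""
    return eval(expression, {"__builtins__": {}})  # toy only; never eval untrusted input

def compound_system(query, corpus):
    """Compose retriever + model + tool into one system."""
    context = retrieve(query, corpus)
    draft = call_model(f"{query} | context: {context}")
    check = calculator_tool("2 + 2")  # e.g., verify arithmetic outside the model
    return draft, check
```

The point of the sketch is that the "system" is the pipeline, not the model: the retriever supplies context, the tool supplies exact computation, and the model is just one component among several.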
If you make the request easy to make, by allowing context to be provided ‘for free’ via dumping tons of stuff on the model including video and screen captures, you are in all sorts of new business.
What happens when context is no longer that which is scarce?
There are also a lot of use cases that did not make sense before, and do make sense now. I suspect this includes being able to use the documents as a form of training data and general guidance much more effectively. Another use case is simply ‘feed it my entire corpus of writing,’ or other similar things.
Yes, I am looking forward to being able to do this, so that anyone can ask “Arnold Kling” for analysis 24/7.
Brett Goldstein here digs into what it means that Sora works via ‘patches’ that combine to form the requested scene.
If you follow the link, Goldstein explains that a “patch” does for space and time in a video what a “token” does for text.
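The patch-as-token idea can be sketched concretely: treat a video as a 3-D grid of pixels indexed by time, row, and column, and cut it into small spacetime blocks, each flattened into one vector. This is a toy illustration only; the patch sizes below are arbitrary choices for the sketch, not Sora's actual settings.

```python
def to_spacetime_patches(video, pt=2, ph=4, pw=4):
    """Split a toy video into flattened spacetime patches ("tokens").

    video: nested lists indexed [t][y][x] of pixel values.
    pt, ph, pw: patch extent in time, height, and width (arbitrary here).
    """
    T, H, W = len(video), len(video[0]), len(video[0][0])
    patches = []
    for t in range(0, T - pt + 1, pt):
        for y in range(0, H - ph + 1, ph):
            for x in range(0, W - pw + 1, pw):
                # flatten one small spacetime block into a single vector
                patch = [video[t + dt][y + dy][x + dx]
                         for dt in range(pt) for dy in range(ph) for dx in range(pw)]
                patches.append(patch)
    return patches

# A 4-frame, 8x8 grayscale "video" yields (4//2) * (8//4) * (8//4) = 8 patches,
# each with 2 * 4 * 4 = 32 values.
video = [[[0 for _ in range(8)] for _ in range(8)] for _ in range(4)]
patches = to_spacetime_patches(video)
```

Just as a tokenizer turns a text stream into a sequence of discrete units a Transformer can attend over, this turns a video into a sequence of spacetime chunks.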
Be sure to visit Zvi’s post and scroll down to the predictions for 2025.
J. Rayner-Hilles and Trent Sullivan write,
In the next few years, the term “AI computer virus” will become as prominent in public discussion as terms like “COVID-19”, “climate change” or “war with Russia”.
The AI computer virus is an existential threat to the modern world. When it emerges, there will be a frantic effort to try to close Pandora’s box and bolster our technological defences. But once we reach the stage of mass panic – as we behold much of the world’s technological infrastructure shutting down or going offline – it will surely be too late to do anything.
Have a nice day.
Apparently Google makes our public schools' educational material on their laptops. I'm not inspired by the Young Lady's Illustrated Primer that they would come up with to teach my daughter about the world.
There was a debate on Noah Smith's Substack about whether Google's LLM was just dumb on race or whether the race stuff was just a subset of Orwellian leftist logic. After all, its pointing out that Elon Musk might be worse than Hitler is at least as much of a red flag as Asian Stormtroopers.
What I would say is that such a worldview in children's literature is not unique to tech. When I go to my kids' local library, there is at least one propaganda book aimed at children on display. The last time it was "What Was the Berlin Wall?", which, while noting that the wall was designed to kill people who wanted to leave East Berlin, didn't want to be too judgmental about communism. After all, it points out, Communism gave full employment to women! That and a few other leftist talking points mean that probably most people in East Berlin were happy with their situation.
This book, BTW, is part of a history series you will find in nearly every library and kids' bookstore.
Ultimately, these LLMs use the content and logic of our own ideologies. The ugliness was always there; you just couldn't pull it up on demand and copy-paste the memes on Twitter.
Oof. That doesn't sound good. An "AI-assisted hacker" could cause just as much trouble as AI viruses, I would think, too.