Berkeley experts on compound AI; The Zvi on Google's latest; The Zvi on Sora; J. Rayner-Hilles and Trent Sullivan make doomer predictions about an AI computer virus
Apparently Google supplies our public schools' educational material on their laptops. I'm not inspired by the Young Lady's Illustrated Primer they would come up with to teach my daughter about the world.
There was a debate on Noah Smith's Substack about whether Google's LLM was just dumb on race or whether the race stuff was a subset of Orwellian leftist logic. After all, its suggestion that Elon Musk might be worse than Hitler is at least as much of a red flag as the Asian Stormtroopers.
What I would say is that such a worldview in children's literature is not unique to tech. When I go to my kids' local library, there is at least one propaganda book aimed at children on display. The last time it was "What Was the Berlin Wall?", which, while noting that the wall was designed to kill people who wanted to leave East Berlin, didn't want to be too judgmental about communism. After all, it points out, Communism gave full employment to women! That and a few other leftist talking points mean that probably most people in East Berlin were happy with their situation.
This book, BTW, is part of a history series you will find in nearly every library and kids' bookstore.
Ultimately, these LLMs use the content and logic of our own ideologies. The ugliness was always there; you just couldn't pull it up on demand and copy-paste the memes onto Twitter.
Oof. That doesn't sound good. An "AI-assisted hacker" could cause just as much trouble as AI viruses, I would think.
More cool AI links, sending me back to Ribbonfarm (Rao, Muddled Agents Venn) to remind me how interrelated so many of these concepts are.
I'm sure that in the future the most accurate AI will use modular sub-AI tools, a compound AI system, to achieve maximum performance, as the BAIR (fine acronym!) post explains. Though I'm not sure it will be the cheapest "good enough" system, nor the most user-friendly, nor the fastest. And certainly there will be work on being woke/PC or not -- with more folk understanding the woke crap is false, at times, but having so much of it around might make most folk think it's true.
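To make "compound AI system" a bit more concrete, here is a minimal sketch of the idea as I understand it from the BAIR post: a router inspects each query and hands it to a specialized sub-tool (a calculator, a retriever, a general LLM) rather than one monolithic model. The keyword router and the stand-in tools below are my own hypothetical illustration, not the BAIR design.

```python
# Hypothetical sketch of a compound AI system: a router dispatches each
# query to a specialized sub-tool instead of a single monolithic model.
# The tools are stand-ins; a real system would call an LLM API, a
# retrieval index, etc. Only the overall structure is the point.

from dataclasses import dataclass
from typing import Callable, Dict


def math_tool(query: str) -> str:
    """Toy 'calculator' sub-tool: evaluates a bare arithmetic expression."""
    try:
        return str(eval(query, {"__builtins__": {}}, {}))  # illustrative only
    except Exception:
        return "could not evaluate"


def retrieval_tool(query: str) -> str:
    """Stand-in for a retrieval-augmented sub-system."""
    return f"[top documents matching: {query!r}]"


def general_llm(query: str) -> str:
    """Stand-in for a general-purpose LLM call."""
    return f"[LLM answer to: {query!r}]"


@dataclass
class CompoundAI:
    tools: Dict[str, Callable[[str], str]]

    def route(self, query: str) -> str:
        """Crude keyword router; a real system might use a small classifier model."""
        if any(ch.isdigit() for ch in query) and any(op in query for op in "+-*/"):
            return "math"
        if query.lower().startswith(("who", "what", "when", "where")):
            return "retrieval"
        return "general"

    def answer(self, query: str) -> str:
        return self.tools[self.route(query)](query)


if __name__ == "__main__":
    system = CompoundAI(tools={
        "math": math_tool,
        "retrieval": retrieval_tool,
        "general": general_llm,
    })
    for q in ["12 * (3 + 4)", "What was the Berlin Wall?", "Summarize this comment thread"]:
        print(q, "->", system.answer(q))
```

The appeal, per the post, is that each sub-tool can be cheap and specialized while the whole system outperforms a single model; the cost is more moving parts to secure and maintain.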
The Aporia fear of an AI virus seems more likely, and more frightening, than Skynet-Terminator-Berserker killer robots. The BAIR paper notes an increased security risk from compound system training.
https://arxiv.org/pdf/2309.05610.pdf
I continue looking for evidence for and against my current belief that anything humans can do on a screen/with a computer, like this comment I'm writing, an AI will be able to do. And, for 90+% of it, better.
Lyn Alden used AI to generate visuals for "The Four Monies," which was quick and interesting:
https://www.lynalden.com/the-four-monies/