LLM Links, 9/26/2024
Ethan Mollick says scaling laws make the latest developments exciting; The Zvi is more skeptical; Scott Alexander asks what AIs have to do to alarm us; Mark McNeilly on Larry Ellison's vision
It turns out that inference compute - the amount of computer power spent “thinking” about a problem, also has a scaling law all its own. This “thinking” process is essentially the model performing multiple internal reasoning steps before producing an output, which can lead to more accurate responses…
The existence of two scaling laws - one for training and another for "thinking" - suggests that AI capabilities are poised for dramatic improvements in the coming years.
What Mollick calls “thinking,” others call Chain of Thought.
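One way the inference-compute scaling idea is often illustrated (this is a toy model, not Mollick's methodology) is best-of-N sampling: give the model N independent attempts at a problem, and count it a success if any attempt is correct. Even with a low per-attempt success rate, coverage climbs steeply with more attempts, with diminishing returns at each doubling — the scaling-law-like shape. A minimal sketch:

```python
def coverage(p_single: float, n_samples: int) -> float:
    """Probability that at least one of n independent attempts succeeds,
    given a per-attempt success probability p_single (a toy assumption:
    real model samples are not fully independent)."""
    return 1.0 - (1.0 - p_single) ** n_samples

# With a 20% chance per attempt, more inference compute (more samples)
# raises the chance of at least one correct answer:
for n in [1, 4, 16, 64]:
    print(f"{n:3d} samples -> coverage {coverage(0.2, n):.3f}")
```

Under these assumptions, coverage rises from 0.200 at one sample to roughly 0.590 at four and 0.972 at sixteen, which is the qualitative point: spending more compute at inference time buys accuracy, but each doubling buys less.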
With CoT it is now able to try out basically all the tools in its box in all of the ways, follow up, and see which ones work towards a solution. That’s highly useful, but when there is a higher ‘level’ of creativity required, this won’t work.
But doesn’t a human also try out basically all the tools in their box? I think of creativity as finding new combinations of ideas, broadly defined. A musical style is an idea. Ingredients in recipes are ideas. I don’t know what this “higher ‘level’ of creativity” consists of, or why a computer cannot attain it.
GPT-4 can create excellent art and passable poetry, but it’s just sort of blending all human art into component parts until it understands them, then doing its own thing based on them. AlphaGeometry can invent novel proofs, but only for specific types of questions in a specific field, and not really proofs that anyone is interested in. AlphaFold solved the difficult scientific problem of protein folding, but it was “just mechanical”, spitting out the conformations of proteins the same way a traditional computer program spits out the digits of pi. Apparently the youth have all fallen in love with AI girlfriends and boyfriends on character.ai, but this only proves that the youth are horny and gullible.
The pattern that he noticed is: first, we say that computers will never be able to do X. Then a computer does X. Then we rationalize that doing X is no big deal. Our rationalizations allow AI to sneak up on us. He concludes,
it sure does make it hard to draw a red line.
On Thursday, Oracle co-founder Larry Ellison shared his vision for an AI-powered surveillance future during a company financial meeting, reports Business Insider.
… a scenario where AI models would analyze footage from security cameras, police body cams, doorbell cameras, and vehicle dash cams.
"Citizens will be on their best behavior because we are constantly recording and reporting everything that's going on," Ellison said.
The thing is, there are so many laws and social norms that we don’t want to see strictly enforced. We don’t want somebody to get a ticket every time they go one mile-per-hour over the speed limit. So apart from the ick factor, widespread surveillance to detect infractions would have chaotic effects unless laws were carefully rewritten to somehow spell out the behavior that we really want.
substacks referenced above: @
The police don’t even follow up on most crime right now. Property crime, like your house or car getting broken into, even if you have clear video of the person? Forget it.
So imagine how the entire system would collapse if it effectively recorded every crime, everywhere.
What would actually happen is no enforcement most of the time. And then one day, if you were in the out-group of whoever was in power, or if the people in charge wanted to lean on you to do something else, they’d charge you with all the previous crimes at once so you’d face a scary sentence, forcing you to cooperate.
In East Germany, they only had the technical capability to tap maybe 10 or 20 phones at once, and look at all the data they had when the system collapsed. Everyone now has the right to look at their file and see which of their neighbors snitched on them, or vice versa if they were the snitch.
You can’t arrest everyone. If that becomes obvious, then I think crime would skyrocket.
Supposedly it’s a power law, where some small group of people commit the vast majority of serious crimes, but we don’t even acknowledge that and put those obvious psychopaths away for 20 years or whatever to keep regular people safe.
Re: "widespread surveillance to detect infractions would have chaotic effects unless laws were carefully rewritten to somehow spell out the behavior that we really want."
A deep point!
It invites fresh thinking/research about fundamental topics:
• Rules vs discretion
• Causal relations between laws and social norms
• (Economic) theories of incomplete contracts
• Judicial review
Etc.