GPT/LLM links, 5/9
training call centers? LLMs near asymptote? Jacob Buckman says FOOM is far; Frederick R. Prete agrees; Tim B. Lee and Razib Khan; Sam Altman and Bari Weiss; Freddie deBoer on hype; Lee Bressler
A paper by Brynjolfsson and others has received some notice. Noah Smith writes,
In other words, for customer support people who can already do their jobs well, AI provides little or no benefit. But for those who are normally pretty bad at their jobs, or are new on the job, the AI tool boosts their skills immensely.
If I’m running a call center, I’m not going to be using AI to level up the weakest employees. It can’t be that hard to train the AI itself to get on the line and say, in an Indian or Filipino accent, “I’m George. What can I do for you today?”
Speaking to an audience at the Massachusetts Institute of Technology, Altman explained that AI development is already reaching a massive wall. While AI improvements have resulted in huge quality boosts to ChatGPT, Stable Diffusion and more, they are reaching their end.
Pointer from Zvi Mowshowitz. This would not surprise me. My intuition has been that chatbots will reach an asymptote that is well shy of artificial general intelligence.
But I still think they will be really important. Even if they stop improving tomorrow (and I’m guessing that they actually will continue to improve, maybe not as fast as they have in the past few years), there will be opportunities to combine them with other software in powerful new ways.
Most of all, I think that there are big gains from figuring out how to use the new AI tools. I think right now people are way overestimating their value as research assistants and way underestimating their value as conversation simulators. If Russ Roberts and John Papola can entertain millions with a staged video of Keynes and Hayek debating in rap format, imagine what you can do by allowing people to interact with simulated Keynes and Hayek.
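To make the conversation-simulator point concrete, here is a minimal sketch of two persona-prompted chatbots debating each other. It assumes the openai Python package (pre-1.0 interface) with an API key in the environment; the model name, prompts, and turn loop are my illustrative choices, not anyone’s shipping product.

```python
# Minimal sketch of a conversation simulator: two persona-prompted
# chatbots take turns in a debate. Assumes the pre-1.0 openai package
# and an OPENAI_API_KEY environment variable; personas and model name
# are illustrative.
import openai

PERSONAS = {
    "Keynes": "You are John Maynard Keynes. Argue for active fiscal policy.",
    "Hayek": "You are Friedrich Hayek. Argue for decentralized markets "
             "and against fine-tuning the economy.",
}

def reply(speaker, transcript):
    """Ask the model for the speaker's next line, given the debate so far."""
    messages = [{"role": "system", "content": PERSONAS[speaker]}]
    messages += [{"role": "user", "content": line} for line in transcript]
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # any chat model would do here
        messages=messages,
    )
    return response.choices[0].message.content

transcript = ["Moderator: Should governments spend their way out of recessions?"]
for turn in range(4):  # four exchanges, alternating speakers
    speaker = "Keynes" if turn % 2 == 0 else "Hayek"
    transcript.append(f"{speaker}: {reply(speaker, transcript)}")
    print(transcript[-1])
```

The interesting design work is all in the system prompts: how much biographical and doctrinal detail you feed the personas determines whether the exchange feels like Keynes and Hayek or like two generic pundits.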
In another pointer from the Zvi, Max Tegmark writes,
I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do.
Let’s restate the problem: come up with a task that some humans can do but that an AI will not be able to do in this century.
The AI skeptics, like myself, do not win by saying that an AI will never be able to fly into the center of the sun. Humans cannot do that.
On the other hand, the AI doomers do not win by raising some remote possibility and saying, “Haha! You can’t say that would never happen.” Let’s replace “never” with “in this century.”
Here are some tasks that humans can do that I am skeptical an AI will be able to do this century: describe how a person smells; start a dance craze; survive for three months with no electrical energy source; come away from a meditation retreat with new insights; use mushrooms or LSD to attain altered consciousness; start a gang war.
a fast-takeoff scenario requires an AI that is both able to learn from data and choose what data to collect. We’ve certainly seen massive progress on the former, but I claim that we’ve seen almost no progress on the latter.
Pointer from Tyler Cowen.
I have trouble understanding his essay. Perhaps he is saying that current AI models leave some spaces unexplored. In chess terms, there might be a position that is legally possible to reach but that no human has ever reached. If you let two AIs play against each other enough times, they might explore that space. But an AI relying only on the existing database of games will not. A general AI will need a way to get into unexplored spaces that are worth exploring without wasting effort on spaces that are not (say, chess positions that cannot legally arise). That is not a task that researchers are close to figuring out.
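A toy illustration of that distinction, using the python-chess library (my example, not Buckman’s): blind self-play trivially wanders into positions that a fixed database of games never reaches. The “database” below is a stand-in; a real one would hold millions of positions.

```python
# Toy sketch: random self-play reaches board positions absent from a
# fixed database of games. Uses the python-chess package; the
# "database" here is a stand-in for a real collection of human games.
import random
import chess

def random_selfplay_positions(games=100, max_moves=40):
    """Collect piece-placement strings (board FENs) from random self-play."""
    seen = set()
    for _ in range(games):
        board = chess.Board()
        for _ in range(max_moves):
            if board.is_game_over():
                break
            board.push(random.choice(list(board.legal_moves)))
            seen.add(board.board_fen())
    return seen

human_database = {chess.Board().board_fen()}  # stand-in: just the starting position
novel = random_selfplay_positions() - human_database
print(f"Self-play reached {len(novel)} positions not in the database.")
```

Note what the sketch does not do: it says nothing about which of those novel positions matter. Random exploration is cheap; directed exploration, knowing which unexplored spaces are worth the effort, is the open problem.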
if you can’t adequately model a phenomenon mathematically, you can’t duplicate it with AI. Full Stop. And the reason we can’t adequately model human intelligence is that the underlying neural networks are unpredictably complex. Much of the current panic about an imminent AI apocalypse results from a failure to appreciate this fact.
I think it is possible to celebrate recent progress while appreciating how far we still have to go. Imagine that getting to an artificial general intelligence is a journey of 100 steps. I think it would be generous to say that the developments of the past twelve months have taken us from step 2 to step 3. But people are writing as if we have gone from step 40 to step 60.
Razib Khan interviews our friend Timothy B. Lee. I mostly agree with Lee’s anti-doomer stance.
In an interview with Bari Weiss, Sam Altman says,
instead say, “Is software going to help us create better,” or “Is software going to help us do menial tasks better, or is it going to help us do science better?” And the answer, of course, is all of those things. If we understand AI as just really advanced software, which I think is the right way to do it, then the answers may be a little less mysterious.
This period of AI hype is among the most intellectually irresponsible and wildly conformist that I’ve ever seen.
It’s not just AI. Commentary in general serves to exaggerate the significance of near-term issues. Every time the Federal Open Market Committee meets, journalists write as if the future of the economy hangs in the balance. No one ever files a story, “The Federal Reserve meeting tomorrow isn’t going to be a big deal one way or the other.” No one ever files a story, “The next Presidential election is going to be uninteresting and inconsequential.”
Note that Freddie will be a guest for live Zoom subscribers on Monday evening, May 15.
Walled gardens of user-generated data are going to be extremely valuable. Those datasets could be social networks like Reddit or LinkedIn, they could be music libraries, they could be collections of books. The companies that control that intellectual property and have the right to license it will be able to make billions in many cases. I would guess that the value of certain social network data will be worth more than the current enterprise value of some of the businesses.
With these models trying to scarf up data in order to be able to imitate folks and create mash-ups, all sorts of questions are going to arise about who owns what.
Dance craze and gang war seem likeliest.
And that was a sentence I never expected to write.
Re AGI: I tend to want the answer to a slightly different question: what purely mental task can someone with an IQ of 80 do that AI won’t do in the next decade?
Already, it’s quite difficult to find something for that list *right now*. Of course there are certainly mental things people with IQs of 150 can do that AI can’t, and may not do for a while. And there are physical things – and things that require specific biological components, like smelling or feeling the effect of drugs – that people with an IQ of 80 can do that AI can’t.
But GPT-4-class machines are already “smarter” (in whatever way you want to define that term) than the lower 1/3 of the population *in every possible way*. They’re also “smarter” than people with IQs of 150 in *some* ways. But by focusing entirely on the things geniuses can do that AI can’t, we’re missing the most important development of the past 6 months – fully human-level intelligence.