In My Tribe

GPT/LLM links, 5/9

training call centers? LLMs near asymptote? Jacob Buckman says FOOM is far; Frederick R. Prete agrees; Tim B. Lee and Razib Khan; Sam Altman and Bari Weiss; Freddie deBoer on hype; Lee Bressler

Arnold Kling
May 9, 2023

A paper by Brynjolfsson and others has received some notice. Noah Smith writes,

In other words, for customer support people who can already do their jobs well, AI provides little or no benefit. But for those who are normally pretty bad at their jobs, or are new on the job, the AI tool boosts their skills immensely.

If I’m running a call center, I’m not going to be using AI to level up the weakest employees. It can’t be that hard to train the AI itself to get on the line and in an Indian or Filipino accent say, “I’m George. What can I do for you today?”

Lewis White reports,

Speaking to an audience at the Massachusetts Institute of Technology, Altman explained that AI development is already reaching a massive wall. While AI improvements have resulted in huge quality boosts to ChatGPT, Stable Diffusion and more, they are reaching their end.

Pointer from Zvi Mowshowitz. This would not surprise me. My intuition has been that chatbots will reach an asymptote that is well shy of Artificial General Intelligence.

But I still think they will be really important. Even if they stop improving tomorrow (and I’m guessing that they actually will continue to improve, maybe not as fast as they have in the past few years), there will be opportunities to combine them with other software in powerful new ways.

Most of all, I think that there are big gains from figuring out how to use the new AI tools. I think right now people are way overestimating their value as research assistants and way underestimating their value as conversation simulators. If Russ Roberts and John Papola can entertain millions with a staged video of Keynes and Hayek debating in rap format, imagine what you can do by allowing people to interact with simulated Keynes and Hayek.

In another pointer from the Zvi, Max Tegmark writes,

I invite carbon chauvinists to stop moving the goal posts and publicly predict which tasks AI will never be able to do.

Let’s restate the problem: come up with a task that some humans can do that an AI will not be able to do in this century.

The AI skeptics, like myself, do not win by saying that an AI will never be able to fly into the center of the sun. Humans cannot do that.

On the other hand, the AI doomers do not win by raising some remote possibility and saying, “Haha! You can’t say that would never happen.” Let’s replace “never” with “in this century.”

Here are some tasks that humans can do that I am skeptical an AI will be able to do this century: describe how a person smells; start a dance craze; survive for three months with no electrical energy source; come away from a meditation retreat with new insights; use mushrooms or LSD to attain altered consciousness; start a gang war.

Jacob Buckman writes,

a fast-takeoff scenario requires an AI that is both able to learn from data and choose what data to collect. We’ve certainly seen massive progress on the former, but I claim that we’ve seen almost no progress on the latter.

Pointer from Tyler Cowen.

I have trouble understanding his essay. Perhaps he is saying that current AI models leave some spaces unexplored. In terms of chess, there might be a position that is legally possible to arrive at but that no human has ever reached. If you let two AIs play against each other enough times, they might explore that space. But an AI alone, just using the existing database, will not explore it. A general AI will need a way to get into unexplored spaces that are worth exploring without wasting effort on spaces that are not worth exploring (chess positions that cannot be legally arrived at). That is not a task that researchers are close to figuring out.
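The self-play point can be illustrated with a toy sketch. This is my simplification, not Buckman's setup: tic-tac-toe instead of chess, and purely random "agents." Even so, repeated self-play quickly turns up legal positions that are absent from a small fixed "database" of previously played games.

```python
import random

def legal_moves(board):
    """Indices of the empty squares on a 9-character board string."""
    return [i for i, c in enumerate(board) if c == "."]

def winner(board):
    """Return 'X' or 'O' if a line is completed, else None."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def random_selfplay_game(rng):
    """Play one random-vs-random game; return every position visited."""
    board, player, visited = "." * 9, "X", []
    while winner(board) is None and "." in board:
        i = rng.choice(legal_moves(board))
        board = board[:i] + player + board[i + 1:]
        visited.append(board)
        player = "O" if player == "X" else "X"
    return visited

def explore(n_games, seed=0):
    """Collect the set of distinct positions seen across n_games of self-play."""
    rng = random.Random(seed)
    seen = set()
    for _ in range(n_games):
        seen.update(random_selfplay_game(rng))
    return seen

# A small fixed "database": positions from a handful of past games.
database = explore(5, seed=1)
# Extended self-play keeps reaching legal positions outside that database.
selfplay = explore(200, seed=2)
novel = selfplay - database
print(f"database: {len(database)} positions; self-play found {len(novel)} novel ones")
```

Every novel position here is legal by construction, since it was reached by playing legal moves from the start. The hard part the paragraph above points to, deciding which unexplored positions are *worth* reaching, is exactly what this random sketch does not attempt.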

Frederick R. Prete writes,

if you can’t adequately model a phenomenon mathematically, you can’t duplicate it with AI. Full Stop. And the reason we can’t adequately model human intelligence is that the underlying neural networks are unpredictably complex. Much of the current panic about an imminent AI apocalypse results from a failure to appreciate this fact.

I think it is possible to celebrate recent progress while appreciating how far we still have to go. Imagine that getting to an artificial general intelligence is a journey of 100 steps. I think it would be generous to say that the developments of the past twelve months have taken us from step 2 to step 3. But people are writing as if we have gone from step 40 to step 60.

Razib Khan interviews our friend Timothy B. Lee. I mostly agree with Lee’s anti-doomer stance.

In an interview with Bari Weiss, Sam Altman says,

instead say, “Is software going to help us create better,” or “Is software going to help us do menial tasks better, or is it going to help us do science better?” And the answer, of course, is all of those things. If we understand AI as just really advanced software, which I think is the right way to do it, then the answers may be a little less mysterious. 

Freddie deBoer writes,

This period of AI hype is among the most intellectually irresponsible and wildly conformist that I’ve ever seen.

It’s not just AI. Commentary in general serves to exaggerate the significance of near-term issues. Every time the Fed Open Market Committee meets, journalists write as if the future of the economy hangs in the balance. No one ever files a story, “The Federal Reserve meeting tomorrow isn’t going to be a big deal one way or the other.” No one ever files a story “The next Presidential election is going to be uninteresting and inconsequential.”

Note that Freddie will be a guest for live Zoom subscribers on Monday evening, May 15.

Lee Bressler writes,

Walled gardens of user-generated data are going to be extremely valuable.  Those datasets could be social networks like Reddit or LinkedIn, they could be music libraries, they could be collections of books.  The companies that control that intellectual property and have the right to license it will be able to make billions in many cases.  I would guess that the value of certain social network data will be worth more than the current enterprise value of some of the businesses.

With these models trying to scarf up data in order to be able to imitate folks and create mash-ups, all sorts of questions are going to arise about who owns what.

Substacks referenced above:

Razib Khan's Unsupervised Learning: "Timothy B. Lee: don't rage against the machine" (podcast, 49 min)
As Clay Awakens (Jacob Buckman): "We Aren't Close To Creating A Rapidly Self-Improving AI"
Don't Worry About the Vase (Zvi Mowshowitz): "AI #9: The Merge and the Million Tokens"
Noahpinion (Noah Smith): "Four interesting econ stories"
7 Comments

Maximum Liberty
May 9, 2023

Dance craze and gang war seem likeliest.

And that was a sentence I never expected to write.

JG
May 10, 2023

Re AGI: I tend to want the answer to a slightly different question: what purely mental task can someone with an IQ of 80 do that AI won’t do in the next decade.

Already, it’s quite difficult to find something for that list *right now*. Of course there are certainly mental things people with IQs of 150 can do that AI can’t, and may not do for a while. And there are physical things – and things that require specific biological components, like smelling or feeling the effect of drugs – that people with an IQ of 80 can do that AI can’t.

But GPT-4 class machines are already “smarter” (in whatever way you want to define that term) than the lower 1/3 of the population *in every possible way*. They’re also “smarter” than people with IQs of 150 in *some* ways. But by focusing entirely on the things geniuses can do that AI can’t, we’re missing the most important development of the past 6 months – fully human level intelligence.
