All the people I know who have low estimates for AI disruption follow the same pattern. When they wanted to test whether the hype was real and put AI to the test, they figured they could best judge a case in which they themselves had the most domain expertise, usually at least the 999th millentile of the overall population. That doesn't mean they are super smart people; specialization in a highly diverse market means that specialists in any particular subject - even ones where the cognitive threshold is low - are always a tiny minority.
Well, what they show me is that the AI can only operate at the 997th or 998th millentile, which from their perspective is not impressive at all. It's apparently very difficult for a 995th millentile person to explain to these experts, "Actually, wow, from my perspective that's pretty darn impressive, and given that it's doing it basically for free compared to what it would cost for me to do it, and getting better fast, kind of scary" - let alone how impressive and scary it would seem to an average or below-average person.
Arnold, I've been involved with Agent GPT for a while. An agent LLM is really not meant for one-off tasks, in my experience and opinion. Better to give it something like: 'Scan recently released books weekly for things I might like; recommend one to me; provide key passages or reviews that gave you the sense I might or might not like it; ask my opinion of the materials, and again of the book if I choose to read it. Once weekly, revisit the books from the past year to recommend another with a revised idea of my taste. Watch for my tastes to change somewhat over time. Consider how long it will take me to read each book, and possibly suggest conversation partners or correspondents who might also appreciate it.'
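For concreteness, a standing task like that is really just a scheduled loop with persistent memory. Here is a minimal sketch of that shape in Python; `ask_llm` and `fetch_new_releases` are hypothetical stand-ins for whatever model API and book feed you actually wire up, not any real service:

```python
import json
from pathlib import Path

STATE = Path("book_agent_state.json")  # persistent memory of taste and past picks

def weekly_book_pass(ask_llm, fetch_new_releases):
    """One weekly pass of the standing task described above.
    ask_llm(prompt) -> str and fetch_new_releases() -> list[dict]
    are hypothetical stand-ins, not a real API."""
    state = json.loads(STATE.read_text()) if STATE.exists() else {
        "taste_profile": "unknown; learn from my feedback",
        "history": [],  # past recommendations and my reactions
    }
    candidates = fetch_new_releases()
    pick = ask_llm(
        "Given my taste profile and history, recommend ONE of these books,\n"
        "with key passages or reviews explaining why I might (not) like it.\n"
        f"Taste: {state['taste_profile']}\n"
        f"History: {json.dumps(state['history'][-20:])}\n"
        f"Candidates: {json.dumps(candidates)}"
    )
    print(pick)
    my_reaction = input("Your opinion of this pick? ")  # feed my feedback back in
    state["history"].append({"pick": pick, "reaction": my_reaction})
    state["taste_profile"] = ask_llm(
        "Revise this taste profile given the new reaction, allowing for drift:\n"
        f"{state['taste_profile']}\nReaction: {my_reaction}"
    )
    STATE.write_text(json.dumps(state))
```

Run something like this from a weekly scheduler; the point is the persistent state and the feedback loop, not any particular model.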
Perplexity.ai is now offering a US-hosted DeepSeek, FYI.
My theory of learning and living in the world basically amounts to "motivation wins." As impressive as the LLMs are, I still haven't found them to be particularly effective at stimulating or amplifying motivation. This is not to undersell the possibilities; there seems to be a lot of potential for these systems to further expose, if not fully eliminate, many of the inefficiencies that frustrate our work in the symbolic world. But take Arnold's request: what if Claude could make good book recommendations? Has he already cleared his backlog of books he thinks would be worth reading? Are Noah Smith's or Tyler Cowen's recommendations insufficiently inspiring? What would it take to "believe" Claude more than your friends or present intellectual heroes? I think it is possible that we could get there, but if we do, I don't think the Arnold who wants to trust Claude would much care for the Arnold who actually trusted Claude. That is the possible disruption that concerns me the most.
Noah's anti-cope note is excellent: “All right, but apart from answering exam questions, reviewing legal contracts, solving math problems, writing poetry, responding with empathy and coming up with new ideas, what has AI ever done better than humans?”
I haven't needed anybody to do anything digital for me for years -- but my bosses paid me to do a lot of stuff. I'm pretty sure most of the analysis I did could be done now by aiBots. All "remote work" could be done by aiBots.
All middle management can be done by ai -- maybe that will be a huge game changer when some company implements a four-level model: a CEO and 5-15 VPs, each with 5-15 (-50? -100?) ai-Directors, each with 5-15 ai-Managers who manage 3-20 person (+ai-assist) teams to do the work. And humans to ensure the work is good, at first. IBM could try this and save a ton of money.
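To get a feel for the fan-out, here is a back-of-the-envelope sketch in Python; the numbers are just illustrative midpoints picked from the ranges above, not anyone's actual plan:

```python
# Back-of-the-envelope headcount for the four-level model above.
# All fan-out numbers are illustrative midpoints from the ranges in the comment.
vps = 10                       # CEO & 5-15 VPs
ai_directors_per_vp = 10       # 5-15 (or more) ai-Directors per VP
ai_managers_per_director = 10  # 5-15 ai-Managers per ai-Director
humans_per_team = 10           # 3-20 person (+ai-assist) teams

ai_directors = vps * ai_directors_per_vp               # 100
ai_managers = ai_directors * ai_managers_per_director  # 1,000
human_workers = ai_managers * humans_per_team          # 10,000

print(f"Human executives: {1 + vps}")                   # CEO + VPs = 11
print(f"AI middle managers: {ai_directors + ai_managers}")  # 1,100
print(f"Human workers managed: {human_workers}")            # 10,000
```

At those midpoints, eleven human executives sit over about 1,100 AI middle managers and 10,000 workers - which is where the "ton of money" would come from.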
Sam Altman is saying what I've believed since ChatGPT came out: "Prediction: AI will cause the price of work that can happen in front of a computer to decrease much faster than the price of work that happens in the physical world.
This is the opposite of what most people (including me) expected, and will have strange effects."
Humans haven't had "progress" where above-average-intelligence workers were replaced first on a massive scale. It could dominate all the other Trump-era changes in the next 4 years. Everybody will have a copy of the smartest guy in the room doing their little tasks; and bosses will have it doing the work they used to pay people to do.
Adoption, once beneficial use is more proven, could be as fast as the upgrade from mobile phones to smartphones: a few years, once a functioning killer app is done. Slower (but more sure?) if it spreads just by early adopters getting the smart ai-assist first.
On the human side, will there be ai-Morlocks for human Eloi (H.G. Wells), who do no work? Or will condos on Earth look like the spaceship from WALL-E, full of pudgy soft folk on vacation all their lives?
I left a similar comment on Noah Carl's essay, but in cases like legal work, I don't see an imminent future where AI upends professions protected by occupational licensing, because the credential itself protects practitioners from replacement. The law will continue to require lawyers to do legal work themselves (even if an AI does much of it in reality), and it will take decades before people trust AI systems to do high-consequence work like writing legal contracts without a credentialed lawyer reviewing it.
The internet had a similar effect to AI in this regard, but it democratized information, not actual results. Someone smart enough to attend an elite law school could train themselves to pass the bar with the material available online. Still, you wouldn't do this in almost any situation, because the credential carries so much weight in and of itself.
“I think that this will take a while to play out.” And the longer it takes, the more easily society will adjust. The prospect is exciting, but, as an old person, I expect not to face much quotidian disruption.
What I want is an operator / agent / servant / slave to digitally do my work for me, with mostly just me telling it what I want. But now being retired, I don’t have so much real work.
Make me a better person, without much work or lifestyle change. Effortlessly learn Slovak grammar. Sing better (karaoke).
On an ai digi-asst., I could use a good maker of small programs, and especially an AGI-guru to tell me the best programs to use to do what I want. Like reviewing my backup files that are dups with different names, or just in different folders. Things I could do myself that I don't, because I'd rather read and comment on blogs.
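As it happens, the dup-finding task is exactly the kind of small program such an assistant could spit out. A minimal sketch in Python, assuming you just point it at a backup root (the path below is a placeholder):

```python
import hashlib
import os
from collections import defaultdict

def find_duplicates(root):
    """Group files by content hash, so dups with different names
    or in different folders land in the same bucket."""
    by_hash = defaultdict(list)
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            h = hashlib.sha256()
            try:
                with open(path, "rb") as f:
                    for chunk in iter(lambda: f.read(1 << 20), b""):
                        h.update(chunk)
            except OSError:
                continue  # unreadable file; skip it
            by_hash[h.hexdigest()].append(path)
    # Keep only hashes that occur more than once, i.e. actual dups.
    return {k: v for k, v in by_hash.items() if len(v) > 1}

# Placeholder path; point it at your actual backup root.
for digest, paths in find_duplicates(os.path.expanduser("~/backups")).items():
    print(digest[:12], *paths, sep="\n  ")
```

A real version would group files by size first to avoid hashing everything, but this is the shape of it.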
Regarding book recommendations, Amazon does this for its customers. I don't know what methods they are using, but they do come up with suggestions of books that are similar in overall theme to what was already purchased. They don't seem to recommend books that bear on the same key ideas but sit in a different general theme, though that is just a quick impression on my part.
Like you, I’ve not been overly impressed with AI agent use cases so far. (“Go shopping for x…”) However, Gemini Deep Research is intriguing enough that I want to try it, after seeing Zvi’s and Mollick’s reviews.
Re Noah Carl's point about conservative cope about AI: the cope seems to be a thing on the left, too. Here is Nate Silver writing on the topic: https://www.natesilver.net/p/its-time-to-come-to-grips-with-ai
"Without the peronal experience of relationships such as parenting, friendship, team play, labor solidarity etc., an individual can hardly realize their own humanity."
It seems to me that the phrase "their own humanity" is doing a lot of work here. What does it mean? Are those without such things "not human"? Close but not quite human? Lesser human?
It seems to me that you are using that phrase to make a simple moral judgement: people who don't have those things, or who have them in ways you consider quantitatively or qualitatively deficient, are not as good. Maybe not good at all. Wastes of air. Insects.
I think you are on firmer ground if you argue that people will feel bad in such an existence: people will choose what is easier but will wind up dissatisfied. But that makes it an empirical question, and it inevitably leads to other questions: was there ever a time when people felt satisfied? Can there ever be? To pull it back to the beginning, is there some way to say, "no, they didn't feel satisfied, but they were fully human"?
It sounds like the famous (or rather, infamous) Romanian orphanages where babies were basically just left alone in cribs. The results were not pretty.
I confess I was thinking more about non-babies voluntarily choosing screens over "friendship, team play, labor solidarity etc." Or perhaps more accurately, choosing screens without really thinking about the other things they could be doing, the other experiences they could be having. Or thinking about them only in a detached "it might be nice but not worth the effort" way.