10 Comments

All the people I know who have low estimates for AI disruption follow the same pattern. When they wanted to see whether the hype was real, they put AI to the test on the case they felt best able to judge: the one in which they themselves had the most domain expertise, usually at least the 999th millentile of the overall population. That doesn't mean they are super smart people; specialization in a highly diverse market means that specialists in any particular subject - even one where the cognitive threshold is low - are always a tiny minority.

Well, what they show me is that the AI can only operate at the 997th or 998th millentile, which from their perspective is not impressive at all. It's apparently very difficult for a 995th-millentile person to explain to these experts, "Actually, wow, from my perspective that's pretty darn impressive - and given that it's doing it basically for free compared to what it would cost for me to do it, and getting better fast, kind of scary" - let alone how impressive and scary it would seem to an average or lower-than-average person.


Arnold, I've been involved with Agent GPT for a while. An agent LLM is really not meant for one-off tasks, in my experience and opinion. Better to give it something like: 'Scan recently released books weekly for things I might like, recommend one to me, provide the key passages or reviews that gave you the sense I might or might not like it, ask my opinion of those materials, and again of the book if I choose to read it. Once a week, revisit the books from the past year to recommend another with a revised idea of my taste. Watch for my tastes to change somewhat over time. Consider how long it will take me to read the book, and possibly suggest conversation partners or correspondents who might also appreciate each book.'


Perplexity.ai is now offering a US-hosted DeepSeek, FYI.


My theory of learning and living in the world basically amounts to "motivation wins." As impressive as the LLMs are, I still haven't found them to be particularly effective at stimulating or amplifying motivation. This is not to undersell the possibilities: there seems to be a lot of potential for these systems to further expose, if not fully eliminate, many of the inefficiencies that frustrate our work in the symbolic world. But take Arnold's request: what if Claude could make good book recommendations? Has he already cleared his backlog of books he thinks would be worth reading? Are Noah Smith's or Tyler Cowen's recommendations insufficiently inspiring? What would it take to "believe" Claude more than your friends or your present intellectual heroes? I think it is possible that we could get there, but if we do, I don't think the Arnold who wants to trust Claude would much care for the Arnold who actually trusted Claude. That is the possible disruption that concerns me the most.


Noah Carl observes:

“Another negative consequence of AI is the loss of meaning it may engender. Bo Winegard and I already discussed this extensively in our essay responding to Steven Pinker. The gist of our argument is that humans don’t just value the products of our intellect; we also value the process of applying our intellect. So far from enhancing our well-being, a world in which future civilisational advancements are largely automated could give rise to profound ennui.

I haven’t even discussed the negative consequences AI could have for human relationships.”

This certainly seems to be fertile ground for exploration.

Initially, one might be tempted to dismiss Carl’s concern. After all, Isaiah Berlin wrote of “the complexity and insolubility of the central problems,” which are not going away with or without AI. And as Hannah Arendt wrote, “Each new generation, every new human being, as he becomes conscious of being inserted between an infinite past and an infinite future, must discover and ploddingly pave anew the path of thought.” AI can of course never substitute for this most personal and inward of journeys.

And yet, as Carl may be intimating, the personal is also social. Without the personal experience of relationships such as parenting, friendship, team play, and labor solidarity, an individual can hardly realize their own humanity. Will the powers that be attempt to substitute AI for such social experience? Take Arendt as an example. Much of what made her the unique individual she was probably came about because her mother, Martha Cohn Arendt, raised Hannah in a manner inspired by Goethe - as Martha relates in her notebook Unser Kind - striving to instill in her self-discipline, the constructive harnessing of passion, and the moral obligation to take responsibility for others. In a year or two, nothing will be simpler than having a substitute AI robot parent perform as a Mother Martha. But what value would the simulacrum of humanity it produced have? Fahrenheit 451 is still popular for a reason: Granger and his group must memorize literature to keep it alive, and putting the world’s literature on a thumb drive does not solve the problem.

Bakunin takes on a fresh relevance in these musings: “The inherent principles of human existence are summed up in the single law of solidarity. This is the golden rule of humanity, and may be formulated thus: no person can recognize or realize his or her own humanity except by recognizing it in others and so cooperating for its realization by each and all. No man can emancipate himself save by emancipating with him all the men about him.” Thus, the displacement of social relationships by AI will impoverish as well as enrich. Those concerned about their own humanity must be concerned about the humanity of their friends and associates. The formation of small groups of family and friends capable of sustaining themselves outside of AI control may well determine whether humanity progresses or devolves into an insect-like existence.


“I think that this will take a while to play out.” And the longer it takes, the more easily society will adjust. The prospect is exciting, but, as an old person, I expect not to face much quotidian disruption.


What I want is an operator / agent / servant / slave to digitally do my work for me, with mostly just me telling it what I want. But now, being retired, I don’t have so much real work.

Make me a better person, without much work or lifestyle change. Effortlessly learn Slovak grammar. Sing better (karaoke).

On an AI digital assistant, I could use a good maker of small programs, and especially an AGI guru to tell me the best programs to use to do what I want. Like reviewing my backup files for duplicates that have different names, or that just sit in different folders. Things I could do myself that I don’t, because I’d rather read and comment on blogs.
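
Something like this minimal Python sketch is what I have in mind; the single root folder passed on the command line and the SHA-256 hashing are just illustrative assumptions, not a claim about how any particular assistant would do it. It groups files by content, so copies with different names, or sitting in different folders, show up together.

# A minimal sketch, not a finished tool: find files that are byte-for-byte
# duplicates even when their names or folders differ. Assumes all the backups
# live under one root directory given on the command line.
import hashlib
import sys
from collections import defaultdict
from pathlib import Path

def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    # Hash the file in chunks so large backup files do not exhaust memory.
    h = hashlib.sha256()
    with path.open("rb") as f:
        while chunk := f.read(chunk_size):
            h.update(chunk)
    return h.hexdigest()

def find_duplicates(root: Path) -> dict[str, list[Path]]:
    # Group every regular file under `root` by its content hash.
    groups: dict[str, list[Path]] = defaultdict(list)
    for path in root.rglob("*"):
        if path.is_file():
            groups[file_digest(path)].append(path)
    # Keep only hashes that occur more than once, i.e. actual duplicates.
    return {digest: paths for digest, paths in groups.items() if len(paths) > 1}

if __name__ == "__main__":
    root = Path(sys.argv[1]) if len(sys.argv) > 1 else Path(".")
    for digest, paths in find_duplicates(root).items():
        print("Duplicate set (" + digest[:12] + "...):")
        for p in paths:
            print("  " + str(p))

Saved as, say, find_dups.py (a made-up name), it runs as "python find_dups.py /path/to/backups" and prints each set of identical files; deciding which copy to keep stays with me (or the agent).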


Regarding book recommendations, Amazon does this for its customers. I don't know what methods they use, but they do come up with suggestions of books that are similar in overall theme to what was already purchased. They don't seem to recommend books that bear on the key ideas covered but are on a different general theme - though that is just a quick impression on my part.


Like you, I’ve not been overly impressed with AI agent use cases so far. (“Go shopping for x…”) However, Gemini Deep Research is intriguing enough that I want to try it after seeing Zvi’s and Mollick’s reviews.


Re Noah Carl's point about conservative cope about AI: the cope seems to be a thing on the left, too. Here is Nate Silver writing on the topic: https://www.natesilver.net/p/its-time-to-come-to-grips-with-ai
