LLM Links, 1/17
The Zvi on Tyler Cowen on AI and productivity; Ethan Mollick on AI hype; Bill Gates on what humans can still do; Google's Daily Listen
[Tyler Cowen] assumes that intelligence can’t be used to convince us to overcome all these regulatory barriers and bottlenecks. Whereas I would expect that raising the intelligence baseline greatly would make it clear to everyone involved how painful our poor decisions were, and also enable improved forms of discourse and negotiation and cooperation and coordination, and also greatly favor those that embrace it over those that don’t, and generally allow us to take down barriers. Tyler would presumably agree that if we were to tear down the regulatory state in the places it was holding us back, that alone would be worth far more than his 0.5% of yearly GDP growth, even with no other innovation or AI.
James Cham describes the opposing forces as the California tech folks, who want to do stuff, vs. the Yale lawyers, who want to stop stuff from being done. Is it necessarily the case that AI will defeat Yale?
Zvi also says,
while I do think the diffusion experts are pointing to real issues that will importantly slow down adaptation, and indeed we are seeing what for many is depressingly slow adaptation, they won’t slow it down all that much, because this is fundamentally different. AI and especially workers ‘adapt themselves’ to a large extent, the intelligence and awareness involved is in the technology itself, and it is digital and we have a ubiquitous digital infrastructure we didn’t have until recently.
It is also way too valuable a technology, even right out of the gate on your first day, and you will start to be forced to interact with it whether you like it or not, in ways that will make it very difficult and painful to ignore. And the places it is most valuable will move very quickly. And remember, LLMs will get a lot better.
My own experience illustrates slow diffusion. I have a hard time figuring out which new tools to use, and I feel like I am way behind the frontier in terms of applying AI. But all of my friends are spending much less time than I spend trying to keep up with what is happening. The “not evenly distributed” part of this future seems to me like a big deal.
even assuming researchers are right about reaching AGI in the next year or two, they are likely overestimating the speed at which humans can adopt and adjust to a technology. Changes to organizations take a long time. Changes to systems of work, life, and education are slower still. And technologies need to find specific uses that matter in the world, which is itself a slow process. We could have AGI right now and most people wouldn’t notice (indeed, some observers have suggested that has already happened, arguing that the latest AI models like Claude 3.5 are effectively AGI).
Bill Gates, the co-founder of Microsoft, is one of the most vocal proponents of AI’s potential to transform the job market. He envisions a future where automation takes over routine tasks, leaving humans to engage in more creative and meaningful work.
With all due respect to Bill Gates, I am not sure he grasps the new AI. And I should point out that I launched a business on the Web while he was still an Internet skeptic.
Anyway, I do not think that the routine vs. creative distinction is what will drive what is done with AI vs. what humans do. I think that everyone who says “AI will not be creative—that is for humans” could turn out to be wrong.
The safest jobs are those protected by regulation. Part of where I disagree with Zvi (above) is that I think that those regulatory protections will remain in place or even broaden going forward. Did you read how the dockworkers demanded that no new automation be undertaken at ports?
A new app from Google:
An experimental audio show from Discover, made just for you with AI. Each day, you’ll get a quick update on the things you care about and follow, curated from sources across the web. You’ll also get links to related stories so you can easily explore and learn more. This feature is still in early testing, so please share your feedback to help improve and shape the future of the show.
Pointer from Rowan Cheung. This is a typical example of something that might be really great for me, but I don’t have the time/motivation to try it, at least now.
I need an AI that automates the process for enabling me to adopt AI.
substacks referenced above:
As always, there was a lot to think about in today's post, but for me, this was the main takeaway: "The safest jobs are those protected by regulation." I believe this prediction is correct and worthy of its own post.
“I have a hard time figuring out which new tools to use, and I feel like I am way behind the frontier in terms of applying AI.” Fascinating comment. I’m curious what problems you’re trying to solve with AI other than the Grader and Mentor. I’m not sure what you’re up to, but I’m curious whether you’re keeping track of your progress in a way that would tell you which direction to take. It’s possible that no off-the-shelf tools will satisfy you, correct? In that case you might consider building your own tool. How difficult would this be?
I’m not sure that anyone cares, but I have relatively little interest in AI compared to Arnold. It’s interesting how some people are interested in certain technologies and others are not. In my case, I suppose this lack of interest is because I have no big nagging problem that I see AI solving. This is probably because I’m so ignorant of the technology. In fact, I might be more of an anti-AI guy in that I like to do non-routine things myself. Sure, I could use an AI lawnmower, but that takes work away from my kids. Sure, I would like to have a driverless car, if it were the same price as a regular car. The robotic vacuum we own is nice, but still sort of clumsy and slow. It’s really not difficult to vacuum, and when we need to vacuum, like after dinner, we just use the dumb vacuum.
I suppose that if I had an AI clone of Milton Friedman, I could use that to persuade people to read and understand the First Amendment, but Milton wasn’t all that successful at that, and I’m not certain why. I suppose an AI version of Milton could be harnessed to teach millions of children about the First Amendment, but I doubt many kids would be interested. I suppose I could try to create a video game that simultaneously taught about the First and Tenth Amendment. Are there video games that use AI?