GPT/LLM links
Ethan Mollick interviewed; Will economic growth accelerate?; John Luttig on investment uncertainties; The Zvi makes several points; Russ Roberts and Pmarca
In a podcast, Ethan Mollick is interviewed. By now, you should know that this is self-recommending.
Recall that I linked to a debate about the potential for rapid economic growth due to AI. There, Tamay Besiroglu says,
I'm referring to a rate of growth that far surpasses anything we’ve previously witnessed — a minimum of tenfold the annual growth rate observed over the past century, sustained for at least a decade.
I am inclined to believe that such explosive growth is not just a possibility, but a probable outcome when we transition to an era where AI automates the vast majority of tasks currently performed by humans. To put this in numbers, I’d currently assign a 65% chance of this happening.
His debate opponent gives this a 10 to 20 percent chance, which still seems high to me.
In a separate article, Arjun Ramani and Zhengdong Wang write,
We think AI can be “transformative” in the same way the internet was, raising productivity and changing habits. But many daunting hurdles lie on the way to the accelerating growth rates predicted by some…
Here is a brief outline of our argument:
The transformational potential of AI is constrained by its hardest problems
Despite rapid progress in some AI subfields, major technical hurdles remain
Even if technical AI progress continues, social and economic hurdles may limit its impact
I don’t believe that we should think of AI as being on a path to match or outdo humans. It is on its own path: human culture and AI will co-evolve. I share Ramani and Wang’s perspective. Think of AI as another generation of software, like the Internet or the World Wide Web. It will have a significant effect, but that effect will stretch out over time.
Pointer from Tyler Cowen.
the AI frontier is being driven by a much smaller fraction of the ecosystem than former technological shifts like the internet and software… OpenAI and Anthropic combined employ fewer than 800 people.
…GPT-4 has immense latent productive value that has yet to be mined. Training on proprietary data, reducing inference costs, and building the “last mile” of UX will drive a step-function in model capabilities, unlocking enormous economic surplus.
…I think there are failures of imagination at the product ideation level: it is much easier to modify what’s already working than to think from scratch about what new opportunities exist.
Even with GPT-4 capabilities (not to mention subsequent models), there will be entirely new product form factors that won’t have the formula of incumbent software workflow + AI. They will require radically different user experiences to win customers. This is white space for new companies with embedded counter-positioning against incumbents…
If there aren’t many VC-investable opportunities, then VCs will have no economics in the most important platform shift of the last several decades…
During the early internet era of the mid-90s — and crypto in the early 2010s — very few people showed up in the first few years, but almost all of them made money.
AI seems like the inverse.
The Zvi makes several points. The interesting claim is that if the LLM can understand your core thesis well enough to summarize it correctly, then your idea was conventional, because otherwise the LLM would have substituted the nearest conventional thing, ignoring what you actually wrote or didn’t write.
…If it is safe to use LLM-generated summaries of your work, your work does not need to be read.
In another provocative comment, he writes,
Rather than using humans as a metaphor to help us understand AI, often it is more useful to use AI as a metaphor for understanding humans. What data are you (or someone else) being trained on? What is determining the human feedback driving a person’s reinforcement learning? What principles are they using to evaluate results and update heuristics? In what ways are they ‘vibing’ in various contexts, how do they respond to prompt engineering? Which of our systems are configured the way they are due to our severe compute and data limitations, or the original evolutionary training conditions?
Or as I wrote, Where Did You Get Your Political Preferences?
Marc Andreessen talks with Russ Roberts. Marc says,
what happens each time is a small group of people get something to work, and then the minute that happens, that’s like firing the starting gun that gets all of these other smart people to participate. And then all these other smart people basically come in, and they take a look at the technology, and they say, okay, here are the fourteen things that it’s not doing well yet, here are the eighteen problems that are preventing it from being widely adopted, and then they solve those problems. And so I think the rate of technological improvement from here is going to be very rapid
Yes. And about AI doomerism, he says,
you have a generation of extremely smart people who have kind of thought themselves into an apocalypse cult … that anthropomorphizes, and so it reads into machines things that aren’t there, and then it sort of applies from that, therefore, the end of the world
That is one of his milder comments about doomerism.
Later, he says,
what the rationalists have done is what the atheists generally do. They’re kind of like, okay, but we can do this better, or we think we can think from first principles. Like, everything we do in science and technology is based on thinking from first principles, and so therefore obviously we can do that in religion and culture and philosophy
…they would never admit this, and generally they will argue vigorously against it, but generally … they’re creating a sort of a fake version of Christianity
…there’s this risk of drifting off into cult territory, because if you’re ungrounded in your construction of new values, and you’re doing it from scratch, and you’re very thinly educated on how people have done this in the past, it’s just hard to see how it goes well
I recommend the entire podcast.
"a minimum of tenfold the annual growth rate observed over the past century, sustained for at least a decade"
That statement is pretty ridiculous.
"a minimum of tenfold the annual growth rate observed over the past century, sustained for at least a decade"
How does he intend to measure that growth? GDP? I understand AI optimists may think we can produce twice as many cars, devices, and pieces of furniture, but where are you going to put all those things? When are consumers going to use them? Are days going to get longer? Where is the energy for all those consumer goods coming from?
And as for services, when are we going to consume all of them? Are days going to get longer? Will AI produce a new drug that allows us to sleep only four hours a day?
"how AI will impact education"
Probably in the same way the Singapore method impacted it. Another fad.