LLM Links
Ethan Mollick on using LLMs to speed research; Christopher Mims calls a bubble; Leopold Aschenbrenner sees it...differently; Tim B. Lee on Substack's support chatbot
I have had papers that took nearly a decade from when I first started working on them until they were published in a journal. Top quality journals are built for this pace, and so are very ill-prepared for the flood of academic articles that AI is unleashing.
He talks about using LLMs to speed up the process, making it easier to write and review papers. But I think that the whole academic journal system is ready to be junked, with or without AI. Start from scratch to design new systems for sharing knowledge and evaluating researchers and their research. The problem is that professors who have succeeded under the current system would feel threatened by any change.
The rate of improvement for AIs is slowing, and there appear to be fewer applications than originally imagined for even the most capable of them. It is wildly expensive to build and run AI.
…spending on AI is probably getting ahead of itself in a way we last saw during the fiber-optic boom of the late 1990s—a boom that led to some of the biggest crashes of the first dot-com bubble.
I remain much more optimistic about LLMs. I just think that the timeline for getting widely-used applications is years, not months. Here are some ways I think that LLMs are already capable of changing our lives, but the actual implementations are going to take time.
I think that we will see a revolutionary increase in the deployment of robots within five years. I would bet more on specific-purpose robots than general humanoid robots, but I think we will see both.
I think that we will see LLM tutors/coaches within three years. But the Null Hypothesis warns that their impact on learning could be modest.
I think that over the next ten years, the PC form factor will fade out. Instead, LLMs will be embedded in wearables and in things that surround us.
Before long, the world will wake up. But right now, there are perhaps a few hundred people, most of them in San Francisco and the AI labs, that have situational awareness. Through whatever peculiar forces of fate, I have found myself amongst them. A few years ago, these people were derided as crazy—but they trusted the trendlines, which allowed them to correctly predict the AI advances of the past few years.
…AI progress won’t stop at human-level. Hundreds of millions of AGIs could automate AI research, compressing a decade of algorithmic progress (5+ OOMs) into ≤1 year. We would rapidly go from human-level to vastly superhuman AI systems. The power—and the peril—of superintelligence would be dramatic.
…I make the following claim: it is strikingly plausible that by 2027, models will be able to do the work of an AI researcher/engineer. That doesn’t require believing in sci-fi; it just requires believing in straight lines on a graph.
Pointer from Tyler Cowen. OOMs = orders of magnitude, so 5 OOMs is a factor of 10^5, or 100,000. “Straight lines” means straight lines on a logarithmic plot, i.e., exponential growth continuing at the same rate. If you instead expect only linear growth from here on, you should be siding with Mims.
An off-the-shelf language model knows a lot of general facts about the world, but it knows little to nothing about Substack’s software or users. Decagon uses a technique called retrieval augmented generation (RAG) to provide this kind of knowledge to a large language model, enabling it to serve as Substack’s customer support chatbot.
…In certain limited situations, the chatbot can take actions on the user’s behalf. For example, readers can ask the chatbot to cancel their paid subscription to a newsletter.
Don’t try doing that!
To me, customer support seems like the lowest hanging fruit out there for these models. If it takes more than a year to get picked, then you have an idea of how badly the adoption of AI will lag behind its capabilities.
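The RAG technique described above is simple at its core: retrieve the documents most relevant to the user's question, then stuff them into the model's prompt as context. Here is a minimal sketch, using a toy bag-of-words similarity as a stand-in for a real embedding model and a prompt-assembly step in place of an actual LLM call; the document snippets and function names are illustrative, not Decagon's actual implementation.

```python
# Minimal sketch of retrieval-augmented generation (RAG).
# Bag-of-words cosine similarity stands in for a real embedding model,
# and the assembled prompt stands in for a call to a language model.
from collections import Counter
import math

# A tiny "knowledge base" standing in for Substack-specific documentation.
DOCS = [
    "To cancel a paid subscription, open Settings and choose Subscriptions.",
    "Writers can enable comments for paid subscribers only.",
    "Substack sends new posts to subscribers by email.",
]

def vectorize(text):
    """Term counts for a text (a crude stand-in for an embedding vector)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query, docs):
    """Stuff the retrieved context into the prompt sent to the model."""
    context = "\n".join(retrieve(query, docs))
    return (f"Context:\n{context}\n\n"
            f"User question: {query}\n"
            f"Answer using only the context above.")

prompt = build_prompt("How do I cancel my paid subscription?", DOCS)
```

In a production system the retrieval step would use learned embeddings and a vector index rather than word counts, but the shape is the same: the off-the-shelf model supplies general language ability, and retrieval supplies the Substack-specific knowledge at query time.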
"We would rapidly go from human-level to vastly superhuman AI systems."
What does this mean?
Do we compare to an IQ of 100? Is it an IQ of 150? 180? 300? What does an IQ of 300 even mean?
It seems that AI's intelligence is already vastly superior in many ways. The big question is whether it will stay like Raymond in Rain Man or overcome its gaps.
I will stick with my metric: I will start believing in artificial general intelligence when one of these AIs solves a math problem that hasn't already been solved by human beings, and I am talking about a mathematical proof, not a brute calculation.