Experts who warn that artificial intelligence poses catastrophic risks on par with nuclear annihilation ignore the gradual, diffused nature of technological development. As I argued in my 2008 book, _The Venturesome Economy_, transformative technologies – from steam engines, airplanes, computers, mobile telephony, and the internet to antibiotics and mRNA vaccines – evolve through a protracted, massively multiplayer game that defies top-down command and control.
Another takeaway from Bhide’s earlier book that I think is relevant here is that consumers and businesses that adopt technology are important drivers of innovation. In order to make technology practical, users adapt it and steer the direction of technical progress.
What we are seeing right now in AI is rapid improvement in the technical capabilities of machine learning. But application development is not keeping pace. Bhide writes,
AI spans disparate techniques – such as machine learning, pattern recognition, and natural language processing – and has wide-ranging applications. Their common feature is mainly _aspirational_ – to go beyond mere calculation to more speculative yet useful inferences and interpretations.
I added emphasis to the word “aspirational.” The new techniques aspire to achieve definitive solutions to problems that previously were only imperfectly and partially solved. But will this occur?
Will the new AI finally be able to make real-time language translation possible? I would bet “yes.” Will it be able to overcome what I call the Null Hypothesis and transform education by enabling personal tutoring, realizing the fictional Illustrated Primer of Neal Stephenson’s _The Diamond Age_? Of that I am cautiously optimistic but much less certain.
Bhide writes,
AI developers have been at work for more than seven decades, quietly inserting AI into everything from digital cameras and scanners to smartphones, automatic-braking and fuel-injection systems in cars, special effects in movies, Google searches, digital communications, and social-media platforms. And, as with other technological advances, AI has long been put to military and criminal uses.
Yet AI advances have been gradual and uncertain…. The inaccuracy of 16 generations of professional dictation software (I bought the first in 1997) has repeatedly frustrated me.
Again, the high rate of technical improvement in AI over the past few years is undeniable. If software development were a sport, we would be witnessing new records being broken every week.
But the applications are way behind. Part of this is due to the Wayne Gretzky Principle. He famously said that in hockey you don’t skate to where the puck is; you skate to where it is going. If you are trying to build an application in the field of medical diagnostics, do you base your specifications on GPT-4 or on what you expect such models will look like in 2025? The Gretzky Principle says to choose the latter, but that means that you will not have a product this year.
Most innovations do not immediately arrive in ready-to-use form. Bhide writes,
As economic historian Nathan Rosenberg and many others have shown, transformative technologies do not suddenly appear out of the blue. Instead, meaningful advances require discovering and gradually overcoming many unanticipated problems.
In his own work, for example, Bhide tried to use the latest large language models to help write his most recent book.
whereas I found my 1990s Google searches to be invaluable timesavers, checking the accuracy of LLM responses made them productivity killers. Relying on them to help edit and illustrate my manuscript was also a waste of time. These experiences make me shudder to think about the buggy LLM-generated software being unleashed on the world.
I have said that the superpower of LLMs is that they make it easy to communicate with computers. But overcoming the models’ weaknesses, notably “hallucination,” will take time. And a lot of application-specific reinforcement learning may be required.
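To make that concrete, consider what even the simplest application-level guard against hallucination might look like. The sketch below is hypothetical (none of these names come from Bhide or from any particular library): a wrapper that refuses to return a model’s answer until it passes a domain-specific check.

```python
# A minimal sketch of a "hallucination guard," assuming a generic LLM
# client. All names here (answer_with_check, llm, verify) are
# hypothetical illustrations, not an existing library's API.
from typing import Callable, Optional

def answer_with_check(
    question: str,
    llm: Callable[[str], str],           # any function returning model text
    verify: Callable[[str, str], bool],  # application-specific accuracy check
    max_retries: int = 3,
) -> Optional[str]:
    """Return a model answer only if it survives a domain-specific check."""
    for _ in range(max_retries):
        candidate = llm(question)
        if verify(question, candidate):  # e.g., cross-check a trusted source
            return candidate
    return None  # escalate to a human rather than ship a hallucination
```

The hard part, of course, is the `verify` function. Writing a reliable checker for, say, medical diagnostics is precisely the slow, application-specific work that the raw capability benchmarks do not capture.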
Once developers iron out the applications, they will find that innovation diffuses slowly among businesses and consumers. Economic historian Paul David found that this process took decades in the case of electric motors.
What does this mean for the existential risk scenarios? If existential risk follows from technological innovation alone (as with the atomic bomb), then we should be paying attention to what the leading-edge engineers are achieving—the records that are falling in the sport of AI. But if existential risk will only come from how the technology gets applied, then we need to pay attention to what application developers and consumers are up to, and their process of adapting new technology is slower.
Both Bhide's article and this response are great examples of how much AI discussion is a (mostly accidental) motte-and-bailey fallacy. Because there's no strong definition of "AI" as a moniker, it can apply to almost anything, and by using the same term for two very different concepts, we can all get very confused! On the one hand, machine learning is really cool, allowing computers to (e.g.) generate semi-plausible text (motte), and on the other, we've all seen _The Matrix_ (bailey).
When Bhide "shudder[s] to think about the buggy LLM-generated software being unleashed on the world" because of under-trained developers "writing" code with ChatGPT copypasta, he's referring to the motte, and when he (along with many commenters here and elsewhere) refers to "existential risks to humanity," he's referring to the bailey. ChatGPT is in no way Agent Smith from _The Matrix_. It just has a similar name.
I definitely think that Silicon Valley types overestimate the rate at which mainstream businesses and people will adopt AI. I suspect that the timeframe will be something akin to that of cloud computing.