Perspective on AI
My essay in National Affairs
This essay in National Affairs lays out my current perspective on AI.
First, I think of AI not as artificial intelligence but as the next computer interface.
Large language models (LLMs) — AI models such as OpenAI's ChatGPT or Anthropic's Claude that are trained on enormous datasets of text and code — represent not AI per se, but rather the latest step in computing's steady march toward more natural human-machine interaction.
Second, I believe that computers can be creative and assist with human creativity.
The secret lies in what scientists call "the adjacent possible" — the realm of what's just within reach, the next logical combinations or steps that become conceivable once certain building blocks are in place. Human creativity has always operated in this space.
…The key to success in this new paradigm lies in what I call "meta-instruction" — the ability to articulate how the author is crafting the artifact. For example, an author might prompt AI with the phrase, "this character lacks self-awareness, so some of her own behavior comes to her as a surprise." Meta-instruction can guide an AI to produce work that matches the author's style and standards. An average writer with excellent meta-instruction skills can become spectacularly productive, while a talented writer who cannot articulate how he writes might struggle to achieve useful results from AI systems.
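To make the idea of meta-instruction concrete, here is a minimal sketch, assuming the OpenAI Python SDK, a placeholder model name, and illustrative prompt wording not drawn from the essay. The point is simply that the author's description of how the piece is being crafted travels alongside the request itself.

```python
# Hypothetical illustration of "meta-instruction": the author tells the model
# how the piece is being written, not just what to write.
# Assumes the OpenAI Python SDK and an OPENAI_API_KEY in the environment;
# the model name and prompt text are placeholders, not taken from the essay.
from openai import OpenAI

client = OpenAI()

meta_instruction = (
    "You are drafting a scene in my voice. "
    "This character lacks self-awareness, so some of her own behavior "
    "comes to her as a surprise. Keep sentences short, and let the reader "
    "infer her motives rather than stating them."
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name
    messages=[
        {"role": "system", "content": meta_instruction},
        {"role": "user", "content": "Write the scene where she misses the train."},
    ],
)

print(response.choices[0].message.content)
```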
As I see it, the economic impact of AI depends on how well it performs in education and health care.
Health care and education are thus ripe for disruption, and early indicators suggest that LLMs might finally provide the productivity breakthrough that these sectors have long resisted.
…But skeptics can point to the adverse impacts of smartphones or the failures of Zoom-based education during the pandemic and question whether technology is the solution or the problem for human learning.
Finally, I see four hurdles that AI must overcome in order to produce truly radical change.
Perhaps the most significant difficulty is that these barriers interact with each other in complex ways. True AGI would need to overcome all of them simultaneously — adapting to human organizations, operating reliably in the physical world, handling complicated long-term projects, and maintaining personalized relationships with individual users. The compound problem of solving all these challenges together might be greater than the sum of the individual parts.
Overall, this essay provides a fairly complete snapshot of my perspective on AI.


Looks like the link is broken. This appears to be the correct one: https://www.nationalaffairs.com/publications/detail/ai-era-computing
Thanks for a reasonable, balanced discussion of the issues.
As to this: "A good writer who cannot explain his process will be frustrated by models' inability to meet his own standards of composition. An average writer who can give good meta-instructions will be spectacularly more productive."
It seems more likely that those who are really good at writing or any other task already understand and thus can explain their processes, whereas those who are average cannot. The implication is that those of middling insight and intelligence will be displaced from their jobs as writers, programmers, graphic designers, researchers, radiologists, etc.