The product overhang from the generative AI capabilities we have today is absolutely massive: there are so many new things to be built, and completely new application-layer paradigms are at the top of the list. That, by extension, is the bridge that will unlock entirely new paradigms of computing. The road to the future needs to be built; it’s exciting to have the sense that the surveying is now complete.
As I read Thompson’s post (and feel free to disagree with my interpretation), he is saying something close to what I have been saying. That is, the significance of Large Language Models is that they represent the latest development of the human-computer interface. They are a computing paradigm that allows us to communicate with a computer using natural language. Instead of having to use a mouse and pointer to communicate with a PC, or a finger swipe to communicate with a smartphone, we can speak with the computer as if it were another human. In my view, we should not judge LLMs as encyclopedias or research assistants but as ways to talk to computers.
I believe Thompson is saying that a new way to communicate with a computer tends to align with new hardware. In the case of LLMs, he predicts that the new hardware will be wearables. In the future, when we use smart watches or smart glasses, we will communicate with them using natural language as the user interface (UI).
This, I think, is the future: the exact UI you need — and nothing more — exactly when you need it, and at no time else. This specific example was, of course, programmed deterministically, but you can imagine a future where the glasses are smart enough to generate UI on the fly based on the context of not just your request, but also your broader surroundings and state.
This is where you start to see the bridge: what I am describing is an application of generative AI, specifically to on-demand UI interfaces. It’s also an application that you can imagine being useful on devices that already exist. A watch application, for example, would be much more usable if, instead of trying to navigate by touch like a small iPhone, it could simply show you the exact choices you need to make at a specific moment in time.
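To make the idea concrete, here is a minimal, hypothetical sketch of what on-demand UI might look like in code: the device passes the user’s spoken request and some context to a generative model, and renders whatever minimal set of controls comes back. Everything here is illustrative — `generate_ui` is a stub standing in for a real model call, and the control schema is invented for the sketch.

```python
def generate_ui(request: str, context: dict) -> list[dict]:
    """Stand-in for a generative model call.

    In a real system this would prompt an LLM with the user's request
    and surroundings, asking it to emit only the controls needed at
    this moment. Here a canned response keeps the sketch runnable.
    """
    if "timer" in request:
        # The "model" decides a timer needs exactly three choices.
        return [
            {"type": "button", "label": "5 min"},
            {"type": "button", "label": "10 min"},
            {"type": "dial", "label": "Custom"},
        ]
    return [{"type": "text", "label": "Say what you need."}]


def render(controls: list[dict]) -> str:
    # A toy watch "renderer": lay the controls out as a text strip.
    return " | ".join(f"[{c['label']}]" for c in controls)


ui = generate_ui("set a timer", {"location": "kitchen", "time": "18:05"})
print(render(ui))  # [5 min] | [10 min] | [Custom]
```

The point of the sketch is the shape of the loop, not the stub: the interface is not a fixed menu hierarchy but whatever small control set the model judges relevant to this request, in this context, right now.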
He calls the new AI a “bridge to wearables.” I can see this. But I also think it will be a bridge to robots. A natural-language man-machine interface creates many opportunities to make robots easier to train and to use.
I share Thompson’s optimistic long-term outlook for wearables and for the metaverse, but I think that we will see the applications to robots materialize sooner.
Yesterday Marginal Revolution linked to a Twitter account whose pinned tweet said “English is the hot new programming language.” https://x.com/karpathy/status/1617979122625712128?t=b0d4eNo3V1-vXFNDDcyf0A&s=19
Further down in the account's posts was a piece on AI workplace monitoring that said: "AI monitoring software now flags you if you type slower than coworkers, take >30sec breaks, or *checks notes* have a consistent Mon-Thu but slightly different Friday. Bonus: It's collecting your workflow data to help automate your job away."
So, random evidence elsewhere would seem to support both your contention that AI's natural-language computer interface is a big deal and the notion that wearable machines/robots are the future: workers are going to be physically connected to their laptops to satisfy AI-enabled performance monitoring.
The significance of these systems is not so much that they represent the latest development of the human-computer interface, but that people will use that interface to direct: "Watch what this guy does very closely to generate his replacement." The Secret To Our Success is now the secret to THEIR success.