AI Links, 2/20/2026
Noah Smith on human disempowerment; Andrey Mir on time compression; Michael Magoon on AI's blind spot; Ethan Mollick on the agent paradigm
I think the arrival of the European settlers in North America is not a terrible analogy for what humans face now with the arrival of our own AI creations. I’m not claiming that Europeans were individually smarter than Native Americans; instead, the European system was just much more capable of getting things done. The Europeans had writing, corporations, shipbuilding industries, advanced metallurgy, organized bureaucracies, and a ton of other things that were not included in Native American culture. Native Americans quickly learned to use guns and horses, but their overall system was unable to adapt to match what the Europeans had.
Thus, the day that Europeans arrived on North American shores, the Native Americans of what is now the United States lost control of their destiny—forever. The Native Americans simply lost the power to decide what their future would look like.
…This is a future of profound human disempowerment.
Meanwhile, I could ask whether humans who refuse to embrace AI will have much less power over their future than will humans who become AI-native.
Accelerating AI is now compressing human history so fast that those usual suspects of global anxiety won’t have time to develop into the catastrophes we fear. The timeline is collapsing so rapidly that we’ll hit a breaking point before any of those problems can play out to their end. The worst part is that we don’t even see it coming.
…changes have always arrived incrementally, even those caused by television. Any turbulence was expected to settle, followed by a period of adaptation and relative calm. This incremental pattern worked fine on the flat, slowly rising part of the exponential curve, and it became our habitual way of understanding history.
For example, in the past we have never run out of jobs for humans to do, but that may not be a guide to the future. And the challenge of living with this acceleration goes well beyond the jobs issue.
our current AI is almost exclusively trained on internet sources, and published books and academic articles are almost completely missing. This means that current AI, as impressive as it is, is missing the vast majority of human knowledge that currently exists in books and academic articles.
Whether we like it or not, we are moving from a world of decentralized information storage to a world of centralized information storage. Will AI be broadly representative of thousands of years of accumulated knowledge, or will it be a biased and much smaller subset of the knowledge currently on the internet?
I had not noticed that the AIs were not well read in academic sources. Is that really true?
Until a few months ago, for the vast majority of people, “using AI” meant talking to a chatbot in a back-and-forth conversation. But over the past few months, it has become practical to use AI as an agent: you can assign it a task and it does the task, using tools as appropriate. Because of this change, you have to consider three things when deciding what AI to use: Models, Apps, and Harnesses…
Until recently, you didn’t have to know this. The model was the product, the app was the website, and the harness was minimal. You typed, it responded, you typed again. Now the same model can behave very differently depending on what harness it’s operating in. Claude Opus 4.6 talking to you in a chat window is a very different experience from Claude Opus 4.6 operating inside Claude Code
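Mollick's harness idea can be illustrated with a toy sketch. Everything below is hypothetical: the model is a hard-coded stub, and a real harness would call an actual LLM API instead. The point is only to show what a harness does, namely run a loop in which the model can request a tool, the harness executes it, and the result is fed back to the model.

```python
# Hypothetical sketch of a minimal agent "harness".
# The "model" is a stub standing in for a real LLM API call.

import os

def fake_model(messages):
    """Stand-in for an LLM: requests one tool call, then answers."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "count_files", "args": {"path": "."}}
    return {"answer": f"There are {messages[-1]['content']} files."}

def count_files(path):
    """A tool the harness makes available to the model."""
    return len(os.listdir(path))

TOOLS = {"count_files": count_files}

def run_harness(task):
    messages = [{"role": "user", "content": task}]
    while True:
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        # The model asked for a tool: run it, feed the result back.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})

print(run_harness("How many files are in this directory?"))
```

The same stub model would behave very differently under a different harness, e.g. one with no tools at all, which is the distinction Mollick is drawing between a chat window and something like Claude Code.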
For me personally, they might be a step backward? That is, I want to act as if I don’t have to understand Git or repos or other tools of the software engineering trade. I want to just describe my project goals and have the AI do the rest.
I have not been able to ride the latest wave. I am still trying to just use Claude for vibe-coding. For that purpose, it works better now. But I have not used Claude Code or Claude Cowork. Note that The Zvi writes,
There is a huge divide between those who have used Claude Code or Codex, and those who have not. The people who have not, which alas includes most of our civilization’s biggest decision makers, basically have no idea what is happening at this point.
I am not one of civilization’s biggest decision makers. Otherwise, guilty.
substacks referenced above: @
"I had not noticed that the AI’s were not well read in academic sources. Is that really true?"
It's not true - what a bizarre flub for Magoon to make. As for books, fully two years ago now, Anthropic poached Turvey from Google Books for "Project Panama" in order to get "all the books in the world" digitized to become training data. Which they did.
This turned into a faux or quasi-scandal last year when the facts came out in "Bartz v Anthropic" (or "Authors v Anthropic"), not necessarily because of the legal question of the limits of "fair use", but because of the irreversibly destructive manner in which Anthropic scanned the copies of millions of books it had accumulated.
For Magoon to say now, "... current AI, as impressive as it is, is missing the vast majority of human knowledge that currently exists in books and academic articles ... " is thus completely false nonsense which anyone can easily verify for themselves in 10 seconds. Bizarre.
It is interesting to note that the top scientific-research LLMs besides Google's (its co-scientist builds on the vast academic repository behind Google Scholar, Google Patents, etc.), such as DeepSeek and Qwen3, come from Chinese companies, who, ahem, aren't exactly the most scrupulous about adhering to copyright law or worrying about foreign judgments on it. I think Anthropic is also pretty good at this stuff, but it may face harder legal issues in being fully forthright about it, given the much stronger enforcement of usage rules and copyright in the academic-publishing domain.
Noah Smith wrote: “The Native Americans simply lost the power to decide what their future would look like.” If we interpret this sentence distributively—as telling us about each individual Indian—we must consider it a silly sentence. The individual Indian never had much power to decide the future, and even after the European invasion he typically still had some such power—at most, *slightly* diminished. More charitably, we may interpret the author as speaking collectively, pretending that Indians (never mind in what geographical region, exactly) had a single mind, which had beliefs and desires and, most of all, will, and that before the coming of the Europeans this entity was exercising considerable power to determine its future (i.e., that of its members), but that afterwards it no longer had such power.
I mildly object to this anthropomorphizing of the collective entity; in particular, the analogy between an individual’s *determining his future*, by taking actions aimed at desired outcomes, and what the collection of Indians was “doing” is quite weak. Furthermore, the rhetorical force of Smith’s passage depends on our having the same sort of concern for the collective entity that we have for each individual person, and this is wrong. Individuals matter inherently; collections do not.