AI Links, 5/12/2026
Jerusalem Demsas on AI as centralizing technology; Mark McNeilly on holding AI to a higher standard; Steve Newman on the state of AI; Dean Ball on regulating AI
If people are systematically substituting AI answers for the decentralized web of sources they previously consulted, that’s a structural change in information consumption that matters regardless of what individuals tend to use AI for.
…In the post-training process, models are explicitly coached to give answers that align with mainstream expert consensus, to hedge and defer to authoritative-sounding framings, and to avoid fringe positions.
Others have speculated similarly. That is, the fragmented, chaotic information environment that has emerged in the last decade might be changing as people rely more on AI. We may get, for better or worse, a more centrally curated view of things.
When AI hallucinates, we say, “See. You can’t trust AI.” Yet we don’t do that with humans. When people we trust make a rare mistake, we excuse it with, “Well, everyone is human.” By expecting perfection from AI but giving humans a pass, we are distorting how we should think about AI’s value, its risk and its adoption.
This is also true for self-driving cars. Instead of asking, “Who causes more accidents, self-driving cars or human drivers?” people ask, “Do self-driving cars cause any accidents?”
Perhaps there is some perfectly rational reason to demand perfection from AI. But I suspect that it is not rational. Perhaps people are thinking that humans have variable outcomes, but their heuristic about machines is that they always produce the same outcome. Ergo, if a machine makes one mistake, it will make an infinite number of mistakes.
But I am open to other explanations.
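To make the rate framing concrete, here is a toy calculation. Every number in it is hypothetical, invented purely for illustration; the point is only that a technology can cause some accidents and still prevent far more.

```python
# Toy illustration of rates vs. the mere existence of accidents.
# Every number here is hypothetical, invented purely for illustration.

human_rate = 4.0   # hypothetical crashes per million miles, human drivers
av_rate = 1.0      # hypothetical crashes per million miles, self-driving cars
miles = 100.0      # millions of miles driven by each group

# "Do self-driving cars cause any accidents?" Yes: 100 in this toy world.
av_crashes = av_rate * miles
# "Who causes more accidents?" Humans, by a factor of four, over the same miles.
human_crashes = human_rate * miles

print(f"self-driving crashes: {av_crashes:.0f}")
print(f"human crashes:        {human_crashes:.0f}")
print(f"net crashes avoided:  {human_crashes - av_crashes:.0f}")
```

Asking only the first question, you reject the technology; asking the second, you adopt it and avoid three hundred crashes in this made-up world.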
OpenAI revenue is absolutely exploding. Reportedly, their annual run rate nearly quadrupled, from $5.5B to $21.4B, over the course of 2025, and then reached $25B in February.
…Anthropic’s revenue rate is now higher than McDonald’s, Paramount, Mastercard, Southwest Airlines, or Charles Schwab. Another tripling would put them in the neighborhood of Disney or Tesla. And that $30B figure (taken from an Anthropic announcement dated April 6) may already be out of date: As of April 24, the highly respected SemiAnalysis newsletter estimates Anthropic’s revenue rate at $40B/year.
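Some back-of-the-envelope arithmetic on those figures, using only the numbers quoted above:

```python
# Back-of-the-envelope arithmetic on the revenue figures quoted above.
# All values are annualized run rates in billions of dollars.

openai_start, openai_end = 5.5, 21.4   # reported run rates over 2025

multiple = openai_end / openai_start   # about 3.9x over the year
monthly = multiple ** (1 / 12) - 1     # implied compounded monthly growth

print(f"OpenAI 2025 multiple:   {multiple:.2f}x")
print(f"implied monthly growth: {monthly:.1%}")   # roughly 12% per month

anthropic = 30.0                       # from the April 6 announcement
print(f"another tripling of $30B: ${anthropic * 3:.0f}B")  # Disney/Tesla territory
```

A 3.9x annual multiple works out to roughly 12 percent compounded monthly growth, which is why a figure quoted in early April can be stale by late April.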
After looking at recent developments, particularly as they concern recursive self-improvement, he concludes,
the current, disorienting rate of change is the slowest we’re ever likely to see.
I like to say that, in terms of calculus, the first derivative of models’ quality with respect to time is positive. The second derivative is also positive. And the third derivative is probably positive as well.
Using ordinary language, what I mean is that the models are getting better. They are getting better at getting better. And they are probably getting better at getting better at getting better.
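The simplest function with all of these properties is an exponential. This is only an illustration of the metaphor, not a claim that progress is literally exponential; q(t) and k are stand-ins for “quality” and a growth constant.

```latex
% An illustration only: the simplest function whose derivatives are all
% positive. Here q(t) stands for model quality as a function of time t.
\[
  q(t) = e^{kt}, \quad k > 0
  \qquad\Longrightarrow\qquad
  \frac{d^{n}q}{dt^{n}} = k^{n} e^{kt} > 0 \quad \text{for every } n \ge 1.
\]
```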
And this brings me to the one niche of AI regulation that I do affirmatively support today: the management of potential catastrophic risks from AI by the state.
…That’s why I support the AI regulation I support, which, in brief summary, involves the creation of private institutions to sit between the state and the frontier labs precisely so that they can mediate between the inevitable power-seeking impulses of the state and the private business of the frontier AI industry.
Pointer from Tyler Cowen.
Catastrophic risks come from bad humans. AI may enhance bad humans, but I would not frame this as “catastrophic risks from AI.”
The best way that I can see to fend off catastrophic risks from humans is surveillance. David Brin, in The Transparent Society, published in 1998, warned that surveillance is a technological imperative. Governments are going to view it as necessary. Brin thought that the best we could hope for is “mutually assured surveillance.” I assessed his book many years ago.
Like Dean Ball, I am inclined toward an institutional answer. Some institution needs to serve as a check on the FBI or the NSA or whoever is doing the surveillance. The Surveillance Auditor needs to be able to probe whether the FBI is making good use of surveillance tools while not abusing them.
I recognize that the Surveillance Auditor is not a perfect solution. But it might be an example of a second-worst solution, with the others tied for worst.

We usually forgive people for providing bad information because we expect that they've done their best to sift through the information readily available to them before formulating an answer, and we usually adjust our expectations to the situation we put them in. If I randomly ask my wife when her next appointment is, I'll more than likely be satisfied with a vague answer like "next month," or even a wrong answer like "next Tuesday" when it's actually on Wednesday. But if she's looking at her calendar, I would expect a precise and correct answer. I'd say we're using a similar heuristic for the accuracy of AI answers, but our expectations are far higher, because we tend to imagine that AI is sifting through mountains of information looking for the nugget that we want, not acting like a giant random number generator creating the answer we'll find most pleasing.
My experience with LLMs is that they often provide bad, or at least misleading, answers at first. If you know enough to push back with facts and logic, they’ll provide much better results. The problem is knowing enough to be able to push back, or at least to ask clarifying questions. I can do that in some areas, but not at all in others.