AI Links, 3/8/2026
Steve Newman on coding with agents; Tim B. Lee on the Anthropic-Pentagon dustup; Noah Smith says that AGI is here now; Dan Williams on AI vs. Social Media
agents are comparatively weak at high-level decision making, but they make execution cheap. So sometimes, instead of trying to choose the right path, you can just tell the agent to explore every path.
…Don’t ask AI to help you make a design decision. Just have it pick six options, code all six, and see which ones came out best.
The set of best practices for software development gets turned inside-out by AI.
And he gets around to defining the term.
People use the term “agent” pretty loosely. The core idea for me is a system that pursues a goal rather than following a script.
And this seems like critical advice:
the key is putting the agent in a position to check its own work. The agent’s strength isn’t flawless execution, it’s the speed and stamina to keep plugging away. But it doesn’t necessarily realize this – its instinct is to constantly ask for your approval. You have to be very explicit in instructing it what constitutes a successful outcome.
You have to be able to articulate what the software must do, what it must not do, and what might seem nice to have but isn’t necessary.
ultimately, I don’t think any contract was going to prevent the government from misusing AI. That’s going to take oversight — and eventually legislation — from Congress. We need ground rules that apply to all government use of AI, regardless of whose models are used.
I do not think it is good for officials to declare Anthropic a “supply chain risk” because of a contract dispute. But I also do not think it is good for tech companies to tell our military what it can and cannot do. It reminds me of the 1950s joke (I think it came from Mort Sahl) about the draftee who wants to tell the government, “I refuse to fight against the following countries—.”
The institutions of our government are supposed to cover the issues involved. If the Anthropic folks are worried that the institutions will produce a bad outcome, then they need to try to fix the institutions, not take it upon themselves to be a replacement for those institutions. Your business contract should be about business terms, not your policy preferences.
Indeed, even Dario Amodei believes that contractual agreements are only a stopgap solution to preventing abuse of AI models.
“In the long run, I actually do believe that it is Congress’s job,” Amodei said in a Saturday interview on CBS.
I bet that if AI had A) permanent autonomy and long-term memory, B) highly capable robots, and C) end-to-end automation of the AI production chain, it could defeat humans and take control of Earth today. I might be wrong about that, but if so, I doubt I’ll be wrong three or four years from now.
One of the most interesting things about the latest developments in AI is that we are continually surprised by the results. If you make a pronouncement of the form “computers can do X, but they cannot do Y,” make sure to give an expiration date.
Consider a topic: climate change, vaccines, immigration, crime, tariffs, wealth inequality, the Epstein files, whatever happens to be in the news. Fire up one of our leading large language models (LLMs)—ChatGPT, Gemini, Claude, even Grok—and ask for information about it. Now compare the response with the information you can find about the topic by scrolling on a major social media platform.
Even better, find a political take currently going viral on one of these platforms and ask an LLM to evaluate it.
If you do either of these things, I suspect that it will quickly become clear that the LLM’s responses are generally much more accurate, evidence-based, and in line with expert consensus than what you get from most social media posts.
That sets us up for an interesting collision, doesn’t it? Will people with outlier views modify their beliefs, or will they just reject AI?
substacks referenced above: @
@
@
@
Re: AI vs. Social Media
This analysis can easily (and sadly) be extended to incorporate AI vs. the legacy media.
For example, during the recent ICE shootings in Minneapolis, I had far more productive, dispassionate and enlightening conversations with Gemini than anything I viewed from the coverage and conversations on either CNN or Fox News. It is very difficult to do thoughtful analysis without ideological bias entering the equation.
With AI, it feels more like the rider is in control as opposed to the elephant.
"LLM’s responses are generally much more accurate, evidence-based, and in line with expert consensus than what you get from most social media posts."
Social media posts??? Isn't that obvious? I'd think it would even be better than most mainstream media news items.