AI links, 3/6/2026
Anthropic on best practices for using AI; Hollis Robbins explains an LLM trope; Ian Leslie on FOO camp; Megan McArdle on projection of AI fears
To quantify AI fluency, we use the 4D AI Fluency Framework, developed by Professors Rick Dakan and Joseph Feller in collaboration with Anthropic. This framework helps us define 24 specific behaviors that we take to exemplify safe and effective human-AI collaboration.
Of these 24 behaviors, 11 (…) are directly observable when humans interact with Claude on Claude.ai or Claude Code. The other 13 (including things like being honest about AI’s role in work, or considering the consequences of sharing AI-generated output) happen outside Claude.ai’s chat interface, so they’re much harder for us to track.
Pointer from Zvi Mowshowitz. Among the 11 are “refines and iterates,” “clarifies goal,” and “provides examples.” I agree that these are good practices for using AI, as are the other practices in the 24. I think that at some point everyone could benefit from a short course that instills these practices. An obvious level at which to teach such a course would be high school or college.
The models see that hot and cold are mathematically close. They do not inherently compute the oppositeness relation. One way to understand the “not X but Y” construction is as a workaround for the model’s inability to compute opposition the way humans do. By explicitly stating both the rejected term and the replacement, the model externalizes onto the page an operation it cannot perform internally.
She says that the LLM is wasting her own brain’s processing power.
The “not X but Y” construction then asks you to spend another one to four seconds reading a clause that delivers an answer you computed in under half a second.
And it also insults your intelligence.
The construction implies the reader was holding X and needed correction. If you, the reader, were not holding X, you feel like you’re being talked down to.
Just say Y and be done with it.
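Robbins’s point about “mathematically close” can be illustrated with a toy sketch. The vectors below are invented for illustration, not taken from any real model: cosine similarity, the standard measure of closeness in embedding space, rewards words that appear in similar contexts, and it has no way to register that two nearby words are opposites.

```python
import math

def cosine(u, v):
    # Standard cosine similarity: dot product over the product of magnitudes.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 3-d vectors for illustration only (real embeddings have
# hundreds of dimensions, but the geometry makes the same point).
emb = {
    "hot":  [0.9, 0.8, 0.1],  # temperature word
    "cold": [0.8, 0.9, 0.1],  # temperature word with the opposite meaning
    "sofa": [0.1, 0.1, 0.9],  # unrelated word
}

print(cosine(emb["hot"], emb["cold"]))  # high: the antonyms sit close together
print(cosine(emb["hot"], emb["sofa"]))  # low: the unrelated word sits far away
```

Nothing in the similarity score distinguishes “near because synonymous” from “near because antonymous,” which is the gap the “not X but Y” construction papers over.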
The vibe-coding apps are not, in fact, easy to use. You need technical domain knowledge and intuitions rooted in software engineering - and a lot of LLM usage - to get the best out of them. They don’t replace expertise; they amplify it.
This seems to be true and important. The new “agents” seem to me to have dropped the pretense of vibe-coding altogether. They are just tools for advanced software engineers.
Leslie is reporting from FOO camp (Friends of O’Reilly). I was invited once, really liked it, but was never asked back. Leslie linked to Helen Andrews' report on this year’s version.
the people who currently face disruption are influential elites whose private worries shape public discourse — and can move markets when they get out of hand. Remember that the next time an ominous prophecy circulates — or really, whenever you read anything about AI, including this column. I try not to mistake my problems for those of humanity, but no one ever succeeds at fully untangling the two.
She suggests that the fear of an economic collapse caused by AI is a case of projection. Individuals who see their own skills devalued are forecasting doom for the rest of us. I see that happening a lot.
substacks referenced above: @



I agree with the assessment that vibe-coding is actually more of an advanced tool than a leveling factor...slightly to my bemusement. I keep trying to get my wife and kids interested in using the tools to make something, but they won't take the bait. I'm consistently amazed and delighted with what is now (and increasingly) possible, but it seems like having spent years of my life banging my head against the keyboard in frustration may have been critical in building up a store of fuel for engaging with AI to build software. No matter what I show them--even when I invite them over to the screen and have them cajole Claude or Codex into making them an app for something they care about--they still just kind of...meh...and then go back to more familiar domains. I imagine that it is yet possible that if they catch the right idea they might actually be driven to figure it out, but without the drive...well, it's more or less consistent with my view on education which I think aligns with Arnold's null hypothesis: it all comes down to motivation. If you really care, it's nearly impossible to stop you, and if you really don't it's nearly impossible to help you.
Oh, it's a DIFFERENT Helen Andrews. I was gonna say that doesn't seem like her beat...