AI Links, 1/29/2026
Steve Newman on new approaches to software development; The Zvi on Claude Code; Ethan Mollick on when to use AI; Dario Amodei on AI risks
AIs struggle (for now) with large projects, but they can drive the cost of small projects to near zero.
This creates opportunities to replace mass-market software (mega projects) with bespoke applications tailored to the needs of an individual or team.
These smaller projects make “vibe coding” feasible – AIs can write and review all the code, and people just need to tell the AI what to do.
Twenty-five years ago, I had nothing good to say about corporations relying on proprietary solutions to problems that off-the-shelf software could solve. But now it’s the off-the-shelf software that I think has to go. The problems are:
Trying to be the solution for everyone leads to user interfaces that are overly complex.
The legacy approach to providing a user interface to data relies on menus and forms. I think that many users would benefit instead from a natural-language interface.
For the user, a natural-language interface can reduce the need to extract data and then rework it. It can empower the user to obtain what he wants, not just settle for what the menu offers him.
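The idea of a natural-language interface replacing menus and forms can be sketched in a few lines. This is a hypothetical illustration, not any real product's design: in a working system the parsing step would call an LLM, but here a trivial keyword stub stands in so the shape of the approach is runnable. The `Query` structure and the `invoices` example are invented for the sketch.

```python
# Hypothetical sketch: a natural-language front end in place of a
# menu-and-forms UI. In a real system, parse_request would call an LLM;
# a trivial keyword stub stands in here so the example is self-contained.
from dataclasses import dataclass, field


@dataclass
class Query:
    table: str                      # which data the user is asking about
    filters: dict = field(default_factory=dict)  # constraints from the request
    action: str = "show"            # what to do with the result


def parse_request(text: str) -> Query:
    """Stand-in for an LLM call that turns a user's sentence into a
    structured query the application can execute."""
    text = text.lower()
    filters = {}
    if "overdue" in text:
        filters["status"] = "overdue"
    action = "export" if "spreadsheet" in text or "export" in text else "show"
    return Query(table="invoices", filters=filters, action=action)


q = parse_request("Show me the overdue invoices and export them to a spreadsheet")
```

The point of the structure is that the user states a goal in plain language and the system, not the user, works out which "menu items" (table, filters, action) that goal implies.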
Few have properly updated for this sentence: ‘Claude Cowork was built in 1.5 weeks with Claude Code.’
Nabeel S. Qureshi: I don’t even see how you can be an AI ‘skeptic’ anymore when the *current* AI, right in front of us, is so good, e.g. see Claude Cowork being written by Claude Code in 1.5 weeks.
It’s over, the skeptics were wrong.
Software developers write Product Requirements Documents. Film directors hand off shot lists. Architects create design intent documents. The Marines use Five Paragraph Orders (situation, mission, execution, administration, command). Consultants scope engagements with detailed deliverable specs. All of these documents work remarkably well as AI prompts for this new world of agentic work (and the AI can handle many pages of instructions at a time). The reason you can use so many formats to instruct AI is that all of these are really the same thing: attempts to get what’s in one person’s head into someone else’s actions.
His point is that using AI effectively requires the management skill of being able to clearly articulate a project’s goals, context, and constraints. He also mentions the skill of knowing what an AI can do, and I think this deserves more emphasis. Sometimes a simple prompt will work, sometimes a more complex prompt is needed, and sometimes a task is beyond the (current) capability of an AI. Knowing the difference is important.
suppose a literal “country of geniuses” were to materialize somewhere in the world in ~2027. Imagine, say, 50 million people
…Could existing rogue actors who want to cause destruction (such as terrorists) use or manipulate some of the people in the new country to make themselves much more effective, greatly amplifying the scale of destruction?
I agree with his priorities for what to worry about. That is, I think that rogue humans using AI will threaten us long before rogue AI itself threatens us.
In addition to many SaaS applications getting squeezed by their customers building their own solutions, I think many applications will themselves turn into sets of APIs that AIs manipulate as the primary mode of use.
I get the impression that a lot of software will morph into a set of semi-structured templates that an AI sherpa manipulates for you to get the results you want, rather than the traditional UIs we have today.
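The architecture this commenter describes, an application exposed as APIs for an AI to drive, can be sketched briefly. This is a hypothetical illustration: the tool schema below loosely mirrors the JSON shape used by common LLM function-calling APIs, but the tool names, fields, and the `dispatch` helper are all invented for the example, not any vendor's spec.

```python
# Hypothetical sketch: an application exposed as a small set of tools
# for an AI to call, rather than as a human-facing UI. The schema shape
# is illustrative; real LLM tool-use APIs differ in detail.
import json

TOOLS = [
    {
        "name": "create_invoice",
        "description": "Create an invoice for a customer.",
        "parameters": {
            "customer_id": "string",
            "amount": "number",
            "due_date": "ISO 8601 date",
        },
    },
    {
        "name": "list_invoices",
        "description": "List invoices, optionally filtered by status.",
        "parameters": {"status": "string (optional)"},
    },
]


def dispatch(call: dict) -> str:
    """Route an AI-issued tool call to application logic (stubbed here)."""
    if call["name"] == "list_invoices":
        # A real implementation would query the data store.
        return json.dumps({"invoices": [], "filter": call.get("arguments", {})})
    raise ValueError(f"unknown tool: {call['name']}")


result = dispatch({"name": "list_invoices", "arguments": {"status": "overdue"}})
```

In this framing, the "UI" is the tool list itself: the AI reads the schemas, decides which call serves the user's request, and the application only has to implement the dispatch layer.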
"His point is that using AI effectively requires the management skill of being able to articulate clearly a project’s goals, context, and constraints." This is what I call "Project Elocution" - the number one abstract skill in the second quarter of the 21st century.