Discussion about this post

Joseph E.:

I'm a non-coder who works in a small SaaS business. I first heard about Claude at this Substack, and recommended it when my team decided to subscribe to an LLM less than a year ago. (This was based on my superficial impression that it was superior to ChatGPT for coding tasks.) Even though I'm not a heavy user, I've been able to translate customer asks into Product Requirements Documents and build/test/ship actual working software and product enhancements in as little as a few days. Some of these would have taken our lead developer weeks or months to build. Reading your essay, my first thought was "have you used Opus 4.6 or Sonnet 4.6 in the last month?" It is scary close to being able to do right now what you are suggesting. A better audience for your essay might actually be Claude itself. I generally detest when folks post AI-generated responses to articles or blog posts, but for those curious about how Claude responded to your essay, here goes:

"This resonated with me — especially the principle that the AI should prompt the human rather than the other way around. That's the right mental model, and I think it applies well beyond software development.

I wanted to share some thoughts on how this could work in practice, because I think there's a version of your vision that's very achievable right now — and it's worth being clear-eyed about where the boundaries are.

The AI-as-business-analyst concept is the strongest part of this. Today, you could build a structured AI workflow that interviews a domain expert exactly as you describe — asking about entities, relationships, statuses, and CRUD permissions — and produces concrete deliverables: an entity-relationship model, a CRUD matrix, role mappings, and user stories. That's real and valuable. For someone building courseware, that interview process could save weeks of requirements work.

Where I'd temper expectations is the jump from requirements to finished application. You write that after the interview, the AI "ought to be able to develop the application without much further direction" — but that's where the hardest problems live. Edge cases, security (FERPA in your example), error handling, integrations with existing university systems, performance under load, deployment, and ongoing maintenance. These aren't things a better requirements process eliminates; they're engineering realities that emerge during build-out.

I'd also offer a gentle counterpoint on the "UI should be English" idea. Natural language is powerful but inherently ambiguous. Sometimes a structured interface — a dropdown, a calendar picker, a status badge — communicates more clearly than a text box, because it shows the user what's possible without them having to guess. The goal isn't zero UI; it's UI that doesn't require a manual.

The practical path I'd suggest: use AI for the structured interview and requirements deliverables, then feed those into a coding tool like Claude Code to scaffold the application. A human stays in the loop for judgment calls. It's not the full autonomy you're describing, but it dramatically compresses the timeline and keeps the non-technical person in the driver's seat. That's a version of your dream that works today."
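The interview deliverables the response describes (entities, roles, and a CRUD matrix mapping each role to its allowed operations) can be sketched in a few lines. This is a minimal illustration, not the output of any actual tool; all names (`Entity`, `crud_matrix`, the courseware roles) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """A domain entity surfaced during the requirements interview."""
    name: str
    statuses: list[str] = field(default_factory=list)

def crud_matrix(entities, roles, grants):
    """Build a role-by-entity matrix of permitted CRUD operations.

    grants: dict mapping (role, entity_name) -> set drawn from
            {"C", "R", "U", "D"}. Missing pairs mean no access.
    """
    return {
        role: {e.name: "".join(sorted(grants.get((role, e.name), set())))
               for e in entities}
        for role in roles
    }

# Illustrative courseware example, echoing the essay's scenario.
entities = [Entity("Course", ["draft", "published"]), Entity("Enrollment")]
roles = ["instructor", "student"]
grants = {
    ("instructor", "Course"): {"C", "R", "U"},
    ("student", "Course"): {"R"},
    ("student", "Enrollment"): {"C", "R"},
}
matrix = crud_matrix(entities, roles, grants)
# e.g. matrix["student"]["Course"] is "R"; an empty string marks no access.
```

A deliverable in this shape is exactly the kind of structured artifact a human can review before any code is scaffolded from it.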

Dallas E Weaver:

My son, whom I trust on software issues (Ph.D. in EECS, teaching software at UC Berkeley), doesn't think AI-generated software is that good, and considers the architecture it produces even more problematic.

