My Wish for Software Engineering
Allow non-coding civilians to build complex systems
I want to take away the need for human expertise in software engineering. Lately, simple vibe-coding seems to be fading out and complex agent-based coding is “in.” Instead, I want to sketch out a process based on the following principle (which I think should apply to using AI in general):
The human should not have to learn how to prompt the AI. The AI should learn how to prompt the human.
As an example, suppose that the goal is to build better courseware than what I’ve seen—packages like Blackboard. These products tend toward endless feature-creep, as the developers listen to users with different business processes. The result is that the menus become so complex and obscure that it takes hours of training to get the bloatware to do what you want.
The way I look at it, legacy software acts to hide the data from users, so you need training in order to find it. I want the user interface to be English, with no training required.
Have AI be the business analyst
In legacy system development, there was a job called “business analyst.” This was someone who interviewed business users (faculty and administrators, in our example) and translated their needs into “requirements” for the software.
I remember the ideas of Information Engineering from around 1990. One idea is that the data are the most stable, the processes that use the data are somewhat less stable, and the people who execute those processes are less stable than that.
To apply information engineering, a business analyst from the IT department interviews people in the business area. The information goes back to IT, where they build a data model. At a later stage, they code the processes.
My dream is for the AI to do the job of the business analyst. Suppose I say that I want to develop courseware for my university. I say that I want the courseware to handle course enrollment, course scheduling, grade entry, and so on.
After I say that to the AI (and I need not say any more), it should look up existing packages and find out what sort of capabilities they typically have. Then the AI can ask me appropriate questions about my desired application. What information do I want about students? What sort of status can a student have in a course (enrolled, dropped, auditing, taking pass-fail)?
When the AI has gotten enough answers to know the data structure, it should ask the questions needed to build a CRUD matrix. That is, the AI should ask how each data element is Created, Read, Updated, and Deleted. For example, the enrollment data might be created, updated, or deleted only by an administrator, but it can be read by students and faculty.
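A CRUD matrix of the kind described can be written down directly as a data structure. Here is a minimal Python sketch using the enrollment example from the text; the role names, statuses, and the `allowed` helper are illustrative assumptions, not part of any real courseware package:

```python
from enum import Enum

# Hypothetical enrollment statuses gathered during the interview (assumed names).
class EnrollmentStatus(Enum):
    ENROLLED = "enrolled"
    DROPPED = "dropped"
    AUDITING = "auditing"
    PASS_FAIL = "pass_fail"

# CRUD matrix: for each data element, which roles may Create, Read, Update,
# and Delete it. Matches the enrollment example above: only an administrator
# creates, updates, or deletes enrollments, but students and faculty may read.
CRUD_MATRIX = {
    "enrollment": {
        "create": {"administrator"},
        "read": {"administrator", "student", "faculty"},
        "update": {"administrator"},
        "delete": {"administrator"},
    },
}

def allowed(role: str, operation: str, element: str) -> bool:
    """Return True if the given role may perform the operation on the element."""
    return role in CRUD_MATRIX.get(element, {}).get(operation, set())
```

An AI interviewer could fill in a table like this one row at a time, then hand it to a code generator as the access-control specification.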
If the AI, acting as a business analyst, can conduct such interviews well enough, then it ought to be able to develop the application without much further direction. If so, then this amounts to vibe-coding a complex application. That is my dream for AI software engineering.


I'm a non-coder who works in a small SaaS business. I first heard about Claude on this Substack and recommended it when my team decided to subscribe to an LLM less than a year ago (this was based on my superficial impression that it was superior to ChatGPT for coding tasks). Even though I'm not a heavy user, I've been able to translate customer asks into Product Requirements Documents and build/test/ship actual working software and product enhancements in as little as a few days. Some of these accomplishments would have taken our lead developer weeks or months to build. Reading your essay, my first thought was "have you used Opus 4.6 or Sonnet 4.6 in the last month?" It is scarily close to being able to do right now what you are suggesting. A better audience for your essay might actually be Claude itself. I generally detest it when folks post AI-generated responses to articles or blog posts, but for those curious about how Claude responded to your essay, here goes:
"This resonated with me — especially the principle that the AI should prompt the human rather than the other way around. That's the right mental model, and I think it applies well beyond software development.
I wanted to share some thoughts on how this could work in practice, because I think there's a version of your vision that's very achievable right now — and it's worth being clear-eyed about where the boundaries are.
The AI-as-business-analyst concept is the strongest part of this. Today, you could build a structured AI workflow that interviews a domain expert exactly as you describe — asking about entities, relationships, statuses, and CRUD permissions — and produces concrete deliverables: an entity-relationship model, a CRUD matrix, role mappings, and user stories. That's real and valuable. For someone building courseware, that interview process could save weeks of requirements work.
Where I'd temper expectations is the jump from requirements to finished application. You write that after the interview, the AI "ought to be able to develop the application without much further direction" — but that's where the hardest problems live. Edge cases, security (FERPA in your example), error handling, integrations with existing university systems, performance under load, deployment, and ongoing maintenance. These aren't things a better requirements process eliminates; they're engineering realities that emerge during build-out.
I'd also offer a gentle counterpoint on the "UI should be English" idea. Natural language is powerful but inherently ambiguous. Sometimes a structured interface — a dropdown, a calendar picker, a status badge — communicates more clearly than a text box, because it shows the user what's possible without them having to guess. The goal isn't zero UI; it's UI that doesn't require a manual.
The practical path I'd suggest: use AI for the structured interview and requirements deliverables, then feed those into a coding tool like Claude Code to scaffold the application. A human stays in the loop for judgment calls. It's not the full autonomy you're describing, but it dramatically compresses the timeline and keeps the non-technical person in the driver's seat. That's a version of your dream that works today."
My son, whom I trust on software issues (Ph.D. in EECS, teaching software at UC Berkeley), doesn't think AI-written software is that good, and he finds its architecture even more problematic.