AI Links, 4/9/2026
Ethan Mollick on the user interface; Jack Dorsey and Roelof Botha on AI as middle management; Noah Smith on jobs as bundles; Mary Harrington against AI education
I have used Claude Code for everything from making (a small amount of) money to making games, never touching any code at all. I also find Codex incredibly useful, with a similar level of capability. These tools are terrific, but they are really built for programmers. They assume you know Python and Git. Their interfaces look like a 1980s computer lab. For the 99% of knowledge workers who are not developers, these powerful AI tools are not optimized.
…I suspect the future isn’t one interface to rule them all. It’s AI that generates the right interface for the moment, an agent on your desktop, a chart in a conversation, a custom app to solve a problem. We’re moving from adapting to the AI’s interface to the AI adapting its interface to you.
…My guess is that a lot of the “AI disappointment” people sometimes express comes not from the AI being bad, but from the interfaces being wrong. We built one of the most powerful technologies in recent history and then made people access it by typing into a chat window. That will change soon.
Jack Dorsey and Roelof Botha write,
In a conventional company, the intelligence is spread throughout the people and the hierarchy routes it. In this model, the intelligence lives in the system. The people are on the edge. The edge is where the action is.
…There is no need for a permanent middle management layer. Everything else the old hierarchy did, the system coordinates, and everyone is empowered, with a role that's much closer to the work and the customer.
Pointer from Rowan Cheung. Recall how Anthony Downs thought of bureaucracy. As I put it,
To manage your information problem, you set up a communication system in which messages flow up the chain of command. The information from foot soldiers is evaluated by corporals, who decide what gets passed on to sergeants, who decide what gets passed on to lieutenants, and so on, until a legible set of information gets passed to you.
Similarly, when it comes to directives, it is not possible for you to give detailed instructions to every foot soldier. Instead, you give general directives to your direct reports, who then translate these general directives into more specific directives to officers underneath them, until finally the foot soldiers receive their orders.
If middle management exists because humans have limited information processing capability, then in theory we can substitute information processing capability for middle management. We will see whether AI can make this work.
Like many economists, Garicano et al. envision a job as a bundle of various tasks. But they also theorize that in some jobs, these tasks are only “weakly bundled” — you don’t really need the same person to do all of those tasks. For these jobs, it would be easy to divide up the tasks between different workers — or between a human and an AI. But in other jobs, the authors assume that the tasks are “strongly bundled” — the same person who does one part of the job has to do the other parts, or the job can’t be done.
My guess is that this distinction will not hold up. That is, if “strictly human” tasks and AI-automatable tasks are strongly bundled, those tasks are destined to become unbundled. Think of the way that the Alpha School unbundles the teacher’s bundle of coaching and instruction. The AI handles instruction, and the “Guide” handles coaching/motivation.
Knowledge can be codified to an extent, but making it your own always requires movement toward knowledge by the learner. Classically, that happens in relationship. To illustrate: in our home, we use Duolingo to support my daughter’s language learning, and there are clearly some benefits to digital tutors of this kind. But from my observation there’s an additional stage to learning, where the material must be metabolised and then applied in a human-to-human context. By definition the robot can’t supply that.
I think that most of what students learn in economics does not get “metabolised.” The majority of them come out of economics classes just as supportive of tariffs and minimum wage laws as they were before.
Economic education aside, my point is that I think it is an open question whether the “additional stage to learning” is better achieved by human teachers or AI.
"... But from my observation there’s an additional stage to learning, where the material must be metabolised and then applied in a human-to-human context. By definition the robot can’t supply that. ... Economic education aside, my point is that I think it is an open question whether the “additional stage to learning” is better achieved by human teachers or AI."
The more basic question, which we steadfastly refuse to ask, is whether for a large number of people in a large number of subjects, that "metabolizing", that "additional stage of learning" is possible at all. Is much of education eternally condemned to be "performative"? Things are done that look like teaching. Things are done that look like learning. But six months later, the student hasn't internalized much of anything.
The typical middle manager in nearly any org these days also plays the role of coach/mentor/career advisor/conflict mediator--doing all of the "HR"-type tasks that humans require in order to be maximally happy and productive. So in a post-middle-manager world where information flow and coordination take place in a system that doesn't involve direct human-to-human connection, those roles will have to go somewhere else--at least as long as humans are involved at all. Not saying that's a bad thing! Most middle managers are not good at all (or sometimes any) of the things that are by default expected of them.