Anthony Downs on the Alignment Problem
The challenges of prompt engineering and hallucinations in organizations
Note: Tonight at 6 PM New York time, Moses Sternstein and I are planning to go on Substack live. He wants to talk about Warren Buffett’s idea of “import certificates” to correct the trade balance. And I want to talk about how to look at financial markets from the perspective that I call “total paranoia.” And we’ll see what we actually end up talking about.
Anthony Downs’ Inside Bureaucracy was published in 1967. From page 144:
Control processes in bureaus are dominated by the need for a small group of men (top-level officials) to economize on information in order to appraise and redirect the efforts of a very much larger group of men (lower-level officials).
Note how dated those sentences seem (“men”).
Downs provides a simple model (for which he credits Gordon Tullock) of the role of middle management. I can illustrate it using a military problem as a metaphor for the challenges faced by a corporate CEO or the head of a large government agency.
Suppose that you are the general in charge of fending off a potential enemy attack. To do so, you need to process information coming from the foot soldiers about enemy activity, and you need to send directives to those foot soldiers.
Your foot soldiers send more messages about the timing and location of an imminent attack than you can process. Moreover, these messages include a lot of “false positives” that incorrectly point to a pending attack. 1
To manage your information problem, you set up a communication system in which messages flow up the chain of command. The information from foot soldiers is evaluated by corporals, who decide what gets passed on to sergeants, who decide what gets passed on to lieutenants, and so on, until a legible set of information gets passed to you.
Similarly, when it comes to directives, it is not possible for you to give detailed instructions to every foot soldier. Instead, you give general directives to your direct reports, who then translate these general directives into more specific directives to officers underneath them, until finally the foot soldiers receive their orders.
Mismanagement and the Alignment Problem
Students of World War I or the Civil War know that many battles were lost in part because of mismanagement: Gallipoli, Gettysburg. We can understand mismanagement from the perspective of the story told above about the chain of command.
In an organization, information has to flow up, and directives have to flow down. In each direction, distortions occur. With AI in mind, call this the alignment problem.
Using AI as a metaphor, we can say that when the information that finally makes it to top management is bad, the organization hallucinates. And when management directives do not get carried out properly, we can say that there was a failure of prompt engineering.
As information gets passed up the chain of command, at each step it is compressed. This makes it legible, but it provides only an imperfect picture.
Also, those down in the chain of command have their own interests and biases. They decide what they want their superiors to know. What they pass up the chain may omit crucial facts, be misleading, or could be downright false. Organizational hallucination results.2
When directives move down the chain of command, they also can suffer from distortions. At each step down the chain of command, some understanding is lost about the overall strategic objective and the parallel actions to be taken by other units. Moreover, people in the middle of the chain of command have their own objectives and perspectives, so that the directives they pass along are not necessarily congruent with the goals of the top leaders. Organizations suffer from imperfect prompt engineering.3
A Simple but Powerful Model
This model of an organization, in which information is distorted as it flows upward and directives are distorted as they flow downward, seems simple and intuitively obvious. But it is a powerful model nonetheless.
These distortions help to explain why organizations can lose effectiveness as they become large. The more layers of management that are needed, the more likely it is that information will be distorted by the time it reaches the top and that directives will be poorly followed by the time they reach the bottom.
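To see how the distortion compounds, here is a toy simulation in Python. The numbers (how much detail is lost at each layer, how much bias each layer adds) are made up for illustration; nothing here comes from Downs or Tullock.

```python
import random

def relay(signal, layers, loss_per_layer=0.15, bias_per_layer=0.05, seed=0):
    """Pass a report up a chain of intermediaries.

    At each layer, some detail is lost to compression, the layer nudges
    the message toward its own agenda, and a little random garbling
    creeps in.  Returns the version that reaches the top.
    """
    rng = random.Random(seed)
    value = signal
    for _ in range(layers):
        value *= (1.0 - loss_per_layer)         # compression: detail dropped
        value += bias_per_layer * signal        # the layer's own interests
        value += rng.gauss(0.0, 0.02 * signal)  # honest noise and garbling
    return value

if __name__ == "__main__":
    truth = 100.0  # what the foot soldiers actually observe
    for layers in (1, 3, 5, 8):
        heard = relay(truth, layers)
        error = abs(heard - truth) / truth
        print(f"{layers} layers: the top hears {heard:6.1f} "
              f"({error:.0%} distortion of the original report)")
```

The exact figures do not matter. The point is the direction: with each added layer, what the top hears drifts further from what the foot soldiers reported.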
A lot of management practices are attempts to overcome these distortions. Routine reports, internal monitors, staff meetings, and circulating memoranda are all practices that would be wasteful and unnecessary if there were perfectly frictionless bottom-up and top-down communication.
Think of AI as Middle Management
You can think of giving a prompt to AI as giving a directive in an organization. You have to worry about whether the AI will carry out your directive faithfully.
You can think of the output from the AI as the information that flowed up to you from the organization. You have to worry about how distorted this information could be.
Looking at it this way, the shortcomings and biases that an AI might have are similar in kind to those that middle management has. The difference is a matter of degree.
In a real organization, could AI perform some of the functions of middle management? Could the low-level employees provide information to the AI, with AI deciding how to summarize this information for the CEO?
In giving directives, could the CEO use AI to bypass middle management? Could the CEO give a general directive to the AI, such as “launch a new credit card,” and have the AI turn this into a set of instructions that it either executes itself or passes along to the employees who will be involved in implementation?
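To make the substitution concrete, here is a rough sketch in Python of what those two flows might look like. The `llm` function is a stand-in for whatever model the organization actually uses; the prompts, the report format, and the team names are my own inventions for illustration, not any particular product’s API.

```python
# A rough sketch, not a recipe: `llm` is a placeholder, and the prompts,
# report format, and team names are invented for illustration.

def llm(prompt: str) -> str:
    """Placeholder for a call to a language model; returns its reply."""
    raise NotImplementedError("wire this up to the model of your choice")

def summarize_for_ceo(field_reports: list[str]) -> str:
    """Upward flow: compress many low-level reports into one brief."""
    joined = "\n".join(f"- {report}" for report in field_reports)
    return llm(
        "Summarize these field reports for the CEO in five bullet points. "
        "Flag anything urgent, and say explicitly what you left out:\n" + joined
    )

def expand_directive(directive: str, teams: list[str]) -> dict[str, str]:
    """Downward flow: turn a general directive into per-team instructions."""
    return {
        team: llm(
            f"The CEO's directive is: '{directive}'. Write concrete next "
            f"steps for the {team} team, and list any assumptions you are "
            "making that the CEO should confirm."
        )
        for team in teams
    }
```

Note that both prompts ask the model to say what it omitted and what it assumed. That is precisely the information human middle managers tend not to volunteer, and asking for it is one way to push back against the same distortions.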
Users of AI face an alignment problem: the AI might not do what you want. But the point of the Downs model is that your middle management also poses an alignment problem. If we cannot attain perfect alignment of an AI, keep in mind that we cannot attain perfect alignment of middle management, either.
It could turn out that AI on its own is better aligned, or that AI in combination with ordinary middle management is better aligned than middle management alone. In that case, top management can benefit from incorporating AI for some middle-management purposes.
On p. 190, Downs writes,
As Roberta Wohlstetter argued in her study of the Pearl Harbor attack, fragmentation of perception inevitably produces an enormous amount of “noise” in the organization’s communications networks. The officials at the bottom must be instructed to report all potentially dangerous situations immediately so the organization can have as much advanced warning as possible. Their preoccupation with their specialties and their desire to insure against the worst possible outcomes, plus other biases, all cause them to transmit signals with a degree of urgency that in most cases proves exaggerated after the fact. These overly urgent signals make it extremely difficult to tell in advance which alarms will prove warranted and which will not.
Organizational hallucination is a frequent occurrence in wartime. At Gallipoli, the British commanders held mistaken ideas about the strengths and weaknesses of Turkish defenses. At Gettysburg, Confederate General Lee was misinformed about the disposition of the Union forces, in part because of flawed intelligence received from his cavalry commander.
At Gallipoli, the British general in the field ignored orders to advance quickly inland. This left his troops badly exposed on the beach. At Gettysburg, Confederate General Lee issued a soft order to General Early to take Cemetery Hill. But Early chose not to attack this important redoubt, allowing Union forces to shore up the hill’s defenses.


Following this analogy, it is clear that there will be a huge difference between an AI engine that is both trained and controlled by a third party and one that is trained and controlled by the organization itself.
Take your military examples and consider the implications of having all the mid-level officers be mercenaries hired from a company based in another country. Even if they are not actively trying to subvert the information flow, it will be distorted, because they have not been trained specifically in how that army’s leadership wants its information and in how its foot soldiers best receive their orders.
If I own the engine, I can do Quality Assurance, make corrections, retrain it, etc. If I let the major AI companies own the engine, then it's a black box that I need to be very careful about trusting.
This is a key point about AI. I get a lot of pushback about using it in hiring. “What if it’s biased?” To which I reply, “And the people doing it now aren’t?”