25 Comments
Ed Knight:

Following this analogy, it becomes clear that there will be a huge difference between an AI engine that is both trained and controlled by a third party and one that is trained and controlled by the organization itself.

Take your military examples and consider the implications of having all the mid-level officers be mercenaries hired from a company based in another country. Even if they're not actively trying to subvert the information flow, it will be distorted, because they're not trained specifically in how that army's leadership wants its information and how its foot soldiers best receive their orders.

If I own the engine, I can do Quality Assurance, make corrections, retrain it, etc. If I let the major AI companies own the engine, then it's a black box that I need to be very careful about trusting.

Handle:

There is currently a huge asymmetry between, on the one hand, the threshold ability and cost required to throw powerful generic transformer tokenizing tools at giant piles of ordered or categorized data and apply continuous refinements to produce decently impressive "AIs", and, on the other hand, the capability of even tool-augmented human experts to meaningfully scrutinize those systems or do anything remotely approaching 'debugging' them. Even fully owned systems developed entirely in-house, while they may be more secure against intentionally embedded hidden backdoors or malicious code, are so inherently, irreducibly complex that they might as well be black boxes for all anyone involved can actionably penetrate the details of their workings.

Doctor Hammer:

Indeed. Many companies struggle to keep their ERP systems working well enough to be used; AI is way beyond the horizon of complexity from there.

Alan:

This is a key point about AI. I get a lot of pushback about using it in hiring. “What if it’s biased?” To which I reply, “And the people doing it now aren’t?”

Tom Grey:

Great insight about up-and-down distortion in orgs, and its analogy in AI.

I’m sure there will be a steady hollowing out of middle management as orgs get better at using AI, including learning that what those at the top think they’re measuring with their current metrics is not quite what is really being measured. Often what the good middle managers report is what is most decision-relevant, rather than merely the response to a request.

If AI can perform at 80% or even 60% of the quality of current mid-level managers, that will allow a lot of slimming of those middle levels, meaning fewer managers each managing more. If managers can now manage 10 reports each, 5 levels means 100,000 employees; but if each could manage 20, only 4 levels would be enough for 160,000 employees.
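
A minimal sketch of that span-of-control arithmetic, assuming a uniform span at every level (an idealization; real hierarchies are uneven, and the function name is just for illustration):

```python
# Toy model: with a uniform span of control, the front-line headcount a
# hierarchy can direct through `levels` layers of managers is span ** levels.

def frontline_capacity(span: int, levels: int) -> int:
    """Front-line workers reachable through `levels` layers, each manager with `span` reports."""
    return span ** levels

print(frontline_capacity(10, 5))  # 100000: five layers of span-10 managers
print(frontline_capacity(20, 4))  # 160000: four layers of span-20 managers
```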

The vast majority of non-management folk don’t need college degrees; a college credential is used more to screen out those unwilling to jump through hoops than to certify knowledge, which is all taught on the job anyway.

AI in interviewing is likely to become a big thing, possibly even in estimating IQ from interview answers. Companies should be allowed to test for IQ or any quality they want.

Roger Sweeny:

In the last few days, Trump signed an executive order saying that the federal government will no longer find "disparities" illegal unless there is some specific act of discrimination. So if AIs sorta kinda test for intelligence and that has a "disparate impact", they may be safe from federal suits for the next 3.7 years, though people will still be able to use disparate impact in private suits unless Griggs is overruled.

stu:

There is an assumption in this model that the intermediate layers create faults and make the information and directives worse. Sometimes that happens. There are numerous examples of spectacular failures.

I would argue that more often than causing harm, the intermediate layers weed out bad information and modify directions in ways that make them work better. When these actions occur, they are largely hidden to all but the participants and are rarely as recognizable as Gallipoli or Gettysburg.

Doctor Hammer:

If it were true that the middle layers add efficiency on net, we would expect corporations to grow indefinitely. We see the opposite, however.

stu:
Apr 29 (edited)

I can think of at least three reasons why that probably doesn't follow from what I wrote:

1 At some point in their life cycle, a lot of companies struggle to find new products with which to grow their business. Microsoft had this problem for much of its life, at least under Ballmer. Apple and Google have had this problem to varying degrees from time to time too. IBM has totally remade itself at least twice, luckily finding a new primary business each time. No company has perfect success at finding new products and services to expand its business year after year.

2 I said I suspect they help more often than they harm. That doesn't mean the total amount of help is greater than the total amount of harm. Maybe the harms are bigger than the helps. Hard to judge when the helps are more often hidden from public view.

3 Even if the middle layers improve the flow of information and directives, there is still a cost of having all those people.

3a If I'm not mistaken, the middle layers have to grow faster than the company in order to maintain the same supervisor-to-subordinate ratio.

Doctor Hammer:

1 is a fair point, but it does not mean that a single firm wouldn’t eventually absorb the entire industry it is in. Even if the products are stagnant, there should eventually be only one firm if middle managers offer more benefit than harm.

3a is a mistaken claim: middle management doesn’t have to grow faster than other positions to maintain the ratio; with a fixed span of control it grows in proportion to headcount (see the sketch at the end of this comment). It is possible that it could grow faster, but it doesn’t have to. I said efficiency on net, which implies that the benefit is greater than the cost; I agree that if we ignore the cost of hiring, the net benefit might be positive while efficiency is negative once salaries are included. The fact that many firms try to flatten their hierarchy instead of deepening it for a given number of employees militates against the middle layers adding efficiency even when costs are ignored, however.

2 is just a bad argument. The question is whether middle managers on the whole clarify or obscure meaning and direction, not whether they are mostly OK except for the times they cock everything up.
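
A quick toy-model check of the point about 3a, under the assumption of a uniform span of control (the function and numbers below are illustrative, not from the thread): with a fixed span, managers per front-line worker stay near 1/(span - 1) at every firm size, so the middle layers grow in proportion to the company, not faster.

```python
# Toy check of 3a: count the managers needed above `frontline` workers when
# every supervisor has exactly `span` direct reports. The managers-per-worker
# ratio stays roughly constant as the firm grows, i.e. middle layers scale
# linearly with headcount rather than faster.

def managers_needed(frontline: int, span: int) -> int:
    """Total supervisors in all layers, up to a single person at the top."""
    total, layer = 0, frontline
    while layer > 1:
        layer = -(-layer // span)  # ceiling division: bosses for this layer
        total += layer
    return total

for n in (1_000, 10_000, 100_000):
    m = managers_needed(n, span=10)
    print(f"{n:>7} workers -> {m:>5} managers ({m / n:.3f} per worker)")
```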

stu:

I probably shouldn't respond to that but...

1 It's a little like flipping coins until you get tails. GE was a pretty good example, though not like the ones I gave previously: it kept adding subsidiaries until one blew up and everything unraveled.

4 Or maybe the limit is just that it becomes too much for the one person at the top. Lots of reasons not to have one infinitely large company.

2 No, it means that lots of mostly small improvements might not add up to outweigh a few big harms. That doesn't change what I originally said, but it makes it hard to get your result.

3/3a Flattening the firm means each supervisor has more direct reports, and it gets harder to filter info and give tailored directives. The benefits of the middle layers decrease, and costs decrease, but I'd argue you still have the potential for a Gettysburg or a Gallipoli.

John Alcorn:

Re: "the shortcomings and biases that an AI might have are similar in kind to those that middle management has. The difference is a matter of degree."

Earlier in the essay you note that middle management has its own *interests* that produce distortions beyond bias and noise.

Opportunism by agents (e.g., by middle management) — i.e., the pursuit of self-interest at odds with the principal's (or commander's or executive's) instructions and expectations — is different in kind from shortcomings and biases that a current AI might have. Is any current AI motivated by its own self-interest as it handles instructions given by those who employ it?

Or perhaps I misunderstand the psychology of AI?

Handle:

Right now most people are thinking in terms of generalist AIs. But human and biological history and economic theory weigh in favor of a diversity of niches and a variety of entities, each specially optimized to occupy and function in those niches. Very soon it will become common for multiple AIs to work together, either in limited sets of transactions or in 'firms' and 'organizations', taking advantage of division of, um, 'labor', and of the hard-to-copy abilities or efficiencies or accesses of specialist AIs.

That means AI project managers that, to win the ruthless competition for survival, will have to figure out how to solve the various kinds of alignment, collective-action, and principal-agent problems for all their AI 'contractors' or 'employees' or 'partners'. Hopefully, whatever solutions they figure out, we humans can just copy and apply to the top-level AIs too, with regard to human welfare, interests, or values or something.

The trouble is, groups of humans often want to crush other groups of humans, and are going to create and use AIs which are necessarily going to be engineered specifically to severely discount the welfare, interests, and values of at least some humans.

Michael von Prollius:

I like the alignment problem model. One might also look at it in a Hayekian way: a few people at the top cannot know enough about problems and the consequences of action. There is no discovery process in bureaucratic systems, because there are no markets.

Moreover, that raises the question of whether people in a bureaucracy, starting from the top, are truly interested in solving problems, or even in deeply understanding them in the first place. Their interest might be different: political, personal. What's in it for me?

luciaphile:

Though it is somewhat more customary to pound on the left on this blog, I will say that "hallucination" seems a good word to describe fringe-y (or rather, people who should be fringe-y but actually tend to have the megaphone, at least at the state level) members of the GOP. (I use the term GOP advisedly, as these folks obviously have nothing of conservatism about them, nor could they.)

They will learn one real word, or hear and misconstrue one fact - then thereafter spin it into wild ravings, their one real word surrounded by a lot of nonsense. And indeed up the chain it goes, without thought on anyone's part, or examination.

stu:

Sounds like both parties to me.

luciaphile:

I was touching on some current event or other with the other party here, and he mentioned casually, without rancor but with a little wonder, that people - our people, in some ways - are only getting dumber as the past few years have unfolded, despite what would seem lessons aplenty, big and little.

(Yes, yes, we don't say stupid here, because various social scientists have schooled us on that, throwing up to us our own imperfect vision. Fine. I doubt anyone here goes a day without thinking, if not uttering, that something or someone is stupid.)

And the form that takes on the left, to me, calls to mind the C.S. Lewis quote "There are a dozen views about everything until you know the answer. Then there’s never more than one."

On the left there are ever the many answers, some bright-sounding, some less so, to pick over, and over and over; and a reluctance, or indeed allergy, to learning and stating the right one.

stu:

I would argue strongly that the problem is exactly the opposite - too many people thinking there is one right answer.

Sowell got it right, at least in the vast majority of circumstances. There are no solutions, only tradeoffs. I'd say that most of the time it isn't that other people are dumb, it is ourselves who don't see why their tradeoff is at least reasonable even if it isn't the one we would make.

luciaphile:

I read a tremendously well-written and well-researched book, "Indianapolis", that I suppose must be the exception that proves your rule about urgency and exaggeration and false positives.

In that case, as best I recall, there was a curious (had attention fallen off toward the close of the war?) and prolonged indifference to the various hints, as well as direct messages, that a Navy cruiser carrying 1,200 men, fresh from delivering one of the atomic bombs, had dropped off the radar.

Cranmer, Charles:

I would add that in my career on Wall Street, I learned that the mark of a really good CEO is that he (or she; whew, almost blew that one) allows and even encourages BAD news to get to the top unimpeded. CEOs who want only good news will never be able to respond to events proactively and will ultimately be blindsided by a really big problem.

Let's just say that this rule has implications for our current president, certainly one of the worst CEOs who ever lived.

Roger Sweeny:

"Also, those down in the chain of command have their own interests and biases. They decide what they want their superiors to know. What they pass up the chain may omit crucial facts, be misleading, or could be downright false. Organizational hallucination results.²"

According to The Best and the Brightest, this was a major problem for the US in the Vietnam War. McNamara's strategy was basically, "we will kill so many of them, compared to how many of us and our Vietnamese allies they kill, that they will run out of fighters." That made it imperative to have a high "kill ratio" of bad guys to good guys. So the people near the bottom of the organization sent up wildly inaccurate kill figures, counting almost anyone killed as an enemy. The top brass loved seeing that, and couldn't understand how the enemy kept fighting at a high level.

luciaphile:

The write-up the Washington Post did (five or six years ago?) of the "Long War" was damningly similar in terms of information flow, or rather information stanching and silo-ing.

I still think of that piece as a kind of last gasp of the mainstream media.

Prior to that, I had considered The Last Journalism to be a piece the NYT once did on the fates of those whom China forced off the land and into high-rises, with nothing to do but play video games, the lives they had lived only moments earlier immediately memorialized in an absurd museum.

Now that I think about it, it must take a real effort of will, for an actual reporter to bring their work to the attention of the people on the upper floors.

Nathan Smith:

The conclusion here seems innocuous to the point of triviality, yet it's really only obvious ex post. Your theory gives it additional meaning. It's a great example of putting old theories to work helping us grapple with a changing present and an uncertain future. Thanks!

Charles Powell:

We see AI everywhere but in the productivity statistics.

Handle:

In the chip production, data center construction, and electrical power consumption statistics, on the other hand ... Or how about the Waymo ridership or OpenAI subscription growth statistics? "It's happening."

On the other hand, labor productivity figures from the BLS are kind of crazy, even for industries with easy-to-measure commodity outputs, with closely related sectors showing anything from 30 percent drops to 70 percent improvements in just the past 7 years. E.g., "metal ore mining" down 28%, but "mining" up 40%. I suspect that, similar to what happens where I work, it's not human middlemen lying or massaging or manipulating the data to be at odds with the true picture; it's bureaucrats reporting exactly the data they are supposed to, arrived at in exactly the way they were told to do it. It's just that the output of a specified garbage analysis is also useless garbage. Leaders and "customers" often take the very existence of reported data as a kind of proof that it must have some valid theoretical basis and mean something important and useful. Adequately data-skeptical, standard-analysis-reevaluating management is extremely rare.
