The key question is whether AI reduces transaction costs more in the market or within the firm. My guess (and it’s only a guess) is within the firm. Large organizations have trouble understanding what is going on within the organization. The activities of the organization are opaque to its leaders. Principal-agent problems abound. The organization becomes sclerotic and unresponsive. This is the classic complaint about bureaucracy.
But bureaucracy is actually a technology to deal with the very real problem of coordinating activity across multiple agents in an organization. It does experience diseconomies of scale, however, which limit the efficient size of firms. I suspect AI will greatly lower the costs of bureaucracy by making the organization more legible to its leaders. This will allow firms to expand in size, scope, and complexity. (Of course, on the margin they’ll be just as dysfunctional as they are now, but the margin will have moved out considerably.)
It is a separate question how many of the agents in these larger firms will be AI and how many will be human. There will be a mix, with tasks assigned based on comparative advantage (not absolute advantage!). Exactly what that mix will be, and how it will vary across industries, is anyone’s guess. I think we just don’t know right now where human and AI comparative advantages will fall.
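To make the comparative-advantage point concrete, here is a toy sketch with invented numbers; the tasks, hours, and agents are all hypothetical, not anyone's estimates. The AI is absolutely faster at both tasks, yet the human keeps the comparative advantage in one of them:

```python
# Toy illustration of comparative vs. absolute advantage.
# All numbers are invented for the example.

# Hours each agent needs to produce one unit of each output.
hours = {
    "human": {"writing": 2.0, "data_entry": 1.0},
    "ai":    {"writing": 0.2, "data_entry": 0.01},
}

# The AI has an absolute advantage in both tasks (fewer hours for each).
# But efficient assignment follows opportunity cost, not raw speed.
for agent, h in hours.items():
    oc_writing = h["writing"] / h["data_entry"]  # data-entry units forgone
    print(f"{agent}: 1 unit of writing costs {oc_writing:g} units of data entry")

# human: 1 unit of writing costs 2 units of data entry
# ai:    1 unit of writing costs 20 units of data entry
# The human's opportunity cost of writing is lower, so the efficient mix
# assigns writing to the human and data entry to the AI, even though the
# AI is absolutely faster at both.
```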
Your Substack and Gioia's are two of only a handful that I pay for (I subscribe to and read a lot of the free versions of others). I have found him insightful but sometimes wrong on the merits, as he appears to be here. I would encourage you to invite him onto a live event similar to what you did with Lyman Stone (https://www.youtube.com/watch?v=XLvnJGN0A9A). I think it would be a very good conversation.
As the world's youngest dinosaur, I agree people spend too much time on their phones and too much time in front of screens in general. Way back in the year 2000, the largest rear-projection TVs would probably cost around $4,000, and the largest sizes were under 70 inches, and this at a time when people made a lot less money than now. Now you can buy an 85-inch TV at Walmart for $700, and if you put it side by side with a TV from 2000, it would make the old rear-projection TV look like a complete piece of crap. You also had to leave your house to get a movie from Blockbuster, and if you had to leave the house anyway, why not head to a theatre, especially since your grainy TV wasn't that big or good? I would like to see some numbers on phone time versus other screen time, but I suspect people are watching more movies than ever in their own home theatres. See this podcast with Razib Khan talking to working screenwriter Zach Stentz: "nearly 50 million households around the world ended up seeing my little $20 million, you know, kids-versus-aliens movie. 50 million people would not have seen that in the movie theater." - https://www.razibkhan.com/p/zack-stentz-andromeda-to-x-men
Thank you for the concise and spot-on response to Gioia! His constant carping about poor creators drives me nuts and has me close to quitting reading him entirely. You said it much better than I could.
To modify your statement: “In Substack today, the hard part now is not creating content or making it available. The hard part is getting attention.” This seems to be an example of exploding supply: many, many writers complaining about small readerships while a few garner the lion's share.
The Dwarkesh stuff is interesting, sure. But IMO any discussion of likely outcomes once we truly have AGI is pointless - or at least mere science-fiction speculation, which may give insights but offers no way to reason logically to anything close to a conclusive, or even probable, result. Once we have full AGI, “the singularity” seems more likely than most of the conclusions in this piece.
I think I’m with AK that the more interesting case is when we have much smarter AIs that aren’t quite AGI. I surely concur that there can and likely will be far fewer middle managers in such a scenario. But fewer is not the same as none.
And it seems to me that humans are much more likely to be the ones doing the experimentation and generating the important data/knowledge that the AIs will then absorb and replicate well.
But having noted that point, it seems to me Dwarkesh and his coauthors lack real-world experience of what frontline employees and individual contributors who are not coders actually do. They surely do understand what middle-management knowledge workers do - it seems most analogous to what most in academia do - and so the logic and conclusions seem somewhat persuasive there.
But they seem to blindly assert, almost without basis, that the same logic applies to all other roles within a firm. Maybe the “fish in water” analogy applies here.
The criticism of Gioia here is spot-on!
The essay talks about digital. OK. What about businesses that actually make stuff? Today I drove by a gargantuan - gargantuan, meaning bigger than any factory I've ever seen, bigger than the entire Rouge Complex in Detroit - manufacturing facility in Guangu (Optics Valley), Wuhan. The parking lot had spaces for about two dozen cars. It's full of robots. I know someone who's working there. He told me it's all robots. The few humans on each shift are only there to monitor robotic activity.
A lot of outsourcing from and immigration into developed countries over the past two generations has been for doing things with human robots that could have been done by mechanical robots a long time ago, because certain human robots in certain places were still cheaper than using mechanical robots. That window is closing fast.
I don't follow Gioia, but being a digital creator just isn't a job. It's self-expression.
There is so much FUD about AI, and for good reason. AI is the perfect sales pitch. AI sells more computer chips. AI sells more consultants and service contracts. AI conveys technological brilliance and novelty. AI, AI, AI.
But what can AI actually do besides providing a wow factor that otherwise lacks substance? Well, if one hires software programmers, one can build an AI model tailored to one's data, and it can likely produce informative responses to queries about that data.
In other words, if one spends money, one can get something in return. Does one get back more than what one put in? The answer is unsettled: once the questions change, more AI programming is needed. This points to AI being just another computer service that businesses must expense, not an exponentially superior solution.
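As a crude stand-in for that idea, here is a toy sketch; the documents, queries, and matching logic are all invented, and simple word overlap stands in for a real model. It also illustrates the point above: the system handles the questions it was built for, and new kinds of questions mean new code:

```python
# Crude stand-in for "an AI model tailored to one's data": rank the
# company's own documents by word overlap with a query.
# All documents and queries here are invented for the example.

docs = {
    "q3_sales":  "q3 sales rose in the midwest region but fell on the coasts",
    "returns":   "product returns doubled after the packaging change",
    "headcount": "headcount grew in support and shrank in marketing",
}

def answer(query: str) -> str:
    # Score each document by how many query words it shares.
    words = set(query.lower().replace("?", "").split())
    best = max(docs, key=lambda name: len(words & set(docs[name].split())))
    return docs[best]

print(answer("what happened to sales in the midwest?"))
# -> "q3 sales rose in the midwest region but fell on the coasts"
# It works for questions the data anticipates; a genuinely new kind of
# question means changing the data or the code - which is the point above.
```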
One of the fundamental flaws of AI is that, to simulate thinking, it has to make errors. This means AI requires error correction. Who does this? And at what expense? Humans learn by making errors, but they are able to recognize their errors! AI, at least from what I've read, is so lacking in self-awareness - well, it cannot have self-awareness - that it leans on feedback to correct itself. In other words, AI can be manipulated!
No business is going to deploy a customer-facing information system that customers can manipulate. And there is risk in deploying such a system internally in a business. This means the AI that is put to use will need to be deliberately coded to be just a machine. And you don't need all the AI infrastructure for machine automation.
"Humans learn by making errors, but they are able to recognize their errors! AI ... cannot have self awareness - that it leans on feedback to correct itself. In other words, AI can be manipulated!"
Good thing humans can't be manipulated! And that they always recognize their errors.
The key difference is that, because we humans know of human fallibility, we implement checks so that a single human cannot do great harm. What would be the equivalent system of checks for AI? How does one AI system get elevated to be supervisor over other AI systems?
Sometimes "we" implement checks; sometimes "we" don't. Genghis Khan, Stalin, Hitler, Mao, the three Kims.
(I have no idea how to check AI.)
“In entertainment today, the hard part now is not creating content or making it available. The hard part is getting attention.” Arnold Kling’s number one issue is higher education. He is obsessed with individual and social learning. Further down on his list of priorities is entertainment (or merely gaining more readers).
Why does Ted Gioia have an astonishing 218K subscribers while Kling has a mere 9.3K?
Perhaps because most people do not care about learning like Arnold Kling does. Most people tend to follow the herd. They conform to simplistic metaphors rather than pursue better understanding and more sophisticated metaphors.
Why is this? Why do some people care much more about truth and learning than others?
The overall sense I get from these quotes is that AI is forcing people to readdress Coase's article on "Why are there firms?" It looks as if AI can (or will) greatly reduce transaction and search costs for different activities, so we will likely see more, smaller firms or enterprises in many areas in the future.
There are lots of costs to getting humans in your firm to do what you want them to do. There is a high and, on average, constant marginal cost of training every additional human in what you want. With AI, the fixed cost might be lower (and training certainly faster), and the marginal cost of each additional instance could be zero.
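A minimal sketch of that cost intuition, with entirely made-up figures (the dollar amounts and function names are illustrative, not estimates):

```python
# Toy cost model for the comment above; all figures are invented.

def human_training_cost(n_workers, cost_per_worker=5_000):
    # Each additional human must be trained individually, at a roughly
    # constant marginal cost, so total cost is linear in headcount.
    return n_workers * cost_per_worker

def ai_training_cost(n_instances, fixed_setup=50_000, cost_per_copy=0):
    # Train (or fine-tune) once, then replicate: the marginal cost
    # of each additional instance is close to zero.
    return fixed_setup + n_instances * cost_per_copy

for n in (1, 10, 100, 1_000):
    print(f"{n:>5} agents: humans ${human_training_cost(n):>9,}  ai ${ai_training_cost(n):>9,}")
# Past the crossover (10 agents with these numbers), every additional
# trained agent is nearly free on the AI side but costs the full
# per-worker amount on the human side.
```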
For publishing and creating content, AI will replace humans. For solving problems, I am unconvinced AI will be anything more than a tool - a potentially useful tool, but just another tool. The reason is that AI is not human and will not be human, and thus it will always require human input to know what problem to solve.
Left to its own devices, AI will solve the human problem by killing the humans. Not because AI is nefarious, but because it is dumb. I believe I read an anecdote in which someone showed this was one of the recommendations an AI gave in response to a human concern.
To be fair, humans also have a long track record of recommending "kill lots of particular humans" when proposing solutions to various problems. "Kill ALL the humans" is the only novelty.