Tyler Cowen on using LLMs as a professor; Ben Thompson on AI disrupting the Web Ad model; Charlie Guo explains the disruption; Sholto Douglas on rapid AI progress
Tyler's essay is one of the most thoughtful essays I've read this year, but it could be even more optimistic than it is. I would say that AI is the most liberating technology of my lifetime, perhaps most liberating for men.
Arnold says: I continue to believe that the biggest risk you can take of making a fool of yourself is to make statements of the form “AI will never be able to ____.”
Well, let me prudently go down that road -- that's where the comparative advantages are, maybe? Because if AI can easily do something, then maybe you shouldn't make that your living. Unless, of course, you like the idea of doing a lot more of it (because of AI); e.g., see Tyler's example of the young man with five programming jobs because of AI. In other words, perhaps the expectation will become: "Hey, you should be able to do 5X with AI, and if you aren't, we don't want you." In the construction trades, sheetrock and framing are considered volume work. Volume work is lower status and physically draining. So, will AI spawn new categories of volume work? Probably.
Instead of saying "never" let's say it will be a "long time" before AI can replace a plumber, a finish carpenter, an electrician, a farmer, certain welding tasks, and anything that would require a custom-designed robot to accomplish a specific task. If your career involves many such different tasks, then AI will not replace you anytime soon. Right now AI can teach plumbing, but it can't do plumbing and it will be a long time before it can. It will be a long time before AI can do any type of custom building and fixing of physical things. The human body will be superior to AI for a long time in this area. I see this as being a good thing for men. Perhaps AI is more a threat to women and professors?
We see AI everywhere but in the productivity statistics
I suspect we just aren't counting things right. Consider graphic design output, and imagine it like a black box. You put prompts and money in, there are people inside doing God knows what, and graphics come out. It seems to me that if you compare snapshots from ten years ago to today, and divide output in graphics by people in the box, the rate of "real productivity" has gone up by 100 percent. Oh sorry, left a word out. I meant 100 MILLION percent! If one instead divides money by people, it won't be clear what happened, or that there has been a total revolution in productivity, because the price of providing comparable substitutes simultaneously and suddenly cratered by 99.99%.
I've talked to people who use AI all the time, and I find their accounts credible.
Also, nobody in my department uses AI and my one attempt failed hard.
In the 90's, people who were ahead of the curve would look around the office, see what people were doing, and think, "this could be automated right now" and "that guy's nice middleman job could just be a website." They were right. But it still took many years to actually happen, and not because it was too difficult or complicated or costly.
It takes years for the people who get to make decisions about it to get comfortable with doing things a new way, or to use "consultants" selling the software and providing "advice" as the cover story for big changes that would generate a lot of internal acrimony. I look at the offices around me now and not only see lots of (indeed most) work that "could be automated right now"; the real breakthrough capability created by the new AI tools is that it is work that could be automated -by me- if I were allowed and incentivized to do it. But I won't be, so nothing changes for a long time.
"that guy's nice middleman job could just be a website."
The key difference is that even without any legal issues and misaligned management incentives, you couldn’t just replace the middleman with a website and leave everything else the same. A lot of other parties would also need to change the way they work.
Whereas at least in the case where the middleman is a remote worker, with a sufficiently advanced AI, you could do exactly that.
It will be a bitter irony if it turns out that modern white-collar workers have only expedited their obsolescence by insisting on remote work privileges. This has put them in a situation where their automated replacement can be deployed in a perfectly seamless way: literally, their laptops get disconnected and another computer is connected in their place, only this time without a human operator.
Of course, this assumes the availability of a sufficiently competent and reliable AI, and nobody knows when exactly that is coming. But when it does arrive, to me it seems that laptop jobs will have very little friction and inertia standing in the way of their replacement.
There's no irony there. The only insistence I see these days is organizations insisting that employees return to the office. For employees I don't see insistence so much as bluffing and whining followed by folding and acquiescence (I speak from experience).
Insistence just doesn't mean anything without the bargaining power that comes from being able to slide into lots of equally attractive opportunities with low friction. For a short window of time, that weighed in favor of laptop-class employees, but now it's back with the organizations. For employees who could be relied upon to be very productive in remote work conditions, this is really unfortunate. But most employees are not like that, and most supervisors (100% of the ones I've spoken to) think it's far too difficult to do anything but have a uniform policy for everyone.
I agree that “insistence” was the wrong word to use here. My main point, however, was a technical one: a remote work arrangement necessarily rests on the assumption that the productive output of an employee consists solely of a stream of bits transmitted through a wire. And therefore, any system capable of producing an equivalent stream of bits would be an instant, perfect, frictionless, zero-downside replacement for that employee.
What I meant by the irony is that what was once seen by employees as a benefit (i.e., the option to work remotely) may turn out to have been a key factor in making their own automation and obsolescence much easier and faster. Because it has given employers direct practical experience with an arrangement where a worker is tied to the workplace solely with an internet connection. So once a fully capable AI arrives, it just falls into place seamlessly, like a final piece of the puzzle. (Very unlike the old example of automating a middleman with a website.)
The thing is, what people in my department are doing isn’t automatable work. It’s strategy. It’s by definition different every bid season and indeed changes week to week as new information becomes available. We just had to make a change on Friday. Even the stuff you might think could be “automated”, AI just doesn’t seem to do well. Like, I can’t even get it to pull some info off a website. I’m better off downloading the .csv myself; it only takes a minute and I do it right, without errors.
Now, my friend who needs code written, uses code similar to the last 100 times, and can use a giant code base to fix issues he’s seen before: yes, he can automate that with AI.
We are so used to the internet now that certain feelings and experiences of how people interacted with the early internet are quickly fading from the memories of even the people who were there and lived through it. This is "the past is a foreign country" stuff that one can hardly explain at all to young people.
For example, try to tell someone under 30 that "Merely using internet search engines like Google to look things up and find websites used to be something that you could be -really good- at in the same way you could be good at a sport. For longer than you would think after these things came out, most people were actually not good at it, knew this, knew there were the equivalent of "search athletes" out there, and would have to ask for help and they would be extremely impressed and grateful and call you a wizard without irony if with just a little explanation you were able to get the result they wanted in under a minute whereas it might take them an hour or more likely they would just give up in despair, and even -blame- the stupid search engine. Eventually people got better at search, and search also got better at providing the results people wanted when putting in their badly-composed queries, and so it just became ordinary life and not a skill. Then search suddenly became complete crap and even wizards can't make it do what they want, but that's a different story."
They will look at you funny, but these are facts; this is exactly how it really was. That's how it is with AI tool-wielding and prompt-engineering now. Currently, a very small fraction of people have already gotten extremely good at "speaking AI," being familiar with all the different tools and capabilities out there, and knowing how to efficiently get the tools to do what they want. Lots of people are using the tools at a complete dabbler or tyro level, with novice results as a consequence of bare-minimum investments of skill and effort.
But I expect that to change quickly.
Five years ago, a state of the art AI could hardly string together a coherent sentence. These things are advancing very rapidly.
Even now, if you’ve only used freely available models that are some months behind the most advanced ones, you’ll have a very incorrect idea of what the presently available capabilities are.
Of course, the models are still unable to achieve the level of reliability and common-sense handling of unusual cases that can be expected from a competent human worker. And it’s hard to tell how far in the future such capabilities are.
We have, so to speak, exponential advances running into exponential complexity of the real world, and one can only speculate when the critical competence threshold will be reached for any particular useful skill. But it would be naive to dismiss at least a plausible chance that it’s coming for most white collar work quite soon.
I retired as an organic chemist 14 years ago- I could have used AI as it currently stands extensively if it had been available at that time. Almost all of the scut paperwork I did would be done today in about 0.1% of the time. I think the only issue would have been my employer not wanting AI used on anything connected to intellectual property which would have been almost 100% of my job.
My organization is stuck in a similar bind. It wants to use AI, but it can't make the tools on its own in-house, so it has to figure out a way to use only-slightly-tweaked versions of the cutting-edge commercial stuff. But it doesn't want that stuff to have access to anything sensitive, or to be able to reach out to the public internet if there's any chance of the sensitive stuff spilling out. So, the prototypes we have are necessarily 99.9% useless and unimpressive in terms of demonstrating use cases that might excite anyone, which is like forcing a product to advertise for its own rejection.
I am, if not a complete Luddite, officially a late-late-late adopter . . . and I do not "intentionally use" AI, but I see it creeping into my life: for example, on Amazon, when thinking about buying a book, I now just read the AI summary of reviews instead of the actual reviews. A bad AI use right now is the Yahoo Mail summarizer, which generally gets the content of the email exactly bassackwards, and I cannot figure out how to turn off that "feature," which is actually a bug.
It’s a surprise to me that none are quoting the AI advances in education from Nigeria:
https://blogs.worldbank.org/en/education/From-chalkboards-to-chatbots-Transforming-learning-in-Nigeria
(Did I miss an earlier post?)
IBM failed to follow my advice to make Watson an English tutor, as so many Americans can be ESL teachers throughout the world, but that option is rapidly drying up. A personalized AI talking tutor is coming, and likely earlier in poorer areas. And maybe homeschooling pods of small groups of kids whose parents share similar values. A Young Lady’s Illustrated Primer, or almost, for all those who now go to lousy schools.
So more young folk can achieve their Full Potential. Unfortunately, for those with below average IQ max potential, it won’t make them smarter than average.
The other AI shock is how good video got; now it can talk. But what does it have to say?
https://thenewneo.com/2025/05/23/ai-is-it-real-or-is-it-memorex/
Fake AI OnlyFans porn might already be here, or somewhere (heaven forbid explicit sex on Arnold’s blog; so much of it elsewhere, I’m comfy here without it), so the real girls might need to get more explicit, 526 men in 24-hour events of sex with all, or something.
AI will make it easier to deepfake crimes against the innocent. We need a Digital Utilities Commission to require AI-generated stuff to be identified as such. There will probably need to be big false accusations first.
"We need a Digital Utilities Commission to require AI-generated stuff to be identified as such."
Why a commission? Why not a one sentence law requiring it? And then a big publicity campaign to make sure people know the law.
To the extent that AI enables greater use of micro-payments for content, it will be because it obscures the fact that someone is requesting payment for their content. The fundamental problem is that so many people are willing to produce content for free (essentially, for non-monetary compensation) that even requesting a small fee is an immediate cause for rejection in favor of competing content. Just ask anybody trying to make a living, or even cover expenses, as a musician, especially a vocal soloist.
I didn't read Tyler's piece, as it's paywalled, but with advanced AI doing all the work other than networking, what are you networking for? It seems to assume an unchanged academic job market (and maybe that will last a while).
If one should read more realistic fiction to learn about psychology, one should probably read more science fiction to think about the intersection of human psychology and AI. Science fiction, written by people for people, has to portray people with a level of psychological plausibility to be appealing. Not mentioned in the lists below is the Dune series, which has a minimum of 5 different examples (models?) of superintelligences, 3 human and 2 non-human.
"what are science fiction books with artificial intelligence networks?"
https://chatgpt.com/share/6832fee9-14a8-800f-b550-6ed77eb4ad71
Same question asked of google also yielded Klara and the Sun by Kazuo Ishiguro and Machines Like Me by Ian McEwan.
24 Best Artificial Intelligence Science Fiction Books
https://best-sci-fi-books.com/24-best-artificial-intelligence-science-fiction-books/
20 Must-Read Sci-Fi Novels about AI
https://bookriot.com/sci-fi-novels-about-ai/
Might be worth mentioning that the Dune society had an absolute proscription of thinking machines.
And yet in the end it was the two different AIs that were the villains and existential threat to humanity.
I'll have to take your word for it. I have only read the first two or three books -- and only really like the original Dune.
I liked the first 4 a lot, it definitely gets weirder in 5 and 6. I debated whether to read Brian Herbert's finishing of the story, but I really wanted to see where it went in the end.
Yeah, the first four novels definitely are better than Frank Herbert's last two and I think it was because the first four are intimately connected together in a broad narrative arc. Herbert then wanted to go in a different direction after "God Emperor of Dune". I also read the son's finishing of the 2nd narrative arc and was less impressed but was entertained.