If, besides education, the world mainly needs health care, doesn't that also suggest that the education the world needs should mainly be about health care?
On the cost-effective healthcare front, Peterson/KFF reported Thursday that “the evidence continues to support the finding that higher prices – as opposed to higher utilization – explain the United States’ high health spending relative to other high-income countries.” However, their study concedes:
“additional factors like administrative overhead and intensity of care delivered may also impact differences in total health expenditures between countries, the fundamental pattern of the U.S. having higher prices for healthcare services and using less care on average has persisted for many years.”
One is hard-pressed not to conclude that “more cost-effective health care” is very unlikely to be achieved through changes in medical practice, but rather must be achieved by addressing the US’s longstanding problem of grossly excessive health care administrative expenses, as has been widely recognized. See:
“The incorporation of AI tools developed from these data systems both within organizations and seismically across the health care system can (1) promote transparency via payer/provider data-sharing platforms; (2) automate routine, evidence-based care to reduce ineffective, inefficient, and inconsistent medical decisions; (3) align incentives of key stakeholders by incorporating epidemiologic informatic insights and individual patient-centered value quantification to inform physician-patient decision making; (4) mitigate care delays from prior authorization and claims processing via centralized digital claims clearinghouses; (5) guide payment model evolution to accurately and transparently reflect costs of care for patients with different risk profiles; (6) harmonize quality control reporting for comparability; (7) simplify and standardize prior authorization processes to reduce administrative complexity; and (8) automate nonclinical repetitive work (credentialing, quality assurance, and so on). Adoption of these tools can eliminate $168 billion in annual administrative costs.”
Of course $168 billion is but a drop in the ocean of wasteful non-medical healthcare spending in the US, yet close monitoring of this trend going forward might reveal the policy obstacles to progress here and in other areas.
Health care costs are not going to be slain by simple AI as you say, but then again, $168 billion here, $168 billion there, and pretty soon you are talking about real money :D
As long as doctors, drug companies, medical device manufacturers, hospitals, nursing homes, medical researchers, and others can earn money by lobbying or scamming the government, medical spending will be more wasteful than if people paid for their own health care. The education industry is similarly afflicted with government dominance.
Here is a ready test for AI programming: feed the algorithm the source code of an existing, useful software product and tell it to (a) make the code more efficient and (b) add certain capabilities.
Then test the result. Did AI produce more efficient code? Did it add the features? Did it create bugs?
A related AI test is this: Take three distinct AI systems and have one suggest ideas, another one implement those ideas and a third evaluate the creation. Run this iteratively. What happens? Does this process yield an amazing product, a meh product or a bad product?
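That propose/implement/evaluate setup can be sketched as a simple loop. The three functions below are hypothetical stand-ins for calls to three distinct AI systems; they exist only to make the control flow concrete, not to represent any real API.

```python
# Sketch of the three-role loop: one model proposes, one implements,
# one evaluates. All three functions are deterministic stand-ins for
# calls to three different AI systems.

def propose(history):
    # Stand-in for the "ideas" model: suggest the next feature to try.
    backlog = ["add input validation", "cache results", "log errors"]
    done = {entry["idea"] for entry in history}
    remaining = [item for item in backlog if item not in done]
    return remaining[0] if remaining else None

def implement(idea):
    # Stand-in for the "implementation" model: return a code artifact.
    return f"# patch implementing: {idea}"

def evaluate(artifact):
    # Stand-in for the "evaluation" model: score the artifact.
    return {"passes": True, "score": len(artifact)}

def run_loop(max_iters=5):
    """Run the propose -> implement -> evaluate cycle iteratively."""
    history = []
    for _ in range(max_iters):
        idea = propose(history)
        if idea is None:          # nothing left to propose
            break
        artifact = implement(idea)
        verdict = evaluate(artifact)
        history.append({"idea": idea, "artifact": artifact, "verdict": verdict})
        if not verdict["passes"]: # stop when the evaluator rejects a step
            break
    return history

results = run_loop()
```

Whether such a loop converges on an amazing, meh, or bad product likely hinges on how good the evaluator is — with a weak evaluator, errors compound instead of getting caught.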
I fully agree with comments that LLM algorithms are extremely useful for conveying information. I am unconvinced that LLM AI works or can ever work as the visionaries promise. Pattern matching is useful. But progress is made by trial and error and LLM AI is unable to detect error! How can a machine fix what it doesn't know exists?
I agree with you that, for the best programmers, the gains from using AI right now probably aren’t that great. It’s the people who are not at that level who benefit.
I’m certainly not great at it; I’m at the “know enough to be dangerous” level. Using AI has increased what is possible for me to do, made it faster to get things done, and raised the overall quality.
It has also improved my skills, because I love asking it to explain why things are done certain ways. That saves tons of time digging through old Stack Exchange posts for problems similar to mine, and even when I managed a fix that way, I didn’t really learn why it worked.
My impression is that AI is really good for zero to one projects where you're making a brand new product. But it's significantly less helpful when you're dealing with a pre-existing very large codebase and trying to fix bugs or implement small enhancements.
Great post, especially noting the focus on health & education.
Somebody will make a successful Illustrated Primer for learning English as a second language. We all know that more private money is spent on learning English than on any other subject. Programming an AI tutor to personalize English instruction will be a near-term killer app. Maybe. Like voice-to-text, it might remain too hard to reach the 99.99% accuracy desired/required.
Substack auto-transcripts don’t seem to be getting much better the last year or so.
AI is the ultimate DIY power tool. A ton of new utility and value has been unlocked, but because of its "home production", non-traded nature, it is not going to show up in the usual economic statistics. I've said similar things about modern power tools and YouTube in the past, and for DIY, AI is several qualitative tiers above mere videos.
My personal anecdotal experience is that AI tools have proven extremely valuable, far in excess of what I've paid for them. I have now done dozens of formerly "technical professional services" projects, safely and effectively, for myself, in short order, that were until recently completely beyond my capability on almost any timescale. The former "pay a human" prices were totally prohibitive, but now that I am augmented with powerful AI tools at negligible cost, all sorts of possibilities have opened up. I've saved thousands of dollars just by being able to do diagnostics and repairs on big-ticket items myself.
In addition to the utility of the pride and satisfaction one feels at pulling these things off with one's own "mens et manus", as it were, there is also the avoidance of the feeling of modern principal-agent humiliation at being utterly helpless and dependent on the word of a professional who has every incentive to avoid liability-risking real-talk and to exaggerate your needs and make the bill as large as possible and who can easily get away with feeding you a bunch of malarkey about how long things take and so forth. (Before someone tries, please don't give me that stuff about Angie's List or other reputation systems or getting second opinions, I have plenty of direct experiences and arguments against the real practicality and utility of those things.)
Now, one might suspect that, since the learning curves and the cost of investment in human capital necessary to pull off these projects have collapsed, and since the threshold to use the AI tools effectively is not too elite, a whole new class of AI-assisted workers would pop up to offer their services much cheaper than the existing professionals, substantially lowering prices for those services.
But there seems to be some kind of inertia, friction, or barrier to entry blocking the establishment of that kind of employment, so potential consumers like me just end up doing a lot more things "on their own". The amount of the service being provided is going up, but the number of workers providing it professionally, and the prices they charge, do not seem to be changing very much. Or else, as in some fields already, there is a fairly rapid collapse in the number of humans getting paid well to do that kind of work.
Beautiful comment.
How long will this situation of seemingly free AI with massive benefits last?
Forever. Well, unless and until there is some kind of global technological reversal. That's the nature of innovation*. Like the "hedonic treadmill", it's human nature to recalibrate and to stop really appreciating these benefits and to take them for granted as they become viewed as merely "normal" and "ordinary" and even "default".
*There are a few rare categories of innovation where the benefits are often temporary and do wear off, because the benefit is only relative to a moving target, not absolute. That is, it only exists so long as superiority is maintained against competitors adapting and countering with rival innovations, in arms races of offense and defense. Examples include new tactics and strategies in competitive games, market share, military weaponry and defenses, cybersecurity vs hacking, and antimicrobials like antibiotics, vaccines, anti-fungal chemicals, etc. It is also something one sees in the ephemeral benefits of institutional reforms which can provide an improvement for a time, but only before the targeted population inevitably figures out how to game the new system.
Well, it seems Google search was better in its early years. Don’t you think something similar will happen with AI?
Case in point: search engines have always been at war with search engine optimizers.
Exactly. Right now I’m not paying anything for ChatGPT, but eventually we’ll start paying either through direct fees or through ad placement. Perhaps ad placement is already steering—and perhaps diminishing—ChatGPT’s output. For example, let’s say I’m trying to learn how to take better care of my lawn, and the LLM refers me to articles by Scotts Lawncare products, which presumably might not be as helpful as learning from another source.
So here is a timely and practical example of where AI probably won’t work any better than DIY YouTube videos:
The NFL season is here. I want to watch out of market Sunday games without paying $400 for NFL Sunday Ticket. E.g. I live in FL, but want to watch the Seahawks game, instead of the games of the local FL team.
Assumptions: I have an active cable TV subscription that provides access to the apps of the Sunday NFL networks (CBS and Fox). I also have a VPN to spoof the device IP location. However, in addition to IP validation, the apps also require the validation of the device GPS location.
Task: using AI, derive a way to spoof the GPS device location that is superior to those offered on YouTube or provide another workaround to watch out of market games without having to visit the local sports bar.
You use an SDR (likely one with an AD9361 chip), synthesise and modulate the GPS waveform (using the right software), and transmit it near your device. There are many practical problems to overcome to make this happen, including the risk that it will interfere with others.
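For the curious, synthesising the GPS waveform starts with generating each satellite's public C/A spreading code. Below is a minimal sketch of just that step, using the standard G1/G2 shift-register definitions from the GPS interface specification (IS-GPS-200); turning this into a full, transmittable signal (carrier modulation, navigation data, SDR playback) is a much bigger job, and radiating it over the air without authorization is illegal.

```python
# Sketch: generate the 1023-chip GPS C/A (coarse/acquisition) spreading code
# for one satellite, per the G1/G2 LFSR definitions in IS-GPS-200.
# Each PRN selects two G2 "phase selector" taps; PRN 1 uses taps (2, 6).

def ca_code(phase_taps):
    """Return the 1023-chip C/A code for the given G2 phase-selector taps."""
    g1 = [1] * 10  # G1 LFSR, initialized to all ones
    g2 = [1] * 10  # G2 LFSR, initialized to all ones
    t1, t2 = phase_taps
    chips = []
    for _ in range(1023):
        # Output chip: G1 output XOR the two tapped G2 stages (delayed G2)
        chips.append(g1[9] ^ g2[t1 - 1] ^ g2[t2 - 1])
        # Feedback: G1 = 1 + x^3 + x^10; G2 = 1 + x^2 + x^3 + x^6 + x^8 + x^9 + x^10
        f1 = g1[2] ^ g1[9]
        f2 = g2[1] ^ g2[2] ^ g2[5] ^ g2[7] ^ g2[8] ^ g2[9]
        g1 = [f1] + g1[:9]
        g2 = [f2] + g2[:9]
    return chips

prn1 = ca_code((2, 6))  # C/A code for satellite PRN 1
```

A real pipeline would then BPSK-modulate this code (combined with the 50 bps navigation message) at 1.023 Mchips/s onto the L1 carrier and feed the samples to the SDR; open-source tools exist that do this end to end.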
I've been programming since 1968. I don't believe current AI can make me a better programmer. But I do believe it can make Joe Blow (or Arnold Kling!) a programmer, period, just as spreadsheets and Visual Basic did, just as word processors made writers out of a lot of people who had given up in the face of the relative perfection required by typewriters, just as typewriters made more written communication and documentation possible than longhand and ink did. Heck, just as WordPress and Substack have expanded blogging. Now even schmoes like me can start a blog in five minutes! The quality is up for debate, but most jobs don't need professional programmers or writers.
Someone famous said progress doesn't come from making better silk stockings for Marie Antoinette, but from making more nylon stockings for all women (or Joe Namath). That's what AI, spreadsheets, typewriters, and word processors do.
Hi Arnold, thank you for this. I've skimmed your article on The New Commanding Heights, and in it, your criterion for the commanding heights is that they must be the "foremost growth sectors — the ones most central to employment and consumption; the ones that, increasingly, drive our economy". In other words, "the sectors in which employment and consumption are focused, and in which growth is swiftest". The article was written in 2011. According to Bureau of Labor Statistics data on the number of jobs added per sector for the period 2014-2024, "Professional, scientific, and technical services" satisfies this criterion as well. The number of jobs added in this sector (nearly 2.5 million) was behind only private healthcare (4.5 million), and it was higher than private education (only 0.5 million). I wonder if this means the tech industry should now be considered a commanding height as well?
https://www.bls.gov/emp/tables/employment-by-major-industry-sector.htm
I'm with you. I used to be an actuary and carved out a niche in my company by being an expert at VBA for Excel and then gradually expanding out into building Add-ins using C#. I had a lot of fun with it, and it helped me a lot with the client-facing projects I worked on, but I was never an engineer.
That was 10 years ago.
Fast forward a decade and I am a homeschooling father of 4. Inspired by your work, I fired up VS Code, got it connected with ChatGPT and Claude, and over the course of a week I've been able to put together a new webapp that I'm using with my kids to help structure their education. I'm completely blown away by what is possible with these tools, and once again, it is really fun.
Still, at the end of the day, I think your assessment of the impact of AI is spot on. Unless what gets built for education can actually move the needle on stimulating and sustaining motivation through to a level of mastery that gets applied to solving problems in the physical world, it won't matter. And on that front, I think the Null Hypothesis still looms large.
“Unless what gets built for education can actually move the needle on stimulating and sustaining motivation through to a level of mastery that gets applied to solving problems in the physical world, it won't matter. And on that front, I think the Null Hypothesis still looms large.”
My understanding of the Null Hypothesis, as applied to “education”, is that there is no one-size-fits-all “educational intervention” that has been proven to improve measurable outcomes. It is a stringent set of requirements. If misinterpreted it could make us less optimistic about making small improvements in specific situations to benefit specific individuals.
One-size-fits-all tweaks—as demanded by the Null Hypothesis—are less common than more specific and mundane improvements that apply to particular groups. So my feeling is that the Null Hypothesis doesn’t necessarily loom so large. It really applies to dreamers, economists, politicians, and voters….okay…you got me. I suppose the Null Hypothesis is more important than I realized.
I think you are underestimating the importance of Judge's point about there not being more piles of shovel-ware games on Steam. It is true that we don't need more games, but people buy them and people want to make them, so if AI did make programming much more efficient for people who couldn't really do it before, we would expect to see massively more of them. Especially considering that AI helps with the graphics and sound as well. Think of how many AI-generated "Elsa Spiderman Singing Dance Pokemon Party" crap videos there are on YouTube; if programming with AI were getting easy like that, we'd expect to see the equivalent trash games on Steam or the App Store, etc.
I think this is right. For example, I think there genuinely are more novels being self-published, even if most of them are low quality. Which is a clear indicator that baseline productivity has increased in novel writing.
There are lots of people churning out software to make a quick buck. If AI improves productivity, then those people would be churning out even more.
What you’re not taking into account is that it is taking you (someone, I wouldn’t be surprised, in the top 5% of IQ) and allowing you to program like a much better programmer than you were before. Will it let someone with an IQ closer to 100 do the same? Up in the air at this point.
That is a good point, for two reasons that I can see. Programming in general still needs a general ability to see an abstract process and work through its inputs and outputs, and while an AI makes getting to those goals a lot easier, the human still needs the goals. Those goals then are the other issue: you need a human to come up with good goals that are worth pursuing, over the course of a few weeks (as in Arnold's case). If lower IQ correlates highly with short term goals of questionable value, which it rather feels like it does, then we should expect to find that AI doesn't add productivity to all people at a constant rate, and the people who need the most help programming are also the kind of people who won't do it anyway.
After the dot-com bust in 2000, making fun of people shipping dog food was kind of the stand-in for ‘everything got out of hand’. But last week I went over to a friend’s house and he had a box of dog food sitting on his front porch. It took 20 years, but the people making fun of buying dog food online were wrong.
My guess is that we’ll see the same pattern with generative AI. A lot of the assumptions about its benefits will turn out to be correct, but they will take longer to materialize than people expect. And it will also unlock a lot of things that no one would have predicted.
In software development, all I can say is that the number 20-30% gets thrown around a lot, and engineers where I work are extremely eager to start using it. My impression is that much of the impact is in things like testing – so not linearly improving what the normal engineer works on, but rather picking up some of the boring work, or enabling you to do things that weren’t worth doing previously. (I’ve seen this in my own work as a lawyer – I’ll drop something my team is working on that I’m curious about into an LLM just to get a better feel for it, where previously digging in more deeply was only worth it for the larger deals.)
Bill Gates wrote in 1996 “We always overestimate the change that will occur in the next two years and underestimate the change that will occur in the next ten.”
[He wrote this about a little thing called the Internet.]
There is every reason to believe that will be correct here in terms of the impact of AI on productivity.
Now, separately, education as a “business” is a very special case. K-12 education is taxpayer funded and mostly government run, which is a huge difference, and undergrad education is a bundle of life experiences and signaling with some education thrown in, which is another. So I don’t agree with AK’s one-to-one tie between the success of AI and success in education.
Certainly not as education is currently defined by K-12 public schools (and even most publicly funded private schools) and undergrad universities, even in 10 years.
I think what you're saying about healthcare and education is spot on, but I just finished reading the book Medical Nihilism. I believe AI may make medical problems worse to the extent that it reduces socializing among people. As for education, I don't know, but I have a strong prior that there simply is no technological solution to make education better. The internet made similar promises, but I think it's fair to say that the median person is not more knowledgeable or better at critical thinking thanks to the internet.
I think it's hard to quantify the internet's impact on education. As for me, I'm pretty sure that without the education I got from the internet, I wouldn't have been able to leave my home country in search of a better life, nor been motivated to do so.
The Internet makes you better at critical thinking because it gives you data to solve problems that were impractical or impossible before.
AI helps us learn faster than we would otherwise. It also helps us do math and engineering analysis faster. It helps us write better. It plans vacations faster and helps us shop for products online more productively. More people can now develop apps because of AI. These results may not yet be showing up in app stores, but I would expect to see increases.
If, besides education, the world mainly needs health care, doesn't that also suggest that the education the world needs should mainly be about health care?
We need more doctors graduating than lawyers, thus we need more med schools.
On the cost-effective healthcare front, Peterson/KFF reported Thursday that “the evidence continues to support the finding that higher prices – as opposed to higher utilization – explain the United States’ high health spending relative to other high-income countries.” However, their study concedes:
“additional factors like administrative overhead and intensity of care delivered may also impact differences in total health expenditures between countries, the fundamental pattern of the U.S. having higher prices for healthcare services and using less care on average has persisted for many years.”
( https://www.healthsystemtracker.org/chart-collection/how-do-healthcare-prices-and-use-in-the-u-s-compare-to-other-countries/ )
One is hard-pressed not to conclude that “more cost-effective health care” is very unlikely to be achieved through changes in medical practice, but rather must be achieved by addressing the US’s longstanding problem of grossly excessive health care administrative expenses, as has been widely recognized. See:
https://jamanetwork.com/journals/jama/article-abstract/2674671
https://www.nejm.org/doi/full/10.1056/NEJMsa022033
https://jamanetwork.com/journals/jama/fullarticle/2785479
https://www.healthaffairs.org/content/forefront/administrative-spending-contributes-excess-us-health-spending
https://www.mckinsey.com/industries/healthcare/our-insights/administrative-simplification-how-to-save-a-quarter-trillion-dollars-in-US-healthcare
https://www.hamiltonproject.org/assets/files/Cutler_PP_LO.pdf
Fortunately, it appears that relevant AI products are being brought to market around the globe. See for example: https://intellias.com/ai-healthcare-solutions/ and https://www.geneonline.com/acer-medical-unveils-aimed-system-utilizing-generative-ai-for-healthcare-documentation-at-berlin-event/ And it is increasingly recognized that AI has great potential to achieve more cost-effective healthcare administration:
“The incorporation of AI tools developed from these data systems both within organizations and seismically across the health care system can (1) promote transparency via payer/provider data-sharing platforms; (2) automate routine, evidence-based care to reduce ineffective, inefficient, and inconsistent medical decisions; (3) align incentives of key stakeholders by incorporating epidemiologic informatic insights and individual patient-centered value quantification to inform physician-patient decision making; (4) mitigate care delays from prior authorization and claims processing via centralized digital claims clearinghouses; (5) guide payment model evolution to accurately and transparently reflect costs of care for patients with different risk profiles; (6) harmonize quality control reporting for comparability; (7) simplify and standardize prior authorization processes to reduce administrative complexity; and (8) automate nonclinical repetitive work (credentialing, quality assurance, and so on). Adoption of these tools can eliminate $168 billion in annual administrative costs.”
(https://www.sciencedirect.com/science/article/abs/pii/S0749806325002166 )
Of course $168 billion is but a drop in the ocean of wasteful non-medical healthcare spending in the US, yet close monitoring of this trend going forward might unveil the policy obstacles obstructing progress here and in other areas.
Health care costs are not going to be slain by simple AI as you say, but then again, $168 billion here, $168 billion there, and pretty soon you are talking about real money :D
As long as doctors, drug companies, medical device manufacturers, hospitals, nursing homes, medical researchers, and others can earn money by lobbying or scamming the government, medical spending will be more wasteful than if people paid for their own health care. The education industry is similarly afflicted with government dominance.
Here is a ready test for AI programming: Feed the model the source code of an existing, useful software product and tell it to (a) make the code more efficient and (b) add certain capabilities.
Then test the result. Did AI produce more efficient code? Did it add the features? Did it create bugs?
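That evaluation can itself be automated. A toy sketch of such a harness, where `original_sort` stands in for the shipped code and `optimized_sort` stands in for what the AI returned (both functions here are made-up examples, not any real product's code):

```python
import timeit

def original_sort(xs):
    # Naive insertion sort, standing in for the legacy implementation.
    out = []
    for x in xs:
        i = 0
        while i < len(out) and out[i] < x:
            i += 1
        out.insert(i, x)
    return out

def optimized_sort(xs):
    # The AI's proposed replacement.
    return sorted(xs)

def behaves_identically(f, g, cases):
    """Did the AI preserve behavior? Compare outputs on a test corpus."""
    return all(f(list(c)) == g(list(c)) for c in cases)

# "Did it create bugs?" -- check agreement on edge cases and typical inputs.
cases = [[], [1], [3, 1, 2], [5, 5, 5], list(range(50, 0, -1))]
assert behaves_identically(original_sort, optimized_sort, cases)

# "Did AI produce more efficient code?" -- time both on the same input.
data = list(range(500))
t_old = timeit.timeit(lambda: original_sort(data), number=20)
t_new = timeit.timeit(lambda: optimized_sort(data), number=20)
print(f"behavior preserved; speedup roughly {t_old / t_new:.1f}x")
```

The point is only that "did it work?" decomposes into checkable pieces: behavioral equivalence on a test corpus, plus a measured performance comparison.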
A related AI test is this: Take three distinct AI systems and have one suggest ideas, another one implement those ideas and a third evaluate the creation. Run this iteratively. What happens? Does this process yield an amazing product, a meh product or a bad product?
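The suggest/implement/evaluate loop can be sketched in a few lines. The three "agents" below are stub functions standing in for separate LLM calls (the specific toy task, guessing a number whose square is near 2500, is purely illustrative):

```python
import random

random.seed(0)  # make the toy run deterministic

def suggester(history):
    """Agent 1: propose an idea (here, just a random integer 'design')."""
    return random.randint(0, 100)

def implementer(idea):
    """Agent 2: turn the idea into an artifact (here, square it)."""
    return idea * idea

def evaluator(artifact):
    """Agent 3: score the artifact; higher is better (closeness to 2500)."""
    return -abs(artifact - 2500)

def iterate(rounds=10):
    """Run the loop, keeping the best-scoring artifact seen so far."""
    best, best_score = None, float("-inf")
    history = []
    for _ in range(rounds):
        idea = suggester(history)
        artifact = implementer(idea)
        score = evaluator(artifact)
        history.append((idea, score))
        if score > best_score:
            best, best_score = artifact, score
    return best, best_score

best, score = iterate()
print(best, score)
```

Whether real LLMs in these three roles converge on an amazing product, a meh product, or a bad product is exactly the open empirical question; the skeleton above just makes the experiment concrete.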
I fully agree with comments that LLM algorithms are extremely useful for conveying information. I am unconvinced that LLM AI works or can ever work as the visionaries promise. Pattern matching is useful. But progress is made by trial and error and LLM AI is unable to detect error! How can a machine fix what it doesn't know exists?
I agree with you that for the best the gains to using AI right now probably aren’t that great. It’s for the people that are not at that level.
I’m certainly not great at it; I’m at that know-enough-to-be-dangerous level. Using AI has increased what is possible for me to do, made it faster to get things done, and increased the overall quality.
It has also helped my skills, because I love asking it to explain why things are done certain ways. That saves tons of time over digging through old Stack Exchange posts for things similar to what I am seeing, and even when that approach fixed the problem, I didn’t really learn why it was fixed.
Most debates on AI stay at the level of prompts and productivity.
But the real difference is not in prompts — it is in orientation.
AI is never “smart” or “stupid.”
It mirrors the epistemic stance you bring:
• Coherence before knowledge.
• Potentiality before performance.
• Becoming before answers.
This is the epistemic key to AI: using it not as a tool of automation, but as an infrastructure of becoming.
I have developed this further here:
https://substack.com/profile/110168113-leon-tsvasman-epistemic-core/note/c-154706867
My impression is that AI is really good for zero to one projects where you're making a brand new product. But it's significantly less helpful when you're dealing with a pre-existing very large codebase and trying to fix bugs or implement small enhancements.
Great post, especially noting the focus on health & education.
Somebody will make a successful Illustrated Primer* for learning English as a second language. We all know that more private money is spent on learning English than on any other subject. Programming an AI tutor to personalize English instruction could be a near-term killer app. Maybe. Like voice-to-text, it might remain too hard to get the 99.99% accuracy desired/required.
Substack auto-transcripts don’t seem to be getting much better the last year or so.
*A kind of limited Young Lady’s Illustrated Primer.