"But at least since the advent of television, pundits have predicted dramatic improvement in education. So far, it has not happened."
That's because the pundits are all about the supply side. They don't even bother to think about demand. But the terrible, horrible, no good, very bad, truth is that most young people don't have much inherent interest in most of what they are supposed to learn to be considered "educated".
So they don't rush to take advantage of all these new possibilities. It doesn't really matter much if something you're not interested in is presented in a textbook or on a computer screen, or in an answer from an LLM. You are probably not going to make a great effort to engage with it, to think about it, to try to make sense of it. You'll probably do what you've done since grades began to matter. You "memorize and forget": pack enough into short-term memory to get an acceptable mark on a test and then allow the knowledge to "decay".
Of course, some students do care about some of the curriculum. That includes most of the people who comment here. LLMs might have made a difference to them.
That's a fair point. We tend to think of education as something performed on kids, like a pedagogical haircut, ignoring that real motivation is the limiting factor. People learn lots of things by watching YouTube videos, the question is what they are learning.
Which is ironic in a very sad way, because many a classroom sports the poster, "Education is not the filling of pail but the lighting of a fire." Teachers try hard to light a fire. But nobody wants to admit how wet most of the wood is.
It’s funny because anyone who’s the product of a big metro school district - knows exactly that.
They used to say public school was a good proving ground because you would learn who your fellow citizens were, and how to get along with them.
Even now I can’t exactly dismiss this view. In fact, if more of our “thought leaders” had tried it - they might have better judgment about policy.
However, the thing that gets lost - the humanity of the situation - is just that: never mind the fire - everyone deserves to have their little pail filled up. People deserve a shared civic sense, a secure place to imbibe the rules of conduct before life hits hard; those too readily dismissed “3 R’s”, a nice atmosphere for their childhoods to unfold, and to recall; filled with stories given that most people will not read anything of the quality of the great children’s stories, later on; and even a glimpse of what’s on offer that they will not reach.
Boils down to motivation. I'm not meaning motivation to try to get smarter necessarily, but motivation to simply learn, a curiosity about the world, if you will. Kid doesn't feel like learning, he/she won't.
Many young people are intensely interested in learning. They want to know about music, sports, video, games, pop culture of all kinds. They want to know about how they can succeed in the world, what skills they need, what jobs will be open to them. They want to know about where they stand with adults and with their peers, including potential friends and potential romantic partners, why and how to make things better in their everyday life.
I appreciate what you’re saying and am confident it is true, many kids do want to learn. The population of kids who don’t is sizable, too, unfortunately. That group won’t benefit from any curriculum, methodology or technology and, as great as AI might be for enabling learning, it won’t help them. I applaud the teachers (some are relatives) who are given the Sisyphean task of helping all kids learn.
I’ve found that LLMs are fantastic for rounding out and extending ideas that you already have. I’m using the most advanced o3 model to start my new company. I have an idea I want to turn into a company, o3 then helps me get a lot of the thoroughness work and extensions of the core idea done. It researches aspects of the idea I haven’t thought of with deep research (thoroughness) and it turns my idea into one pagers or marketing materials or it developed an assessment based on the idea, etc. That’s saving me hundreds of hours and a lot of cost. It’s a force multiplier but when I ask it for truly creative ideas of the type I’m founding my new company on, it is underwhelming.
I find a similar result trying to use (admittedly less advanced) AI for work purposes. AI is really good at the shape or form of documents, but crap at content. So if I tell it to write a project plan for X type of project with the following list of points and include these people in this list of roles, it cranks out a pretty good multi page project plan document and I just need to tweak around the edges. Likewise "write me a nice email telling this person they are bad and wrong and should kill themselves based on the following points, like you were from HR" comes up with some really good ones that would take me a few minutes to really say nicely.
On the other hand, asking for specific information while providing the shape or form does not work, basically just is asking for hallucinations.
It does seem to me that will have implications for how well AI can serve as a teacher as well. A good teacher needs to get the delivery method (form) correct, but the content also has to be accurate. I fear AI will continue to struggle with the latter while delivering things that feel a lot like a good educational experience but is just conveying a lot of drivel.
I think this gets to a major misunderstanding of the value add of a college (if it exists). The value added is not the teaching of the material itself. The value comes because 1) students don’t know what they don’t know, and 2) most people need a commitment device to actually learn something, especially if it’s not immediately obvious where the value is in learning the thing.
So I don’t think a student can exclusively use AI to acquire the equivalent of a college degree because they don’t really know what to ask the AI. They need someone with more experience to help them organize a course of study.
Also, peer pressure does wonders for motivation. It’s just more fun to learn with other people than by yourself.
Like Arnold, I’m somewhat skeptical of vast productivity gains in education based on past experience. But I sincerely hope I’m wrong. Such gains would be truly wonderful.
The question is why we even care about the utility of AI for education.
If an AI can educate a human, it can substitute for the educated human.
After all, isn't the idea here for it to be a superior substitute for a human teacher?
Human education like biological human maturation in general tends to be very slow and costly and mostly you are making a person able to know and do what a lot of others are able to do. The marginal cost of making another stays high. It was only ever worth it as an investment because there was no superior way to expand a particularly-skilled labor supply. You can't copy-paste a human mind yet, so the learning has to happen from scratch with each new human student, over and over.
But with software as adequate substitute, you can make infinite copies immediately at negligible cost. The learning may be expensive, but it only has to happen one time, then can be scaled indefinitely, only limited by market demand.
So, when every car needed a human driver, then every new driver had to go through the learning process of being a student driver. But the fully- developed Waymo driver can just be copied onto millions or billions of cars. While it seems plausible and might even be true, a conversation about how the waymo driver algorithm would be a better way to teach humans to drive and we had all better mentally shift to the robo-driving-instructor reality as soon as possible is still absurd because it misses the larger point. The waymo driver algorithm isn't just a good teacher of human student drivers, it represents the eventual obsolescence and extinction of human student drivers.
It depends on how substitutable vs complementary humans and AI turn out to be. My guess is that they will be more complementary than you suggest. But we shall see.
I don't think that at all - it very much goes against the trend of history and the economic incentives (the biggest profit opportunities are going to be replacement of expensive human labor with a solution that can be very cheaply scaled) and it's not even conceptually obvious how to measure it. Consider the tractor. Before the tractor, it took ten men to work a field of given area. Afterwards, one man, as driver of the tractor. Now, is the tractor a substitute for 9 men, or a complementary augmentation to one man (who is now exercising a different skill), or some combination? You can try to compare the coefficients and exponents in the cost
-optimal capital-labor production functions while solving for macro-equilibrium and ... it will turn out that framing the question as if it's a choice between being more like one term than the other isn't actually helpful in producing any insight. Just like with the tractor, to the extent AI is a substitute, there's no point in using it to teach humans anything. To the extent it's a compliment, it will allow for levels of production that can satisfy an entire global market cheaply with only a tiny amount of either very cheap or very elite labor.
They will be especially complementary to the smart folk able to use many— more the very small team/one guy using multi-ai to do what 50 or 100 do now. Especially replacing all the workers doing their work digitally.
On the creative side, we’ll see huge amounts of ai and ai-assisted writing. With humans like Arnold to link to the best/ most thoughtful or thought provoking.
Coincidence? The dang google AI can't help me remember that story where robots come and take over the dude's appliance store, and eventually take away the little girl's violin, and cooking duty from the wife (who's happy about this, at first) and meanwhile a mysterious old guy moves in the garage apartment and is working on a project which turns out to be, shutting down the power source for these robots, which he'd invented.
Tried with ChatGPT 4.0 and got "...could you provide more details? For instance, do you recall if the story was part of a larger collection, the approximate time period when you read it, or any specific character names? Any additional information would help in accurately identifying the story you're referring to."
Re higher ed, there's a few questions to separate here. What do students want from a professor? How is college as a whole supposed to help them build skills? What are they actually getting, and can an AI provide that?
What students want from a professor is pretty obvious from the course evaluation comments they write. They want Robin Williams from Dead Poets Society. An AI isn't going to provide that, because so much of it is social in a sense that requires an embodied teacher.
How are they supposed to build skills? Part of how the system is supposed to work is by creating incentives for them to practice and consequences if they don't. (This is why people worry about cheating.) An AI can't provide that aspect of it, although it can do a lot of other stuff.
What do they actually get? My priors are very fuzzy on this at this point. What they get that's rigorously measurable is not very much. Bryan Caplan yada yada. But the people who work on measuring this stuff are usually not very good researchers, and/or biased (I adamantly include Caplan here), and it seems like something that might be tough to rigorously measure. Common sense and anecdotal experience suggest that they get a lot of intellectual growth. I am not sure what to think.
My favorite analogy to the situation with LLMs goes like this: Imagine a whaling vessel where the captain and crew know everything there is to know about the ship itself. But they've never been around Cape Horn, never sailed in the Pacific Ocean, and don't know much of anything about whales, though they do know that they're mammals, not fish. The captain's looking for financing for a whaling expedition. How much would you be willing to invest in such a voyage?
Late secondary and early postsecondary (in the US, say, High School through Undergrad) has long been understood by the intellectual class to be mostly babysitting. There's nothing that a smart, motivated student couldn't learn twice as thoroughly in half the time with self-directed study (or at least with a bit of professional guidance individually for 1/10 of the study time and in small groups for 1/5).
The problem is that almost no high school students, and few undergrads, are actually able to carve out the time, attention, discipline, and topic choice to actually do so, without a lot of supervision.
Current AI does not change this, except by making the difference between the top few percent (maybe up to 5% of all late-childhood and early-adulthood humans, maybe) and the median much more stark. The effort and understanding needed to use the AI to learn things is not common, and the discipline needed to do it without grades as a motivator even less so.
tl;dr - mass schooling is mostly babysitting and signaling, with a fair bit of social assortation thrown in. AI doesn't help with any of these things.
I was in college only 3 years, and my empty-headedness, combined with the competition inherent in running about among tables where you tried to sign up for popular courses, resulted in a very scattershot curriculum. (This was true liberal arts days, nothing like now.) While I was there they changed the class-selecting to a phone-in system, but I don’t recall that it made it any simpler.
I don’t doubt that really good students had an “in” or at least more executive functioning to chart out their schoolwork.
So one semester found me, who could not have ordered the major events of world history, and with only a cursory knowledge of our own - enrolled in “The Cuban Revolution”. I recall even my boyfriend raising an eyebrow at this use of time.
Of course the professor was de facto sympathetic to said revolution, but not in any very passionate way, so it was not even a good schooling in Marxism.
All I vaguely remember is Fidel and co. sleeping somewhere different in the mountains every night. I’m not sure guerrilla warfare makes for an especially interesting narrative.
We learned about the wonderful, uh, neonatal outcomes ushered in by the aforementioned. Meliorism managed!
I was asked to write a paper about US involvement in a coup in Brazil. That was kind of fun because I got to handle state department cables and such out of a box in the presidential library which true to form I admired as groovy objects, with zero context, little interest in, or understanding of their contents. (Truly couldn’t have cared less.)
It occurs to me it would be interesting to ask the AI what happened in Brazil. So I would “know” after all this time lol. (I’m truly not sure if it would be possible to make me care, but I’ll leave that as “open”.)
Like probably all here, I turned out to love history and have now read a couple hundred books often on internet-derived recommendation - so I’m able to see what a waste my education was, more clearly. And thus not unsympathetic to the idea of alternate learning, although I think in my case timing was more important.
The stumbling block for me is the AI-generated prose. Like reading a brochure, or a book report from the marketing team. Just can’t hack it. Can anyone?
Can we at least stipulate amid the hoopla, that if you don’t know what you don’t know, and have no conception of why you should know anything - AI will serve you no better than did my old slacker of a Marxist professor?
Note: the only class I learned anything in during my short foray into higher education, was botany of the regional flora, which I loved (objects!) and in which I got a - no joke - hard-earned C. And which was conducted entirely outdoors.
The first place we’ll see real educational improvement, if we see it, is in English language learning for non-native speakers. We’ve been seeing lots of ai/algo learning attempts, but not yet much that is reproducible. There is no subject in the world with more paying customers, wanting to improve enough to pay for instruction.
But learning another language, like most learning, is hard. Unlike math, language learning doesn’t have to be so abstract, but for those of us without the Language Learning Talent, it takes a lot of repetition & practice. Yet mistakes are fairly easy for an ai to find.
Until I hear of some significantly successful ai-based ESL (2nd language) edu course, I’m not going to be at all excited. Tyler’s ideas about using ai to learn about ai, and other things, seem pretty good.
Since colleges long ago reduced job-useful edu in non-STEM subjects, the social & networking aspects, including partying (especially?), dominate any edu reform for more knowledge.
“The right problem, according to Robbins, is to figure out what the human professor can teach that the student could not learn for himself or herself using AI.”
Or what purpose the professor has besides “professing.” It seems to me that generative AI, not unlike asynchronous online education, is excellent for self-motivated learners, but may not in itself have the structure needed for most learners.
That said, it seems to me that well-used AI and async course materials complement some sort of human labor, but it is far from obvious that that human labor (a) needs to be a professor in the PhD-possessing sense or (b) warrants compensation similar to what most teaching professors currently earn (present company included, sadly).
Meanwhile, I am just counting down the months until Sentient AI.
As far as LLMs, I wish someone would form a research center using them to reconstruct and decode ancient or lost languages. Maybe by analyzing linguistic patterns from known texts and old films.
"I think that LeCun would say that the multimodal capabilities that have been added to chatbots are mere “hacks.” They will not generalize to enable the models to solve real-world problems."
When generating images with Grok or ChatGPT, it is painfully obvious that the model you're talking to is not the one that generates the images.
On the other hand, when I upload an image to Claude, it is very sensitive and precise in its description of what it "sees." The visual modality seems more integral with Claude than it does with Grok or ChatGPT, but Claude does not generate images. It only reads and describes them.
First, in his question to Strain that elicited the quote, Pethokoukis references a National Academies of Science report which I believe to be this one: https://nap.nationalacademies.org/resource/27644/interactive/#resources , for which Strain served on the Committee on Automation and the US Workforce. Chapter 5 on education is only 10 pages long (https://nap.nationalacademies.org/download/27644 ); it is long on assertions of potential and on assumptions about the continued relevance of the education credentialing industry as gatekeeper to employment, but short on actual performance data. It may be worth a look just to get a sense of the weakness of the foundation justifying the headlong charge into universal mass adoption of AI in the classroom.
To be sure, if people in the US are not desperate for education reform, they ought to be. As documented by the USA’s below average and declining measures of adult competencies in the Program for International Assessment of Adult Competencies (PIAAC) (https://nces.ed.gov/surveys/piaac/2023/national_results.asp ) that parallel an overall rise in the quantities of units of education certification outputs dispensed, the US education system must be considered a dangerous failure.
Per the PIAAC link: “Between 2017 and 2023, there were increases in the percentages of adults performing at the lowest proficiency level (Level 1 or below) in both literacy and numeracy: in literacy this percentage increased from 19 to 28 percent and in numeracy from 29 to 34 percent. The percentage of U.S. adults performing at the lowest level in adaptive problem solving in 2023 was 32 percent.”

This while “Between 2017 and 2023, the percentage of U.S. adults with less than a high school education increased from 12 to 13 percent, and the percentage with more than a high school education increased from 41 to 43 percent. The percentage of U.S. adults with a high school education decreased over the same time period (from 47 to 44 percent).”

And among adults with more than a high school education, the share scoring at proficiency level 1 or below went from 6 to 13 percent in literacy and from 12 to 16 percent in numeracy (figure 5b), while overall: “Between 2017 and 2023, the average scores for U.S. adults in literacy decreased across all levels of educational attainment: from 238 to 213 for adults with less than a high school education; from 259 to 246 for adults with a high school education; and from 294 to 287 for adults with more than a high school education. In numeracy, average scores decreased between 2017 and 2023 for adults with less than a high school education (from 213 to 199) and for adults with a high school education (from 242 to 232).”
At any rate, for parents and grandparents, the education of our descendants is far too important a matter to leave to public policy. And whatever else the null hypothesis might imply, it has nothing to say about whether an individual parent can or cannot personally improve the efficacy of their parenting as it relates to their children’s learning. One suspects that in the long run, the populist revolt from below, in parents taking responsibility for their own children’s learning outcomes, will have far more to do with their children’s success in life than whether or not the child is sat down in some school somewhere with an AI (https://kpmg.com/ca/en/home/media/press-releases/2024/10/students-using-gen-ai-say-they-are-not-learning-as-much.html ). What one might find helpful in promoting a positive outlook, and hope-inducing, is the fervor with which so many have taken up the challenge of achieving real progress. Just a tiny few of the many, many recent inspiring substacks: https://romatermini.substack.com/p/the-subversive-art-of-poetry (The Subversive Art of Memorizing Poetry), https://emilyedlynn.substack.com/p/autonomy-anything-goes-916 (Autonomy: Does Anything Go?), https://substack.com/home/post/p-157472908 (3 Ways to Include Kids in Gardening).
"But at least since the advent of television, pundits have predicted dramatic improvement in education. So far, it has not happened."
That's because the pundits are all about the supply side. They don't even bother to think about demand. But the terrible, horrible, no good, very bad truth is that most young people don't have much inherent interest in most of what they are supposed to learn to be considered "educated".
So they don't rush to take advantage of all these new possibilities. It doesn't really matter much if something you're not interested in is presented in a textbook or on a computer screen, or in an answer from an LLM. You are probably not going to make a great effort to engage with it, to think about it, to try to make sense of it. You'll probably do what you've done since grades began to matter. You "memorize and forget": pack enough into short-term memory to get an acceptable mark on a test and then allow the knowledge to "decay".
Of course, some students do care about some of the curriculum. That includes most of the people who comment here. LLMs might have made a difference to them.
That's a fair point. We tend to think of education as something performed on kids, like a pedagogical haircut, ignoring that real motivation is the limiting factor. People learn lots of things by watching YouTube videos; the question is what they are learning.
I like that line, "We tend to think of education as something performed on kids". Very true. Very profound.
That way of thinking misleads us very badly.
"That way of thinking misleads us very badly."
Which is ironic in a very sad way, because many a classroom sports the poster, "Education is not the filling of a pail but the lighting of a fire." Teachers try hard to light a fire. But nobody wants to admit how wet most of the wood is.
It’s funny because anyone who’s the product of a big metro school district knows exactly that.
They used to say public school was a good proving ground because you would learn who your fellow citizens were, and how to get along with them.
Even now I can’t exactly dismiss this view. In fact, if more of our “thought leaders” had tried it - they might have better judgment about policy.
However, the thing that gets lost - the humanity of the situation - is just that: never mind the fire - everyone deserves to have their little pail filled up. People deserve a shared civic sense, a secure place to imbibe the rules of conduct before life hits hard; those too readily dismissed “3 R’s”, a nice atmosphere for their childhoods to unfold, and to recall; filled with stories given that most people will not read anything of the quality of the great children’s stories, later on; and even a glimpse of what’s on offer that they will not reach.
Boils down to motivation. I don't mean motivation to get smarter, necessarily, but motivation to simply learn, a curiosity about the world, if you will. If a kid doesn't feel like learning, he or she won't.
Many young people are intensely interested in learning. They want to know about music, sports, video games, and pop culture of all kinds. They want to know how they can succeed in the world, what skills they need, and what jobs will be open to them. They want to know where they stand with adults and with their peers, including potential friends and potential romantic partners, and why and how to make things better in their everyday lives.
Most of this is banished from the curriculum.
Is it “banished” because the young can do better learning these topics on their own?
The above is banished because it is not considered "education".
I appreciate what you’re saying and am confident it is true, many kids do want to learn. The population of kids who don’t is sizable, too, unfortunately. That group won’t benefit from any curriculum, methodology or technology and, as great as AI might be for enabling learning, it won’t help them. I applaud the teachers (some are relatives) who are given the Sisyphean task of helping all kids learn.
I’ve found that LLMs are fantastic for rounding out and extending ideas that you already have. I’m using the most advanced o3 model to start my new company. I have an idea I want to turn into a company; o3 then helps me get a lot of the thoroughness work and extensions of the core idea done. It researches aspects of the idea I haven’t thought of with deep research (thoroughness), and it turns my idea into one-pagers or marketing materials, or develops an assessment based on the idea, etc. That’s saving me hundreds of hours and a lot of cost. It’s a force multiplier, but when I ask it for truly creative ideas of the type I’m founding my new company on, it is underwhelming.
I find a similar result trying to use (admittedly less advanced) AI for work purposes. AI is really good at the shape or form of documents, but crap at content. So if I tell it to write a project plan for X type of project with the following list of points, and to include these people in this list of roles, it cranks out a pretty good multi-page project plan document and I just need to tweak around the edges. Likewise, "write me a nice email telling this person they are bad and wrong and should kill themselves based on the following points, like you were from HR" comes up with some really good ones that would have taken me more than a few minutes to phrase nicely myself.
On the other hand, asking for specific information while providing the shape or form does not work; it is basically just asking for hallucinations.
It does seem to me that this will have implications for how well AI can serve as a teacher as well. A good teacher needs to get the delivery method (form) right, but the content also has to be accurate. I fear AI will continue to struggle with the latter while delivering things that feel a lot like a good educational experience but are just conveying a lot of drivel.
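The division of labor described above, supplying every fact yourself and letting the model handle only the form, can be sketched as a prompt template. This is a minimal illustration; the function name, fields, and wording are all invented for the example:

```python
# A minimal sketch of the pattern above: supply every fact yourself and let the
# model handle only the form. All names and fields here are illustrative.

def build_plan_prompt(project_type, points, roles):
    """Assemble a prompt that pins down the content, leaving only form to the model."""
    facts = "\n".join(f"- {p}" for p in points)
    staffing = "\n".join(f"- {role}: {name}" for role, name in roles.items())
    return (
        f"Write a project plan for a {project_type} project.\n"
        f"Use ONLY these points as content:\n{facts}\n"
        f"Assign these people to these roles:\n{staffing}\n"
        "Do not invent additional facts, dates, or names."
    )

prompt = build_plan_prompt(
    "data migration",
    ["cut over by Q3", "zero downtime for the billing system"],
    {"Project lead": "Dana", "DBA": "Sam"},
)
print(prompt)
```

The explicit "use ONLY these points" constraint is the whole trick: it plays to the model's strength (form) while walling off its weakness (inventing content).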
I think this gets to a major misunderstanding of the value add of a college (if it exists). The value added is not the teaching of the material itself. The value comes because 1) students don’t know what they don’t know, and 2) most people need a commitment device to actually learn something, especially if it’s not immediately obvious where the value is in learning the thing.
So I don’t think a student can exclusively use AI to acquire the equivalent of a college degree because they don’t really know what to ask the AI. They need someone with more experience to help them organize a course of study.
Also, peer pressure does wonders for motivation. It’s just more fun to learn with other people than by yourself.
Like Arnold, I’m somewhat skeptical of vast productivity gains in education based on past experience. But I sincerely hope I’m wrong. Such gains would be truly wonderful.
The question is why we even care about the utility of AI for education.
If an AI can educate a human, it can substitute for the educated human.
After all, isn't the idea here for it to be a superior substitute for a human teacher?
Human education like biological human maturation in general tends to be very slow and costly and mostly you are making a person able to know and do what a lot of others are able to do. The marginal cost of making another stays high. It was only ever worth it as an investment because there was no superior way to expand a particularly-skilled labor supply. You can't copy-paste a human mind yet, so the learning has to happen from scratch with each new human student, over and over.
But with software as adequate substitute, you can make infinite copies immediately at negligible cost. The learning may be expensive, but it only has to happen one time, then can be scaled indefinitely, only limited by market demand.
So, when every car needed a human driver, every new driver had to go through the learning process of being a student driver. But the fully-developed Waymo driver can just be copied onto millions or billions of cars. A conversation about how the Waymo driver algorithm would be a better way to teach humans to drive, and how we had all better mentally shift to the robo-driving-instructor reality as soon as possible, might seem plausible and might even be true, but it is still absurd because it misses the larger point. The Waymo driver algorithm isn't just a good teacher of human student drivers; it represents the eventual obsolescence and extinction of human student drivers.
Sadly, sort of, true. Robot Morlocks with human Eloi? Or materialistic lives as empty as those of the spoiled rich youth of many stories.
What is the meaning of life?
I think the hot market for human jobs is going to be wiping asses, for those too young to do it themselves, and for those too old to do it themselves.
Actually, that's just a metaphor, or maybe a synecdoche.
It depends on how substitutable vs complementary humans and AI turn out to be. My guess is that they will be more complementary than you suggest. But we shall see.
I don't think that at all. It goes very much against the trend of history and the economic incentives (the biggest profit opportunities lie in replacing expensive human labor with a solution that can be scaled very cheaply), and it's not even conceptually obvious how to measure it. Consider the tractor. Before the tractor, it took ten men to work a field of a given area. Afterwards, one man, as driver of the tractor. Now, is the tractor a substitute for nine men, or a complementary augmentation to one man (who is now exercising a different skill), or some combination? You can try to compare the coefficients and exponents in the cost-optimal capital-labor production functions while solving for macro-equilibrium, and it will turn out that framing the question as a choice between being more like one term than the other isn't actually helpful in producing any insight. Just as with the tractor, to the extent AI is a substitute, there's no point in using it to teach humans anything. To the extent it's a complement, it will allow for levels of production that can satisfy an entire global market cheaply with only a tiny amount of either very cheap or very elite labor.
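For readers who want the textbook framing this comment alludes to, the substitute-vs-complement question is usually parameterized by the elasticity of substitution in a CES production function. This is a standard sketch of that formulation, not anything derived from the commenter's own analysis:

```latex
% CES production: output Y from AI capital K and human labor L,
% with share parameter \alpha and elasticity of substitution \sigma
Y = A\left(\alpha K^{\frac{\sigma-1}{\sigma}}
    + (1-\alpha)\,L^{\frac{\sigma-1}{\sigma}}\right)^{\frac{\sigma}{\sigma-1}}
```

Here $\sigma > 1$ makes $K$ and $L$ gross substitutes (cheap AI crowds labor out), $\sigma < 1$ makes them complements (cheap AI raises the value of the remaining labor), and $\sigma \to 1$ recovers Cobb-Douglas. The comment's point is that a single estimated $\sigma$ rarely settles the question: the tractor looks like either case depending on which worker you track.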
They will be especially complementary to the smart folk able to use many AIs at once: the very small team, or one person, using multiple AIs to do what 50 or 100 people do now, especially replacing the workers who do their work digitally.
On the creative side, we'll see huge amounts of AI and AI-assisted writing, with humans like Arnold to link to the best, most thoughtful or thought-provoking of it.
Coincidence? The dang Google AI can't help me remember that story where robots come and take over the dude's appliance store, and eventually take away the little girl's violin and the cooking duty from the wife (who's happy about this, at first), while a mysterious old guy moves into the garage apartment and works on a project which turns out to be shutting down the power source for these robots, which he'd invented.
Anyone remember the title of this story?
Tried with ChatGPT 4.0 and got "...could you provide more details? For instance, do you recall if the story was part of a larger collection, the approximate time period when you read it, or any specific character names? Any additional information would help in accurately identifying the story you're referring to."
Finally summoned it with the right incantation to Google: the story is from 1947 - “With Folded Hands”.
Well, the old guy was not successful. The machines got wind of his project.
It was a sci fi story so characterization was minimal. I believe it’s a famous story though, or I wouldn’t have read it, not wading into sci-fi much.
Wow. Never looked at it that way before.
Re higher ed, there are a few questions to separate here. What do students want from a professor? How is college as a whole supposed to help them build skills? What are they actually getting, and can an AI provide that?
What students want from a professor is pretty obvious from the course evaluation comments they write. They want Robin Williams from Dead Poets Society. An AI isn't going to provide that, because so much of it is social in a sense that requires an embodied teacher.
How are they supposed to build skills? Part of how the system is supposed to work is by creating incentives for them to practice and consequences if they don't. (This is why people worry about cheating.) An AI can't provide that aspect of it, although it can do a lot of other stuff.
What do they actually get? My priors are very fuzzy on this at this point. What they get that's rigorously measurable is not very much. Bryan Caplan yada yada. But the people who work on measuring this stuff are usually not very good researchers, and/or biased (I adamantly include Caplan here), and it seems like something that might be tough to rigorously measure. Common sense and anecdotal experience suggest that they get a lot of intellectual growth. I am not sure what to think.
My favorite analogy to the situation with LLMs goes like this: Imagine a whaling vessel where the captain and crew know everything there is to know about the ship itself. But they've never been around Cape Horn, never sailed the Pacific Ocean, and don't know much of anything about whales, though they do know that they're mammals, not fish. The captain's looking for financing for a whaling expedition. How much would you be willing to invest in such a voyage?
Late secondary and early postsecondary (in the US, say, High School through Undergrad) has long been understood by the intellectual class to be mostly babysitting. There's nothing that a smart, motivated student couldn't learn twice as thoroughly in half the time with self-directed study (or at least with a bit of professional guidance individually for 1/10 of the study time and in small groups for 1/5).
The problem is that almost no high school students, and few undergrads, are actually able to carve out the time, attention, discipline, and topic choice to actually do so, without a lot of supervision.
Current AI does not change this, except by making the difference between the top few percent (maybe up to 5% of all late-childhood and early-adulthood humans; maybe) and the median much more stark. The effort and understanding needed to use AI to learn things are not common, and the discipline needed to do it without grades as a motivator is even less so.
tl;dr - mass schooling is mostly babysitting and signaling, with a fair bit of social assortation thrown in. AI doesn't help with any of these things.
We see AI everywhere but in the productivity statistics.
I was in college only 3 years, and my emptyheadedness combined with the competition inherent in running about among tables where you tried to sign up for popular courses, resulted in a very scattershot curriculum. (This was true liberal arts days, nothing like now.) While I was there they changed the class-selecting to a phone-in system but I don’t recall that it made it any simpler.
I don’t doubt that really good students had an “in” or at least more executive functioning to chart out their schoolwork.
So one semester found me, who could not have ordered the major events of world history, and with only a cursory knowledge of our own - enrolled in “The Cuban Revolution”. I recall even my boyfriend raising an eyebrow at this use of time.
Of course the professor was de facto sympathetic to said revolution, though not in any very passionate way, so it was not even a good schooling in Marxism.
All I vaguely remember is Fidel and co. sleeping somewhere different in the mountains every night. I’m not sure guerrilla warfare makes for an especially interesting narrative.
We learned about the wonderful, uh, neonatal outcomes ushered in by the aforementioned. Meliorism managed!
I was asked to write a paper about US involvement in a coup in Brazil. That was kind of fun because I got to handle state department cables and such out of a box in the presidential library which true to form I admired as groovy objects, with zero context, little interest in, or understanding of their contents. (Truly couldn’t have cared less.)
It occurs to me it would be interesting to ask the AI what happened in Brazil. So I would “know” after all this time lol. (I’m truly not sure if it would be possible to make me care, but I’ll leave that as “open”.)
Like probably all here, I turned out to love history and have now read a couple hundred books often on internet-derived recommendation - so I’m able to see what a waste my education was, more clearly. And thus not unsympathetic to the idea of alternate learning, although I think in my case timing was more important.
The stumbling block for me is the AI-generated prose. Like reading a brochure, or a book report from the marketing team. Just can’t hack it. Can anyone?
Can we at least stipulate amid the hoopla, that if you don’t know what you don’t know, and have no conception of why you should know anything - AI will serve you no better than did my old slacker of a Marxist professor?
Note: the only class I learned anything in during my short foray into higher education, was botany of the regional flora, which I loved (objects!) and in which I got a - no joke - hard-earned C. And which was conducted entirely outdoors.
The first place we’ll see real educational improvement, if we see it, is in English-language learning for non-native speakers. We’ve been seeing lots of AI/algorithmic learning attempts, but not yet much reproducible success. There is no subject in the world with more paying customers wanting to improve enough to pay for instruction.
But learning another language, like most learning, is hard. Unlike math, language learning doesn’t have to be so abstract, but for those of us without the language-learning talent, it takes a lot of repetition and practice. Yet mistakes are fairly easy for an AI to find.
Until I hear of some significantly successful AI-based ESL (second language) course, I’m not going to be at all excited. Tyler’s ideas about using AI to learn about AI, and other things, seem pretty good.
Since colleges long ago reduced job-useful education in non-STEM subjects, the social and networking aspects, including partying (especially?), dominate any education reform aimed at more knowledge.
This week I became an AI optimist thanks to Arnold Kling and Tyler Cowen and others.
“The right problem, according to Robbins, is to figure out what the human professor can teach that the student could not learn for himself or herself using AI.”
Or what purpose the professor has besides “professing.” It seems to me that generative AI, not unlike asynchronous online education, is excellent for self-motivated learners, but may not in itself have the structure needed for most learners.
That said, it seems to me that well-used AI and async course materials complement some sort of human labor, but it is far from obvious that that human labor (a) needs to be a professor in the PhD-possessing sense or (b) warrants compensation similar to what most teaching professors currently earn (present company included, sadly).
Meanwhile, I am just counting down the months until Sentient AI.
As far as LLMs, I wish someone would form a research center using them to reconstruct and decode ancient or lost languages. Maybe by analyzing linguistic patterns from known texts and old films.
"I think that LeCun would say that the multimodal capabilities that have been added to chatbots are mere “hacks.” They will not generalize to enable the models to solve real-world problems."
When generating images with Grok or ChatGPT, it is painfully obvious that the model you're talking to is not the one that generates the images.
On the other hand, when I upload an image to Claude, it is very sensitive and precise in its description of what it "sees." The visual modality seems more integral with Claude than it does with Grok or ChatGPT, but Claude does not generate images. It only reads and describes them.
Some thoughts on the Strain quote.
First, in his question to Strain that elicited the quote, Pethokoukis references a National Academies of Sciences report, which I believe to be this one: https://nap.nationalacademies.org/resource/27644/interactive/#resources on which Strain served on the Committee on Automation and the US Workforce. Chapter 5 on education is but 10 pages long (https://nap.nationalacademies.org/download/27644 ), long on assertions of potential and assumptions about the continued relevance of the education credentialing industry as gatekeeper to employment, but short on actual performance data. It may be worth a look just to get a sense of the weakness of the foundation justifying the headlong charge into universal mass adoption of AI in the classroom.
To be sure, if people in the US are not desperate for education reform, they ought to be. As documented by the USA’s below average and declining measures of adult competencies in the Program for International Assessment of Adult Competencies (PIAAC) (https://nces.ed.gov/surveys/piaac/2023/national_results.asp ) that parallel an overall rise in the quantities of units of education certification outputs dispensed, the US education system must be considered a dangerous failure.
Per the PIAAC link: “Between 2017 and 2023, there were increases in the percentages of adults performing at the lowest proficiency level (Level 1 or below) in both literacy and numeracy: in literacy this percentage increased from 19 to 28 percent and in numeracy from 29 to 34 percent. The percentage of U.S. adults performing at the lowest level in adaptive problem solving in 2023 was 32 percent.” This while “Between 2017 and 2023, the percentage of U.S. adults with less than a high school education increased from 12 to 13 percent, and the percentage with more than a high school education increased from 41 to 43 percent. The percentage of U.S. adults with a high school education decreased over the same time period (from 47 to 44 percent).” And the share of adults with more than a high school education who scored at proficiency level 1 or below went from 6% to 13% in literacy and from 12% to 16% in numeracy (figure 5b), while overall: “Between 2017 and 2023, the average scores for U.S. adults in literacy decreased across all levels of educational attainment: from 238 to 213 for adults with less than a high school education; from 259 to 246 for adults with a high school education; and from 294 to 287 for adults with more than a high school education. In numeracy, average scores decreased between 2017 and 2023 for adults with less than a high school education (from 213 to 199) and for adults with a high school education (from 242 to 232).”
Interestingly, the countries that strongly outperform the US such as Japan and Finland (https://nces.ed.gov/fastfacts/display.asp?id=683 ) seem to be at least as much interested in promoting learning through human interaction as through machine interfaces. See, for example: https://en.wikipedia.org/wiki/Instruments_of_a_Beating_Heart and https://www.eib.org/en/essays/finland-education-school-design .
At any rate, for parents and grandparents, the education of our descendants is far too important a matter to leave to public policy. And whatever else the null hypothesis might imply, it has nothing to say about whether an individual parent can or cannot personally improve the efficacy of their parenting as it relates to their children’s learning. One suspects that in the long run, the populist revolt from below, in parents taking responsibility for their own children’s learning outcomes, will have far more to do with their children’s success in life than whether or not the child is sat down in some school somewhere with an AI (https://kpmg.com/ca/en/home/media/press-releases/2024/10/students-using-gen-ai-say-they-are-not-learning-as-much.html ). What one might find helpful in promoting a positive, hope-inducing outlook is the fervor with which so many have taken up the challenge of achieving real progress. Just a few examples of the many, many recent inspiring substacks: https://romatermini.substack.com/p/the-subversive-art-of-poetry (The Subversive Art of Memorizing Poetry), https://emilyedlynn.substack.com/p/autonomy-anything-goes-916 (Autonomy: Does Anything Go?), https://substack.com/home/post/p-157472908 (3 Ways to Include Kids in Gardening).