Don’t forget your null hypothesis! In the 1990s, the “learning sciences” movement integrated computers into teaching and assessment, with the ideal product being something game-like that helped students learn. I don’t think it made much of an impact.
I think you guys are misinterpreting the null hypothesis. I read it as basically saying that stimulating the demand for education doesn't do much. Most interventions try to increase the demand for education. Like, literally, the Roland Fryer-style "I'll pay you to get good grades!" kind of thing.
AI (and the earlier generations of computer-based learning... say Duolingo and Khan Academy) is about the supply of education. Education at every level is now available at a lower cost.
Regardless of the demand for education, that's a very good thing. I know I've made use of those cheap resources. So in total, they've significantly increased the amount of education consumed.
Sure, the increase in the supply of education is largely sopped up by rent-seeking, but that's not the fault of the technological improvements. It's the fault of the socialist system of production we employ.
You beat me to it on the null hypothesis!
AI should make those interactive computer tutors much more responsive to the students' needs. It should significantly improve success.
My son taught at a high school for troubled teens for two years. Many didn't want to engage at all, but some would only engage with the interactive programs.
Should. Should. Those shoulds pulled me up short. Partly because they sounded like the vaporware of the education industry. "I would like this to happen so I will believe that it will happen." Materially, it should happen.
But also because I heard the echo of "Damn it! The world should be like this. To be a good world, it has to be like this." Morally, it should happen.
Not saying you fall into either of those things, but I encountered them quite a bit when I was in the business.
What word is better? I would be very uncomfortable saying what it WILL do.
I'm not sure. Depends on what you were trying to say. Were you saying you want AI to make computer tutors "much more responsive to the students"? Were you saying you think AI actually will make them more responsive? Something else?
Huh. I didn't think what I wrote would be unclear. OK.
I expect AI computer tutors will be better in many respects and don't expect they'll be worse in any.
I think his null hypothesis is the right one. But sometimes it can be rejected.
AI doesn't pose a threat to education; it poses a threat to educators.
This all reminds me of 25 years ago, when SparkNotes were used in my high school. SparkNotes was like ChatGPT 0.0, before ChatGPT-3 became well known. They were pamphlet-sized books with black and yellow stripes, and whenever a book or play longer than 100 pages was assigned, some students would use the notes, because they summarized the story without requiring the reader to finish the entire book. The only problem was that some teachers realized students were using them, because a lot of the students were providing the same canned responses. The benefit of SparkNotes, however, was that they could help one refresh one's memory of the plot lines, though they did not contain every detail, some of which were quite important.
If one is still trying to learn how to read, there is no difference between reading SparkNotes and a book, provided they both use the same level of expository writing. In elementary school, from 3rd to 5th grade, I played Oregon Trail and Number Munchers on an Apple II (with 5.25" floppy disks) to test the speed of my multiplication-table recall (part of the leisure time, usually at the end of the day, rather than the math hour, if I recall correctly).
Having ChatGPT write essays for a person isn't going to help them know what they selected. As with multiple choice, they aren't going to display competency using the same response as everyone else's ChatGPT response. The student can personalize their response, but when called upon, they would need to cite actual events rather than hypothetical ones, as the hypothetical ones might be the same as someone else's.
It's true a reader can develop reasoning about a concept or understand the plot/motive of a storyline even by reading an accurate summary. But when pop-quizzed on a scene, they might not have enough information from an AI-generated (and machine-inferred) summary to make their own inferences. It's kind of like saying Good Will Hunting was a romantic comedy because Minnie Driver made a joke in an obscure bar scene.
Also, if I were a professor who used chatbots to grade the exam, I would be grading both the chatbot's grades and the students'. That's up to twice the work. But as a research exercise, it might be worth discovering how accurately ChatGPT bots can grade: there are likely to be some false positives and vice versa (the bot determining an answer was wrong when it is actually more accurate and informed than ChatGPT).
Why do you think there's nothing bad about students being able to cheat more easily and effectively?
I have tried to use GPT to write paper topics, and it's terrible at it. They're always too open-ended and unspecific. I've tried a variety of pretty elaborate prompts and never gotten a single usable topic. This makes me think it would also be bad at grading papers in my field, but I haven't tried it.
The real barrier to AI integration into education will be teacher unions. They are likely to resist due to concerns about job security and the potential replacement of human teachers with AI tutors. They may offer arguments that the human element of teaching is essential for fostering critical thinking, emotional intelligence, and social skills in students. Additionally, unions will probably squawk over the privacy and ethical implications of using AI to grade and provide feedback on student work. All of this as cover for the real reason: protecting jobs (and union dues). As an example: https://www.govtech.com/education/higher-ed/campus-unions-express-concerns-about-ai-in-higher-ed
To be charitable to the unions: a good deal of teaching, at least prior to college, is personal. It is, "This is interesting, this is important; see how I present it. You can understand this, you can do it. I'll help you." Compared to a screen, humans react differently to the presence of real flesh-and-blood human beings.
Now, by the time you get to college, one might hope that you don't need that any more.
Agreed. I had some of those teachers (Here's to you Mr. Valenzuela, and you, Ms. Bennett). They are the ones who really make a difference, regardless of the union. With family and friends in the teaching profession, I've heard the good and bad about the unions. My own personal experience with them in my career (business) was decidedly negative. So I carry that baggage into discussions about them.
Time will tell. And what happened to the null hypothesis?
I have made adjustments at the margins in assessments in my seminars. I give more weight to in-class assessments (presentations and debates). I have individual tutorials with students about the assigned materials, a week or two before their presentations and debates. A component of assessment is the quality of the student's preparation for (i.e., performance in) tutorial conversations.
I encourage students to use the new tech (esp. Elicit), honestly. "Show your homework."
And John beat me, too!
Would this be a strategy for a teacher: challenge the students to get the LLM to produce an error with their query (a hallucination, not a math error). Then have them explain the error and postulate why the LLM made it. As LLMs improve, this might become a less effective method. For now, the student would need to know something to know it was wrong.
Yes, and…
We won’t need all teachers to figure this out, just a few. And we will then be able to use these entrepreneurial spirits to reinvent education, at least for those of us interested in learning (granted a tiny minority). In ten years we will be able to learn much faster and better on virtually any topic (as long as we acknowledge that 90% of people don’t want to learn anything other than what the newest version of the Kardashians are doing).
The only thing dangerous to teachers or schools is a parent who cares.
When I went to share this on Facebook, I encountered a new “AI label,” which apparently I’m supposed to toggle on if I’m posting AI-generated content. Good luck with that, Mark. Meanwhile, it inspired me to write this message to accompany my share: “The conclusion Kling reaches is no—in fact, the opposite. However, thinking about it, and seeing FB’s new “AI label,” I’m thinking AI could be dangerous to social media. Empowering people to use our own judgment to overcome algorithms and set our own terms and conditions with respect to what content we see, using AI to enforce those conditions, would upend existing models. How long until AI enables a user-programmable social media interface to get traction and disrupts current social media revenue models? Today, social media algorithms are at war with teachers for student focus, attention, and time. How about a Manhattan Project to help students and teachers win? Better yet, a non-governmental disruptor? That could put some of the enormous capital chasing AI to revolutionarily productive use.”
"The more that I think about it, the less that I think AI represents a threat to the quality of education. Instead, it seems to me that it represents a tremendous opportunity."
Yes, AI is an opportunity, especially for tech-savvy or motivated teachers. Not so much for many teachers.
AI is both an opportunity for students to learn and to avoid learning.
I suspect the negatives will outweigh the positives for many years, hopefully fewer years than I'd guess rather than more.
There will be two kinds of students: those smart enough to figure out how to use these "AIs" to improve their internal skill sets, and those who use them as a crutch/hack in order to avoid improving their internal skill sets. No different from the students who don't bother doing any of the problem sets in a textbook vs. those who do every problem.
There will also be two kinds of teachers.
I see a lot of skepticism about AI from various professors for the same reason, I think, that I see a lot of skepticism about AI from various lawyers: they are concerned that AI will replace them. Alternatively, they're stewards of increasingly outdated systems which AI threatens to upend.
If AI could be as good as the essay prophesies, it would indeed be able to replace most college teachers, at least those who are there for content transfer rather than for "classroom management" or coaching. But I have to admit I am skeptical.