Canvas is likely dead in the long run, but it does have a moat of sorts. We use it at my university, and part of what it does is certify that the readings you post have the proper "accessibility" score to satisfy federal regulators. I upload a reading, Canvas rates it, and I can take steps (mostly giving the full citation to Canvas) that produce a higher score. Department chairs here monitor your accessibility scores and let you know if they drop. Canvas also manages submission of student assignments, handles grade sheets, links into our grade submission system (yet another legacy system), and populates syllabi from the syllabus management system (called, non-ironically, "Simple Syllabus"), which automatically fills syllabi with university and department policies, handles the review process by the committee making sure we aren't illicitly teaching "gender ideology," and certifies the accessibility of syllabi for regulators (a theme is emerging!). An AI can surely build an app to do the substantive tasks, but central IT and the general counsel are unlikely to let a thousand flowers bloom when it comes to compliance functions. As usual, the chokehold on innovation is going to be bureaucracy and regulation.
I'd highly recommend the recent Lex Fridman podcast episode with Peter Steinberger on creating Open Claw - https://www.youtube.com/watch?v=YFjfBk8HI5o - very interesting discussion of the future of agentic AI.
Insisting on mastery in that fashion will likely thin out the student body almost instantly.
And because of that, it will cease to exist wherever it is tried. It may not be dropped explicitly; instead, the standard for an "A" will be lowered enough that most everyone can pass.
The education reforms of the early 2000s featured "high-stakes tests." In many states, students had to take and pass them in order to graduate. But the passing rates were terrible. So the "cut score" was lowered, and the tests were made easier. In many states, the tests were dropped, or passing was no longer required for graduation.
On research papers: The reason I did not go into academia way back in 1991 was that it was clear that what mattered was the number of papers, not their quality. You needed a long list of publications to secure funding and/or tenure.
So you gamed the system with the "Least Publishable Unit." Each paper had just enough "new" content to clear peer review, even if 50-70% of it was just a recap of earlier work.
AI will destroy this, if it hasn't already. The number of papers a researcher can produce by slicing and dicing with AI is now so large that it's meaningless when being evaluated for money or tenure.
Good riddance.
Arnold writes: I envision the AI quizzing students and asking follow-up questions to determine mastery. You need a system to deter students from using AI to answer. Special examination rooms with human monitors?
In 2023, I took the CFP® exam in a monitored room replete with observation cameras, but only after locking my phone, keys, and lunch in a cubicle, and then going through a body search and an inspection of my HP 12c calculator on the way to an assigned workstation. Most exams are not going to be six hours, but the queue into that room moved only as fast as the admitting workers could process it. FWIW, many others were in that room taking monitored exams for other purposes. This approach encompassed best-known methods to thwart cheating. Such precautions may be the price required to ascertain mastery of a subject. Unintended consequences of a brave new world?
“A guide on the side, not a sage on the stage.” Comes from a 1993 article, though the idea goes back at least to Rousseau.
I’m sure it’s possible to get an AI to read a paper, and the papers it cites, and then create a paper-specific set of questions relevant to the course and the paper. The student could then spend some 30-60 minutes on those questions, with both an AI and a human grading the short or long verbal answers, demonstrating mastery of the course through that specific paper. AI help and instruction on the paper is assumed, along with AI (and professor) accuracy.
The two points of education are to ensure background knowledge, like multiplication tables, and to develop processes of thinking about issues. Testing students is the main job of teachers, and grades are the main motivation of students. The more they need for the test, the more will be learned.
There won’t be a dead classroom of AI teachers teaching AIs, but there are, and will be, students using AI and other methods to get a good grade with a minimum of (hard work!) learning.
“ I was aiming to do this myself at UATX.”
Does your use of “was” mean that your association with UATX - at least as a professor - is over?
I was just a visitor. They seem likely to ask me to return next year. Austin is too far from our grandchildren to make a permanent move attractive for me.
"My own thinking is that students should use AI teaching tools to obtain mastery of concepts. Mastery rather than grading. Under a mastery system, everybody gets an A only because everybody needs an A in order to pass." Well, there is a bit of this already at most universities, and it is a contribution to grade inflation! Of course, it won't work for all the skills one wants to develop, such as thinking hard about something or coming up with something new.
It's hard to tell the narratives about various historical heroic figures in any field without creating a strong urge to imitate the academic patterns of those figures' stories. If all those heroes from 50, 75, or 100 years ago went to college classes with lectures and grades (literally the "old school" approach), then got their PhDs, and "published important papers" that "revolutionized their fields" or "gave a series of talks which inspired a whole new generation of researchers," and so forth, then that writes a kind of "life script" that other people dream about following, in part because by following the same pattern as previous elites, they display to others exactly the kind of milestones associated with those revered elites. This mechanism produces a kind of lower-case-c conservatism in the institutions and cultures of many fields, not just scholarship or academics but anything with former generations of elites and heroes: military, finance, entertainment, etc.
When you ask people to be revolutionary and throw all that out for practical, outcome-based reasons, given new contexts and new technologies, you are also asking them to throw out this whole human pattern of scripting, dreaming, and status-signaling. You are asking them to dream new dreams, but in the new dream there is no longer a place for people like them, who have invested their lives doing things the "old school" way. Unless there is some way for them to "preserve their rank, seniority, and pay grade" in the new system with their old credentials, they are going to fight, and come up with endless bogus rationalizations to try to justify that fight.
I agree with your thrust, I believe.
A shorter way of phrasing it might be:
“Unless and until there are proper incentives for those producing research to change their behavior, most are very unlikely to change their behavior.
Incentives matter.”