LLM links, 8/31
Alice Evans on language models in the classroom; and Evans on helping students adapt to AI; ASU leans in to AI; John Bailey on findings on AI in education
Instead of asking students to ‘summarise the readings’, we’ll use Claude, an AI assistant, to analyse papers and explore alternative hypotheses. Students will be encouraged to dig deeper, applying rigorous scrutiny to AI-generated insights.
She is willing to experiment. Probably some of her experiments will not go well. But I agree that it is better to try to have the technology work for you than to just ignore the possibilities.
In a subsequent post, Evans writes,
My strategy is thus two-fold: teaching technological literacy, so that students use AI carefully and productively, while simultaneously changing my assessments so that they go beyond AI’s current capabilities, and instead assess students’ ingenuity.
Her assessment questions are extremely challenging. For example,
How does evidence from historians like Cook and J.R. Neil challenge the conventional wisdom from political science about the roots of authoritarianism in the Middle East and North Africa?
…Identify the primary driver of low fertility in South Korea. Design an intervention that would tackle binding constraints. Provide justification and academic references.
My guess is that the best students will benefit from her approaches. But I would be pessimistic about how well the typical student might do.
OpenAI (ChatGPT) announces a partnership with Arizona State University.
In February 2024, ASU invited faculty and staff to submit proposals for using ChatGPT to maximize their teaching, research, and operations. Submissions required a clear plan for integrating ChatGPT into one of three priority areas:
Supporting teaching and learning: Proposals that enhance the educational experience for students and faculty within a class setting.
Advancing research for public good: Proposals that support student- and faculty-led research that demonstrate a clear path to making meaningful community and planet contributions.
Enhancing the future of work: Proposals that contribute to a more positive, productive, and supportive work environment.
Within weeks, the ASU team received proposals representing more than 80% of ASU’s schools and colleges…
By July, ASU had received over 400 proposals, with more than 200 projects activated across the majority of their departments and colleges.
Pointer from Rowan Cheung.
One of the most promising areas for AI in education is in automating the grading process, particularly for short-answer questions. The study, “Can Large Language Models Make the Grade?” examined how well GPT-4 and GPT-3.5 graded student responses across subjects and grade levels. Remarkably, GPT-4’s grading accuracy scored 0.70 on the measurement scale, which is nearly as high as the 0.75 score human graders achieved.
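To make the grading application concrete, here is a minimal sketch of how a short-answer grading call might look, assuming the OpenAI Python client. The prompt wording, rubric, and 0-to-1 scoring scheme are illustrative assumptions of mine, not the protocol used in the study Bailey describes.

```python
# Minimal sketch of LLM-assisted short-answer grading.
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
# The rubric and prompt are illustrative, not the study's actual setup.
from openai import OpenAI

client = OpenAI()

def grade_short_answer(question: str, reference_answer: str, student_answer: str) -> str:
    """Ask the model to score a student's short answer against a reference answer."""
    prompt = (
        "You are grading a short-answer response.\n"
        f"Question: {question}\n"
        f"Reference answer: {reference_answer}\n"
        f"Student answer: {student_answer}\n"
        "Return a score from 0 (incorrect) to 1 (fully correct), "
        "followed by a one-sentence justification."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep grading as consistent as possible across runs
    )
    return response.choices[0].message.content

# Example usage:
# print(grade_short_answer(
#     "What does GDP measure?",
#     "The market value of final goods and services produced in a country over a period.",
#     "How much stuff a country makes in a year.",
# ))
```

In a real deployment the scores would then be compared against human graders, which is the agreement the study is measuring.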
Mollick had a post yesterday related to this topic. One section, "Encouraging, not replacing, thinking" is pertinent. But it all boils down to remediating our education process, in my mind, and I think there will be winners (homeschooling and people like Evans, for example) and losers (public schools dominated by unions and mandarins).
“Students will be encouraged to dig deeper, applying rigorous scrutiny to AI-generated insights.” Brilliant. Thank you. I plan on doing this with my own kids to enhance discussion of things we read. Following the link, I see she has written a Tiny Textbook. More Tiny Textbooks here on Substack please. When is Dan Williams going to write one?