LLM links, 2/21/2025
Hollis Robbins on AI and higher ed; Anthropic researchers on how Claude is used; Michael Strain on AI impact; Yann LeCun on the limits of LLMs
Hollis Robbins writes,
The AGI systems launching now can reason, learn, and solve problems across all domains, at or above human level…
in the AGI era, the only defensible reason for universities to remain in operation is to offer students an opportunity to learn from faculty whose expertise surpasses current AI. Nothing else makes sense.
Professors who are worried about students using AI to cheat on assessments are focused on the wrong problem. The right question, according to Robbins, is what the human professor can teach that the student could not learn on his or her own using AI.
Kunal Handa and co-authors from Anthropic write,
Our analysis reveals that computer-related tasks see the largest amount of AI usage, followed by writing tasks in educational and communication contexts.
Their chart shows that 37 percent of conversations with Claude are in the realm of “computer and mathematical,” a share that dwarfs any other individual category. Pointer from Tyler Cowen.
Interviewed by James Pethokoukis, Michael Strain says,
What generative AI offers the promise of is to give every student his or her own teacher that can move quickly through material that the student has already mastered, that can linger on aspects of the material that the student finds challenging, and this, I think, could really revolutionize education in a way that makes graduates of high school substantially better-educated with substantially more skills that they can take into the workforce or take into higher education.
Even more important than the United States is this technology for parts of the world where current educational quality is much lower. This technology could unlock the economic potential of Africa, of India, of parts of South America where there are a lot of kids who just don’t have access to good schools. And if you’re talking about bringing hundreds of millions of kids into the ranks of the world labor force that are educated at the high school level, that could have profound implications for the level and pace of invention, of innovation, and of productivity gains.
I hope this is right. But at least since the advent of television, pundits have predicted dramatic improvement in education. So far, it has not happened.
YouTube has Yann LeCun’s talk at the AI Action Summit in Paris. Around minute 6 he gets into why Large Language Models are not the path to advanced machine intelligence. He re-emphasizes this point in the last few minutes of the talk.
I think that LeCun’s intuition gets back to a point that came up in my Substack Live discussion with James Cham. I was saying that the combination of LLMs and robots ought to produce great results. James said that the problems that robots deal with are more complicated. The three-dimensional real world is much harder to deal with than text, and the algorithms that work for text do not simply transfer to that more complex problem.
I think that LeCun would say that the multimodal capabilities that have been added to chatbots are mere “hacks.” They will not generalize to enable the models to solve real-world problems.
My guess is that those of us here do not have a deep enough understanding to evaluate LeCun’s pessimism about LLMs.
"But at least since the advent of television, pundits have predicted dramatic improvement in education. So far, it has not happened."
That's because the pundits are all about the supply side. They don't even bother to think about demand. But the terrible, horrible, no good, very bad truth is that most young people don't have much inherent interest in most of what they are supposed to learn to be considered "educated".
So they don't rush to take advantage of all these new possibilities. It doesn't really matter much whether something you're not interested in is presented in a textbook, on a computer screen, or in an answer from an LLM. You are probably not going to make a great effort to engage with it, to think about it, to try to make sense of it. You'll probably do what you've done since grades began to matter: "memorize and forget," packing enough into short-term memory to get an acceptable mark on a test and then allowing the knowledge to "decay".
Of course, some students do care about some of the curriculum. That includes most of the people who comment here. LLMs might have made a difference to them.
I’ve found that LLMs are fantastic for rounding out and extending ideas that you already have. I’m using the most advanced o3 model to start my new company. I have an idea I want to turn into a company, and o3 helps me get the thoroughness work and extensions of the core idea done: with deep research it explores aspects of the idea I haven’t thought of, and it turns the idea into one-pagers, marketing materials, an assessment based on the idea, and so on. That’s saving me hundreds of hours and a lot of cost. It’s a force multiplier, but when I ask it for truly creative ideas of the type I’m founding my new company on, it is underwhelming.