AI Links, 4/22/2025
Practice flirting with an AI? Scott Alexander and Daniel Kokotajlo make predictions; Hollis Robbins on robotic instruction; Narayanan and Kapoor on AI as normal tech; Ethan Mollick's reaction
Users are scored on a three-flame scale, with the AI offering real-time feedback throughout the experience—whether they're making progress or falling flat. The more charm, humor, and wit they bring, and the smoother the conversation, the closer they get to earning all three flames. But if they're rude or missing the mark, the AI steps in with prompts, like suggesting they tone down the sarcasm or dig deeper with follow-up questions, to help them improve and keep the chat on track.
Pointer from Rowan Cheung. Is there any scenario in which this turns out to be a good idea? All I can imagine is the AI teaching you to have a fake personality. But maybe that’s what dating apps do regardless. Fortunately, I’m too old and too married to be interested in finding out.
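Still, for the curious, the scoring-and-feedback loop the article describes is easy to sketch. Below is a minimal, hypothetical illustration; the rubric text, the score_turn helper, and the generic llm() callable are all my own placeholders, not anything from the actual app.

```python
# Minimal sketch of a "three-flame" coaching loop, assuming a generic llm()
# helper that sends a prompt to some chat model and returns text.
import json

RUBRIC = (
    "You are a conversation coach. Score the user's latest message from 0 to 3 "
    "'flames' for charm, humor, and smoothness. If the message is rude or off "
    "the mark, add one concrete suggestion (e.g., tone down the sarcasm, ask a "
    "follow-up question). Reply as JSON with keys 'flames' and 'feedback'."
)

def score_turn(llm, conversation, user_message):
    """Ask the model to grade one turn; returns (flames, feedback)."""
    prompt = (
        RUBRIC
        + "\n\nConversation so far:\n" + "\n".join(conversation)
        + "\n\nUser's new message:\n" + user_message
    )
    result = json.loads(llm(prompt))   # assumes the model returns well-formed JSON
    return int(result["flames"]), result["feedback"]

def coaching_session(llm):
    """Real-time loop: score each turn and surface feedback until 3 flames."""
    conversation = ["AI: So, what's the most spontaneous thing you've done lately?"]
    while True:
        user_message = input("You: ")
        flames, feedback = score_turn(llm, conversation, user_message)
        print(f"Flames: {flames}/3 - {feedback}")
        conversation.append("User: " + user_message)
        if flames == 3:
            print("All three flames earned.")
            break
```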
we [Scott and Daniel Kokotajlo] think that 2025 and 2026 will see gradually improving AI agents. In 2027, coding agents will finally be good enough to substantially boost AI R&D itself, causing an intelligence explosion that plows through the human level sometime in mid-2027 and reaches superintelligence by early 2028. The US government wakes up in early 2027, potentially after seeing the potential for AI to be a decisive strategic advantage in cyberwarfare, and starts pulling AI companies into its orbit - not fully nationalizing them, but pushing them into more of a defense-contractor-like relationship.
Because Cal‑GETC has already fixed the learning outcomes, those outcomes can be embedded as system prompts in the AI/LLM. The “class” becomes a structured dialogue: each week students receive a scenario prompt – a statistics table on wildfire frequency, a paragraph from John Steinbeck, a primary‑source budget graph – together with a guide to the intellectual understanding expected. Students refine the prompt, interrogate the model’s answer, and produce a brief artifact demonstrating competence. Every exchange is logged, every artifact appended, forming a transparent portfolio of learning.
Her point is that California already effectively mandates the basic curriculum in the opening semesters at state institutions, so why not go with a robot to teach it? I would add that if you want to try to detect “cheating using AI,” what better than an AI to do the detection?
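To make the mechanics concrete, here is a minimal sketch of the setup Robbins describes: fixed learning outcomes embedded as a system prompt, every exchange logged, and every artifact appended to a running portfolio. The outcome text, class names, and the generic llm() callable are placeholders of my own, not drawn from Cal-GETC or any actual implementation.

```python
# Sketch: fixed GE learning outcomes embedded as a system prompt, with every
# exchange and artifact logged to a transparent portfolio of learning.
import json, datetime

LEARNING_OUTCOMES = (
    "Course outcomes: interpret quantitative data presented in tables and graphs; "
    "evaluate an argument in a primary source; write a short, evidence-based analysis."
)

class PortfolioCourse:
    def __init__(self, llm, student_id):
        self.llm = llm
        self.student_id = student_id
        self.log = []                      # running record of every exchange

    def exchange(self, scenario, student_prompt):
        """One structured-dialogue turn: weekly scenario plus the student's refined prompt."""
        full_prompt = (
            f"{LEARNING_OUTCOMES}\n\nScenario:\n{scenario}\n\nStudent:\n{student_prompt}"
        )
        reply = self.llm(full_prompt)
        self.log.append({
            "time": datetime.datetime.now().isoformat(),
            "scenario": scenario,
            "student_prompt": student_prompt,
            "model_reply": reply,
        })
        return reply

    def submit_artifact(self, week, text):
        """Append the student's brief artifact demonstrating competence."""
        self.log.append({"week": week, "artifact": text})

    def export_portfolio(self, path):
        """Write the full log to disk as the student's portfolio."""
        with open(path, "w") as f:
            json.dump({"student": self.student_id, "log": self.log}, f, indent=2)
```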
Arvind Narayanan and Sayash Kapoor write,
we think that transformative economic and societal impacts will be slow (on the timescale of decades). We make a critical distinction between AI methods, AI applications, and AI adoption, arguing that the three happen at different timescales.
They see jobs changing to ones that involve controlling AI.
Before the Industrial Revolution, most jobs involved manual labor. Over time, more and more manual tasks have been automated, a trend that continues. In this process, a great many different ways of operating, controlling, and monitoring physical machines were invented, and what humans do in factories today is a combination of “control” (monitoring automated assembly lines, programming robotic systems, managing quality control checkpoints, and coordinating responses to equipment malfunctions) and some tasks that require levels of cognitive ability or dexterity that machines are not yet capable of.
So today’s post links to the two extreme views of AI. One view is that AI is something spectacular and that it is already here (Alexander and Kokotajlo). The other view (Narayanan and Kapoor) is that AI is like other technologies: its applications and adoption will arrive gradually, so its impact will not be felt as much in the near future as in the long run. I’m more in line with the latter view.
as many people have pointed out, technologies do not instantly change the world, no matter how compelling or powerful they are. Social and organizational structures change much more slowly than technology, and technology itself takes time to diffuse. Even if we have AGI today, we have years of trying to figure out how to integrate it into our existing human world.
Of course, that assumes that AI acts like a normal technology, and one whose jaggedness will never be completely solved. There is the possibility that this may not be true. The agentic capabilities we're seeing in models like o3, such as the ability to decompose complex goals, use tools, and execute multi-step plans independently, might actually accelerate diffusion dramatically compared to previous technologies. If and when AI can effectively navigate human systems on its own, rather than requiring integration, we might hit adoption thresholds much faster than historical precedent would suggest.
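To picture what “agentic” means in practice, here is a toy sketch of the loop Mollick is gesturing at: the model decomposes a goal, picks a tool, acts on the result, and repeats until it decides it is done. The llm() helper, the tool registry, and the JSON protocol are assumptions of mine, not how o3 or any particular product actually works.

```python
# Toy agent loop: decompose a goal, pick a tool, act, repeat. The llm() helper
# and the tools are placeholders; real agent frameworks differ in the details.
import json

def search_web(query):
    """Stand-in tool: a real agent would call an actual search API here."""
    return f"(pretend search results for: {query})"

def run_python(code):
    """Stand-in tool: a real agent would sandbox and execute the code."""
    return "(pretend execution output)"

TOOLS = {"search_web": search_web, "run_python": run_python}

def run_agent(llm, goal, max_steps=10):
    """Work toward a goal in steps, using tools until the model says 'finish'."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        prompt = (
            "You are an agent working toward a goal. Reply with JSON of the form "
            '{"tool": "search_web" | "run_python" | "finish", "input": "..."}\n\n'
            + "\n".join(history)
        )
        decision = json.loads(llm(prompt))        # assumes well-formed JSON back
        if decision["tool"] == "finish":
            return decision["input"]              # the agent's final answer
        observation = TOOLS[decision["tool"]](decision["input"])
        history.append(f"Used {decision['tool']}; observed: {observation}")
    return "Stopped after max_steps without finishing."
```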
I appreciate the link! Embedded in my piece is my very impolitic view (which comes from first-hand knowledge, as a dean who used to read end-of-semester student teaching evaluations closely) that most of the grueling work of teaching the general education (GE) courses in the CSU is done badly, or let's say at a mediocre level, with some great instructors. But it's the luck of the draw; the new "doesn't matter who is teaching" paradigm takes away any incentive for improving instruction. So at the very least, if AI is delivering instruction, the value can be measured. Seriously, more people should look under the hood at the CSU.
My feeling is that the raw intellectual horsepower of these models is beginning to plateau. o3 is demonstrably worse than o1 pro on various metrics (just compare their model score cards), and the big reason OpenAI chooses such silly model names is that it obfuscates the fact that most of their recent releases are not so much true breakthroughs as products that occupy specific niches.
That said, I think they're definitely smart enough as-is to give organizations years of work to fully harness their benefits. The big bottleneck right now is that they don't interact with different systems very well, and that's the work most people spend their time on.
So there's a lot of tooling that still has to be built for these things and that's where most of the interesting work lies ahead.
It's not a slam dunk that they'll be able to make the leap from omniscient question-answerers to functional, white-collar-quality system integrators, but I'm curious to find out.
I also think the transition to agentic AIs might make smaller, customized models more feasible. Most organizations don't need an AI that can solve Fields Medal-calibre mathematical problems; they need one that can adequately internalize the proprietary information the company works with. A lot of that stuff just isn't represented on the internet very well, so I can definitely see industry-specific AI tools being built in conjunction with the companies that work in a given field to solve very specific problems for them, with big licenses then charged to similar companies. There's probably a long enough tail in this endeavor that it's not feasible for the hyperscalers to cover all of these niches.