AI and Academia
reacting to Kiran Garimella's post
Tools like Claude Code, OpenAI’s Codex, and Gemini CLI have transformed what a single researcher can accomplish. This is the biggest step change in how AI can be used since reasoning models showed up. These systems read and modify files on your computer, orchestrate complex data pipelines, write and execute code, iterate on problems across dozens of steps without losing context. …
I’ve spent the past year using these tools for my daily research tasks. In that time, I’ve built scrapers, dashboards, and analysis pipelines that would have previously required months of work. I’ve run visualizations on datasets I collected (if only to inspect them qualitatively), iterated on research questions faster than I ever thought possible, and shipped single-authored papers I never imagined I’d have the bandwidth to complete.
I think that there is right now a major divide between computer-savvy professionals (call them Code Warriors) and ordinary civilians in how they use AI. Ordinary civilians use AI to help them find and organize information in text format. Code Warriors use AI to do work that they would otherwise have done by writing computer programs.
Code Warriors judge AIs by how well they compare with Integrated Development Environments (IDEs) in helping with coding projects. Until late last year, IDEs were judged superior. The tools that have arrived since then seem to aid the Code Warriors, and some are predicting the obsolescence of IDEs.
Kiran is a Code Warrior. I was a Code Warrior when I was younger. I still have a vague feel for it, but I have not kept up with what Code Warriors work on and how they function nowadays. I missed the IDE era, and I never picked up the skills that Kiran is talking about for obtaining and manipulating data.
Kiran goes on to write,
Too much academic discourse treats AI as a taboo to be avoided or a fad to be ignored. We need to take our heads out of the sand: these tools are becoming a competitive necessity, and pretending otherwise is a disservice to the next generation.
Back in January, when I started teaching at UATX, I wondered if I should feel guilty for assigning students in my Public Choice class a project for which the main purpose was getting them to use AI.1 Now I feel guilty for not doing something similar in my other course.
The economics now favor substituting AI for junior scholars on exactly the tasks that used to train those scholars. This is rational at the individual level and potentially catastrophic at the collective level. I don’t know how to solve this, but I think we need to talk about it more honestly.
…This coming year, I do not plan to admit any PhD students. There are many usual reasons like funding cuts, institutional pressures, and the usual constraints. But there’s another reason I’ve been reluctant to articulate: I felt like I didn’t need one. Not in the way I would have two years ago.
So you used to give PhD students projects that turned them into Code Warriors, but now they slow you down too much. How do you help them understand what a Code Warrior knows how to do without giving them a Code Warrior’s projects?
This is an imminent issue in investment banking and law. It could become an issue in medicine. Possibly even in K-12 education. An AI may be more useful than a junior person.
Another problem that Kiran points to is the volume of low-value “research.” Most papers are produced in order to satisfy professors’ desire for tenure and status. This process wastes a lot of resources. With the help of AI, the ability to produce mediocre papers is going to shoot upward.
Society should not be super-charging the wasteful competition to fill journals with useless slop. Instead, the whole academic reward system needs an overhaul.
Still relevant is this essay from last October, by Hollis Robbins.
faculty are still riding horses. You can’t really blame them; they would be building new vehicles on unpaved roads without institutional support. But you can blame administrators. The entire undergraduate structure in the U.S. is calibrated for a steady, predictable trot. And this particular inefficiency seems fine with university leaders.
Faculty are losing access to the best laboratory for understanding how AI works: their own students. Every day students are running experiments on what AI can and cannot do, learning which prompts work and which fail, encountering edge cases and limitations. This is great empirical knowledge for redesigning courses.
The project I assigned was similar to the virtual museum.


I appreciate your formulation here, Arnold:
> Society should not be super-charging the wasteful competition to fill journals with useless slop. Instead, the whole academic reward system needs an overhaul.
I've had a hard time reconciling the things I've seen in academia and in academic research, which are often pretty sketchy, with the obvious benefits and progress that come from the top people in my field and in others. It's just hard to put these two things together.
Your view here ties it together neatly: look at how the funding works. That funding is currently directed via the process of internal review, a.k.a. peer review. My sense of that world is that you really need to scratch people's backs to do well. Tenure and funding are based on your CV, and your CV is based on review by peers.
That is where the incentives lie, and systems will generally evolve based on those incentives, with no intent at all on the part of the individual participants in the system. Resources simply go to some people more than others. People without resources fade out, and over time, everyone in the system looks around and sees other people just like them.
It is structurally very similar to a church: take today's shamans, today's miracle workers, and then direct public funding toward them. What happens is that, no matter who was in that group to start with, you will transform the organization into something that first and foremost keeps the funding coming.
Hollis has been killing it lately with her takes on AI in academia... this one in particular:
https://hollisrobbinsanecdotal.substack.com/p/attention-is-all-you-need-to-bankrupt