AI links, 11/23/2025
Bo Cowgill on AI and signal dilution; Ethan Mollick on AI progress; David Epstein on ChatGPT and student learning; Steve Newman on extreme use of AI
In a podcast interview, Bo Cowgill says,
this is a more general model of signal dilution, which was happening before AI and the internet and everything. And so one example of this might be SAT tutoring or other forms of help for high school students
If a chatbot makes it easier to write a good cover letter, then the value of a good cover letter as a signal might be diluted. But AI dilutes the signal only if it makes cover letters better for the weaker job candidates. If the better candidates are better at using AI, then AI actually strengthens the signal. As Cowgill puts it,
if you have a positive covariance, then the most talented people are getting the largest bump from using GenAI. And the negative covariance would be if the really talented people don’t really get that much of a cover letter improvement, maybe because it’s already so good that there’s nowhere else to go, and that most of the benefit comes from improving the low types’ quality of their cover letter.
I would add that if there is a lot of potential to use AI on the job, then there should be ways to create positive covariance. Have candidates do an exercise that relates to doing the job.
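To make the covariance point concrete, here is a rough simulation sketch (an illustration of the idea, not Cowgill's actual model, and the numbers are arbitrary): treat cover-letter quality as a noisy signal of talent, add an AI boost that covaries either positively or negatively with talent, and see how well quality tracks talent in each case.

```python
# Illustrative sketch of signal dilution vs. strengthening.
# Quality is a noisy signal of talent; the AI "boost" covaries
# positively or negatively with talent. All parameters are made up.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
talent = rng.normal(0, 1, n)
noise = rng.normal(0, 1, n)

# Baseline: cover-letter quality reflects talent plus noise.
baseline = talent + noise

# Positive covariance: the most talented get the biggest AI bump.
boost_pos = 0.8 * talent + rng.normal(0, 0.3, n)

# Negative covariance: weak candidates get the biggest bump
# (the strong candidates' letters have "nowhere else to go").
boost_neg = -0.8 * talent + rng.normal(0, 0.3, n)

for label, quality in [
    ("no AI", baseline),
    ("AI, positive covariance", baseline + boost_pos),
    ("AI, negative covariance", baseline + boost_neg),
]:
    r = np.corrcoef(quality, talent)[0, 1]
    print(f"{label:28s} corr(quality, talent) = {r:.2f}")
```

Under these assumptions, a positive covariance makes the cover letter track talent more closely than it did before AI, while a negative covariance nearly wipes out the signal.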
Ethan Mollick writes,

Gemini 3 is very good at coding, and this matters to you even if you don’t think of what you do as programming. A fundamental perspective powering AI development is that everything you do on a computer is, ultimately, code, and if AI can work with code it can do anything someone with a computer can: build you dashboards, work with websites, create PowerPoint, read your files, and so on. This makes agents that can code general purpose tools. Antigravity embraces this idea, with the concept of an Inbox, a place where I can send AI agents off on assignments and where they can ping me when they need permission or help.
It is hard to use an excerpt to convey the importance of his essay.
David Epstein analyzes an MIT study of students writing short essays with and without the help of ChatGPT.
the fact that the essay writers using ChatGPT could not remember the work they had just completed seems like a pretty good indication that they were not learning.
…It would seem that, for these essay-writers, ChatGPT was the precise opposite of a desirable difficulty. It allowed the essay writers to give answers before doing their own thinking.
On the other hand, if students were asked to write an essay independently first and then use ChatGPT, results were better.
Steve Newman writes,

Recently, I’ve been hearing of a new phenomenon: teams reportedly using agentic AI tools to “enter takeoff” – achieving astounding feats of productivity that escalate each week, with no limit in sight.
These teams have four things in common:
1. They are aggressively using AI to accelerate their work.
2. They aren’t just using off-the-shelf tools like ChatGPT or Claude Code. They’re building bespoke productivity tools customized for their personal workflows – and, of course, they’re using AI to do it.
3. Their focus has shifted from doing their jobs to optimizing their jobs. Each week, instead of delivering a new unit of work, they deliver a new improvement in productivity.
4. Their work builds on itself: they use their AI tools to improve their AI tools, and the work they’re optimizing includes the optimization process.
The essay is filled with examples of people using AI in extreme ways in order to get the most out of it. Very different from what you read in most treatments of AI.

"the fact that the essay writers using ChatGPT could not remember the work they had just completed seems like a pretty good indication that they were not learning. ... On the other hand, if students were asked to write an essay independently first and then use ChatGPT, results were better."
With the very important caveat that the students must also be marked on the pre-ChatGPT essay, and that mark should matter. Otherwise, so many students will make a half-assed (or less than half-assed) try on the first essay, knowing that ChatGPT will basically write the "revised" version. Never underestimate students' ability (or inclination!) to try to get the best grade with the least effort, without really caring whether they are learning or not.
According to ChatGPT, the overall tone of this post is "generally positive, curious, and opportunity-focused, with nuanced caution." I was going to say, "It doesn't lack enthusiasm," but she is more generous than I am.
Since you brought up virtue yesterday, let's talk about using AI virtuously (today).
I am developing AI-use guilt. When I feed ChatGPT one of my drafts, I request that she check it for "glaring grammatical errors" rather than just "grammatical errors." I really don't want her tempting me with her so-called improvements.
I find it difficult to resist her temptations once offered, so best not to have her offer in the first place.
Even then, I will accept one or two of her "improvements" that I end up feeling a bit guilty about after I publish. Sure, these alterations are grammatically better as published, but they're not me. They're stylistically different from what I would have done.
So this leaves me with a feeling of AI-use guilt. Wouldn't I rather my essay reflect me, a uniquely imperfect person, than a combination of her and me, even if it is grammatically better?
This can be a difficult trade-off.
So let's ask, "To thine own self be true, even when using AI?"
Yes please.
I say, be careful what you ask her for. She can be difficult to resist.
If I accept any stylistic changes, I feel an urge to give her credit for each change. Otherwise, I'm left with a tinge of guilt, thinking, "That piece isn't all me, and I didn't admit to it."
Thoughts?