AI links, 3/16/2025
Jason Furman on regulatory principles; Mathis Bitton and Jack Sadler against screens in schools; Benedict Evans on AI doomers; Matt Yglesias on employment doom
I still do not like the term AI, but I’m caving in to the mainstream. So what used to be LLM links will now be AI links.
Jason Furman writes,

Compare AI with humans, not to the Almighty. Autonomous cars crash—but how do they compare with human drivers? AI may show biases, but how do these stack up against human prejudices (Kleinberg et al. 2017)?
He offers other advice for AI regulation based on common sense and economics.
Mathis Bitton and Jack Sadler write,
The neuroscientist Karin James has shown that “handwriting is important for the early recruitment in letter processing of brain regions known to underlie successful reading.” Comparing five-year-olds learning how to write by hand to five-year-olds learning to type instead, she found that the first group awakened entire parts of the brain that remained dormant among the second group. Her MRI scans provide scientific evidence of what most of us know intuitively: handwriting cultivates a certain relationship to the written word that typing cannot replicate. The slow, patient, careful carving of letters develops our minds in a way that the frenetic hammering of keys does not.
They are arguing against computers in schools.
The result is an entire generation incapable of writing without a keyboard, adding without a calculator, drawing without an image-generator, paying attention without a curated interface, and—soon enough—thinking without ChatGPT.
Interesting. But if you were to go back in history to when writing was invented, could you not make similar arguments about what people would lose when they no longer learned to recite epic poetry from memory?
In a paywalled interview with Stratechery’s Ben Thompson, Benedict Evans says,
the people who were thinking this stuff is going to take over the world in the next three months just had no conception of how the world worked outside of their shared group house in the Berkeley Hills.
More generally, he has the same issue that I have, which is that, apart from coding uses, AI has yet to have its “VisiCalc moment,” where a large segment of users looks at it and realizes: I must have this.
Evans also says,
We don't know what that roadmap is for the fundamental technical capabilities of the thing, which is different to anything from the web to flight or cars.
As he points out, in the early days of the Web, all you had to do to predict its spectacular growth was just extrapolate the exponentials of Moore’s Law and broadband penetration. We do not have the same sort of parameters for predicting the path to mass-market adoption of LLMs. When someone in 1994 commented on my first web site, “Congratulations: you’ve set up your lemonade stand on the moon. Now you just have to wait for the astronauts to get there,” it was almost certain that the astronauts would get there, and one could predict the rate at which the civilian population would follow. With AI, it is not so clear. Maybe relatively few people will “sign up” for AI; instead, it will just become part of the background of how things work.
Matt Yglesias writes,

If you look at the Bureau of Labor Statistics data for high-level occupational groups, the single largest one is “Office and Administrative Support Occupations,” which collectively employs 12 percent of the workforce. That includes 2.7 million financial clerks, 2.8 million customer service representatives, 3.1 million secretaries and administrative assistants, and 2.5 million people listed as “office clerks, general.” I’m not saying all 18 million of these people are going to be out of a job immediately. But a very large share of Americans have jobs in which reading and writing documents is a very large share of the job responsibility. AI has gotten very good at this and is getting better at a rapid pace.
You don’t need to believe maximalist claims about superintelligence to see that a big storm is coming.
There are many competing explanations for the depth and severity of the Great Depression in the United States. But one under-explored story is the large amount of job displacement due to technological change.
The agriculture sector, which was still large as of 1920, was disrupted by tractors and trucks powered by the internal combustion engine. Tractors made many farm laborers obsolete. Trucks made many low-productivity farms obsolete, because produce could be shipped in from high-productivity farms.
The manufacturing sector was disrupted by the electric motor. Instead of having humans blow the glass for electric light bulbs, it became economical to have machines do it.
Many jobs were lost in the 1930s. The jobs that came back in the 1940s were different jobs, for a new cohort of workers with different skills.
The same could happen with AI. But I don’t have a good sense of what the new jobs will look like, in part because we cannot be confident about what the comparative advantage of AI will turn out to be.
On the Yglesias article, I think the secretaries and office managers are precisely the people who will not lose their jobs to AI. They do a lot of random different things (so maybe LLMs can do some of those things but not all of them, or maybe LLMs make them more productive and they start doing other things, as happened with bank tellers), and they have already been disrupted by previous rounds of technological innovation.
Hear, hear for hostility to the term “AI.” I’ve worked in machine learning for almost 20 years, and I still don’t understand what these deep learning models are supposed to be used for. VisiCalc was directly useful, but these models can, um, generate generic digital content?
Seems like a lot of folks hear the term “Artificial Intelligence” and start reasoning from there, imagining Skynet and whatnot, regardless of what these models actually are. What would’ve happened if the models had a different, more boring name like “serial delinearized regression” (because that’s what the models are) or something? Would any of these commentators even be aware of them?