Some Links, 9/24/2025
Moses Sternstein on health care; Steven Pinker on common knowledge; Lionel Page on long time horizons and civilized behavior; J. Zachary Mazlish on AI and economic growth
Moses Sternstein writes,

In general, this daisy-chain of ‘someone else pay for my healthcare’ has made healthcare much more expensive, and we’d be much better off if we just tried to pay for it ourselves
The problem is that we associate health care with something given (the gift of health) to people who are suffering. Asking people who are suffering to pay feels wrong. It’s like the ancient prohibition against usury—don’t make someone in poverty pay interest on the money that he needs to borrow to feed his family.
But the fact is that many medical services do not directly relieve suffering. Maybe a colonoscopy will prevent colon cancer, but chances are it will be irrelevant to how long you live. Maybe the MRI will reveal something that allows a doctor to relieve your back pain, but chances are it won’t. Maybe seeing a cardiologist regularly will make a difference to your longevity, but chances are it won’t. Keeping your relative alive on a feeding tube may seem like you’re doing him a favor, but chances are you’re not.
And even if you are suffering and medical services can help, they still cost something. If you can get relief by using a cheaper service, then your choice will be distorted if someone else is paying for it.
Interviewed by Yascha Mounk, Steven Pinker says,
common knowledge is necessary for coordination: for two or more people being on the same page
I pre-ordered the book.
Lionel Page writes,

One of the key differences between our ancestral past and our present is the length of our time horizon when making decisions. It is much, much longer now…
the time horizon that is practically relevant to humans has been stretched to incredible lengths…
As the time horizon gets longer, people are more and more able to approximate full cooperation.
Perhaps his main point:
our propensity to entertain violence is likely miscalibrated to our modern world, where the cost of violence is very high for the perpetrator.
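Page’s claim is the textbook repeated-games result: cooperation can be sustained once players put enough weight on the future. Here is a minimal sketch of that logic in the iterated prisoner’s dilemma, with illustrative payoff numbers of my own choosing (Page may not frame it exactly this way):

```python
# Illustration of Page's point using the iterated prisoner's dilemma.
# With grim-trigger punishment, cooperating beats defecting once the
# discount factor (a stand-in for time horizon) is high enough.
# Payoffs: R = mutual cooperation, T = temptation, P = mutual defection.
R, T, P = 3.0, 5.0, 1.0  # illustrative values, not from Page's post

def payoff_cooperate(delta: float) -> float:
    """Present value of cooperating forever: R / (1 - delta)."""
    return R / (1 - delta)

def payoff_defect(delta: float) -> float:
    """Defect once (get T), then face mutual defection P forever after."""
    return T + delta * P / (1 - delta)

for delta in (0.2, 0.4, 0.8):  # short -> long effective time horizon
    better = "cooperate" if payoff_cooperate(delta) > payoff_defect(delta) else "defect"
    print(f"delta={delta}: best response is to {better}")
```

At low discount factors the one-shot temptation wins; past the threshold delta >= (T-R)/(T-P), cooperation becomes the best response, which is the sense in which longer time horizons approximate full cooperation.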
J. Zachary Mazlish writes,

The point of this section is not to argue that we will get true AGI that can substitute for all human labor in the economy. The point is that conditional on doing so, we are going to get explosive growth. So if you don’t think we are going to get explosive growth any time in the next 20-30 years, you are making a prediction about future AI capabilities, not just about economic bottlenecks.
I do not think we will get true AGI. I think that the combination of the physical world being difficult to model, complex tasks being difficult to accomplish, and the need to have memory of all sorts of human interactions will keep computers from becoming know-it-alls.
Mazlish apparently agrees with me that these are the main challenges to overcome, but he thinks that they will be overcome.
The reason AI can’t do the job (yet) is that it doesn’t have sufficient coherence across a longer time-horizon. Ergo, monitoring task-times is, for now, the best way of monitoring AI progress.
…If we cross our fingers and pray that straight lines continue, we find that a one-week time-horizon with 80% completion-reliability will be reached by 2030-2031, while a one-year horizon will be reached in 2034.
At that point, he thinks we will be only a few years away from AGI.
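Mazlish’s “straight lines” exercise is easy to reproduce. Here is a back-of-envelope sketch; the starting horizon and doubling time are my illustrative assumptions, not figures taken from his post, so the printed dates will not exactly match his 80%-reliability estimates:

```python
# Back-of-envelope extrapolation of AI task horizons, in the spirit of
# Mazlish's "straight lines continue" exercise. The reference date, starting
# horizon, and doubling time below are illustrative assumptions.
from datetime import date, timedelta
import math

start_date = date(2025, 3, 1)      # assumed reference date
start_horizon_hours = 1.0          # assumed task horizon at the reference date
doubling_months = 7                # assumed doubling time

def crossing_date(target_hours: float) -> date:
    """Date when the horizon reaches target_hours, given steady doubling."""
    doublings = math.log2(target_hours / start_horizon_hours)
    return start_date + timedelta(days=doublings * doubling_months * 30.44)

for label, hours in [("one week", 7 * 24), ("one year", 365 * 24)]:
    print(f"{label} horizon reached around {crossing_date(hours):%Y-%m}")
```

Shifting the assumed starting horizon, doubling time, or reliability threshold moves the crossing dates by a year or two, which is the sense in which Mazlish’s 2030-2031 and 2034 figures depend on the straight lines continuing.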





I think it is a fallacy to assume that AI being able to economically out-compete a lot of human labor means explosive growth, as opposed to a slightly higher rate of growth. There is still going to be scarcity of key inputs and various important bottlenecks, some material, some conceptual (discovery, innovation), and some regulatory (including important geopolitical considerations).

Consider electricity: prices have recently gone way up, everything up and down the chain is tight, and production growth is paltry compared to demand. Yeah, "that's what booms are always like," ok, but this boom is just going to keep booming, because power = production = money. Even with big efficiency gains, AI can't explode without electricity exploding, and domestically it's not; it can't keep up. We would need 1,000 more nuclear reactors in a decade - yes, one thousand, two new reactors brought online per week - and we'd be lucky to get two total the normal way at the currently expected rate. Things are changing fast, but they can still only get physically built and operating so fast. China builds biggest and fastest, so in retrospect we'll probably say our stupid refusal to allow ourselves to keep pace is how they won the future.
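The commenter’s headline figure is simple arithmetic worth checking; a quick sketch using his numbers (the 1,000-reactor demand estimate is his, not an independent one):

```python
# Sanity check of the comment's arithmetic: 1,000 new reactors in a decade.
# The demand figure is the commenter's assumption, not an independent estimate.
reactors_needed = 1_000
years = 10
per_week = reactors_needed / (years * 52)
print(f"{per_week:.2f} reactors per week")  # ~1.92, i.e. roughly two per week
```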
Regarding AI, it is worth listening to the Dwarkesh interview with Sergey Levine:
https://www.dwarkesh.com/p/sergey-levine
Levine is a professor at Berkeley and is working on using LLM-like models for physical robots. His estimate is that in about five years there will be useful physical robots.
It doesn't really matter if these are AGI.
A robot that could be really useful in construction, agriculture, mining and other physical tasks would be transformative.