Discussion about this post

John Alcorn

Arnold, thanks for the simple, incisive explanation of the real economics of Social Security and Medicare benefits.

Mike Maletic

On Smil’s article: I don’t agree at all. My brother and I were having a conversation last night about a scene in the 1970s Invasion of the Body Snatchers that involves pressing the “button” of a receiver on an old-school telephone and leaving the phone “off the hook” so a person couldn’t call again. This sequence of events completely baffled my teenage niece. If you went back to the 1970s and asked, “What part of the movie will people be confused about in 2023?” they would never pick the telephone. But in fact, the “phone” qua phone will be a remarkably short-lived technology in the grand scheme of human technological progress, and what our children and grandchildren will call a “phone” bears only a passing resemblance to its namesake.

And this is my problem with the entire premise of the article: he cherry-picks areas that match his conclusions and proceeds from there. But we cannot know which of today’s technologies will make a significant and lasting impact on tomorrow, and which will vanish. My experience, in my lifetime, is that human technical achievement accelerates at a phenomenal pace. The world that my children, now teenagers, were born into would be foreign to them today: no iPhone, no iPad, no TikTok, no Netflix, no electric cars, no mRNA vaccines, no SpaceX reusable rockets with synchronized landings, no video calling, no AI chatbots, etc.

And his semiconductor comment is pure trash, I have to say. That’s the sort of comment you’d expect from someone with a New-York-Times-tech-journalist level of understanding of the technology and the related industries. Sure, the size of the features you can etch onto a silicon wafer is approaching the point where, you know, there just aren’t many atoms left to play with anymore. But the true innovation, the semiconductor itself and the idea of successive lithography shrinks as a means to improve performance, was understood half a century ago, and the industry has been riding that horse ever since. The fact that a very good (but old) idea is bumping into physics is not a sign of slowing progress or innovation.

To some extent, the semiconductor industry and the industries that benefit from semiconductors have been able to rely lazily, year after year, on the steady improvements in power/performance delivered by lithography shrinks. So now what do we do? Well, the very simple answer is make more and bigger chips. The fact that you can only pack so many billions of transistors into a square millimeter doesn’t mean you can’t add more square millimeters to get more billions of transistors. It doesn’t mean you can’t put another chip next to the first chip. Other problems emerge (heat, losses in chip-to-chip communication, more complex packaging, and so on), but those are new opportunities to improve performance. And the incredible amount of money being invested in expanding semiconductor manufacturing capacity around the globe, perhaps for geopolitical reasons but maybe with collateral benefits, should make all of this silicon cheaper than ever before.
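To put a rough number on that argument (everything below is a hypothetical back-of-the-envelope sketch, not data about any real chip), the total transistor budget still scales with die area and with the number of dies in a package even if density never improves again:

```python
# Hypothetical back-of-the-envelope arithmetic: with transistor density frozen,
# total transistor count still grows with die area and with the number of dies
# ("chiplets") packaged together. All figures are illustrative assumptions.

density_per_mm2 = 100e6   # assumed transistors per square mm
die_area_mm2 = 600        # assumed die size
dies_per_package = 4      # assumed chiplets in one package

transistors = density_per_mm2 * die_area_mm2 * dies_per_package
print(f"{transistors:.2e} transistors per package")  # ~2.4e11 with these numbers
```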

You can also change the functionality that you design onto a chip to improve performance. Because of lithography shrinks, the best vector for performance improvement has been the general-purpose computing device, programmed through software to execute specific tasks and shrunk as quickly as possible to a smaller transistor size. But that doesn’t have to be the case at all, and we will see more and more custom semiconductor development and purpose-built semiconductor designs going forward, because design will become a better vector for improved performance than simple lithography shrinks. The biggest issue here is that students aren’t studying semiconductor design in large enough numbers, but that presumably will evolve.

And he also doesn’t take into account what people *do* with the semiconductor capacity they already have. First, the centralization of massive computing power in the cloud means that each cycle of compute is available to be rented out when it’s needed, rather than sitting idle in a corporate datacenter. That on its own represents a tremendous improvement in how efficiently we use what we already have. Second, what programs are you going to write to run on the computing capacity you already have? It matters a lot. People will use the existing resources to do ever more interesting things, the explosion in AI/ML applications being the most hyped (probably justifiably).
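As a minimal sketch of that utilization point (the tenants, workloads, and numbers are invented purely for illustration), a shared pool sized for aggregate demand needs far less hardware than dedicated machines sized for every tenant’s individual peak:

```python
import random

random.seed(0)

# Hypothetical: 20 tenants, each needing up to 100 compute units at peak but
# far less on average. Dedicated hardware must cover every tenant's peak;
# a shared (cloud) pool only has to cover the combined demand at any moment.
tenants, peak = 20, 100
dedicated_capacity = tenants * peak

combined_demand = []
for _ in range(1000):  # simulate 1000 time slices of independent demand
    demand = sum(random.randint(0, peak) for _ in range(tenants))
    combined_demand.append(demand)

pooled_capacity = max(combined_demand)
print(f"dedicated: {dedicated_capacity} units, pooled: {pooled_capacity} units")
```

With these made-up numbers the pooled figure comes out well under the dedicated total, which is the efficiency gain from renting cycles on demand rather than leaving them idle.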

So, in sum, I’ll bet on progress. The question should be, rather, how do we bring greater innovation to areas where there could be tremendous returns for humanity, e.g., energy production, education, human health and well-being, and so on? How do we uncover the lessons of areas where profound progress has been made (the semiconductor industry, for one) and transfer as much of what we learn to other areas where progress is needed? That’s the interesting question, rather than bemoaning a supposed “lack of progress” based on some analysis of patents. Please!

