21 Comments
Dave Friedman:

Looks like the link is broken. This appears to be the correct one: https://www.nationalaffairs.com/publications/detail/ai-era-computing

Roger Sweeny:

Yes, it is. Thanks.

Peter Saint-Andre:

Not only is the original link bad, it even seems to be some sort of attack (or at least a worrisome copy and paste).

Peter Saint-Andre:

Thanks for a reasonable, balanced discussion of the issues.

As to this: "A good writer who cannot explain his process will be frustrated by models' inability to meet his own standards of composition. An average writer who can give good meta-instructions will be spectacularly more productive."

It seems more likely that those who are really good at writing or any other task already understand and thus can explain their processes, whereas those who are average cannot. The implication is that those of middling insight and intelligence will be displaced from their jobs as writers, programmers, graphic designers, researchers, radiologists, etc.

Andy G:

I disagree, at least with the strong form of what you write.

As a trivial example, those with excellent skill at using spreadsheets but unable to multiply two four-digit numbers in their heads are in fact far more productive than those who can do that multiplication but never master the use of spreadsheets.

Those of middling physical strength can move more rocks with a bulldozer than the strongest man in the world using a shovel.

What Arnold is suggesting here is more of the same.

More sophisticated tools, properly understood and used, will yield superior results to those produced by people who were more capable in a world of less sophisticated tools but who don't properly take advantage of the new ones.

Peter Saint-Andre:

Your experience differs from mine. I have never known an amazing programmer or writer who didn't intensely study the history and theory of their craft, and thus deeply understand what they were doing. It's the midwits who have the most to fear from AI.

Andy G:

We are only partly disagreeing.

The most capable people who learn the newest tools will of course be the most productive.

The least capable people will not be able to learn the new tools - at least not well - and will do poorly.

But within the midwits there is a fairly broad range, IMO. And I'm with Arnold that those of them dedicated to learning new tools will do better than those with greater natural intelligence who do not.

So I don't disagree with you that tools alone won't make the average midwit “amazing”, but I think they could indeed bring the dedicated ones to the 80th, or perhaps even the 90th, percentile compared to *today* sans tools.

That portion of midwits I believe can do very well.

Most otherwise brilliant AI-Luddites IMO will not do as well.

But I am not suggesting that an AI-enhanced midwit would make it to the 99th percentile. So perhaps we are talking apples and oranges.

Peter Saint-Andre:

Prediction is hard, especially about the future. :-)

Your nuanced guesstimates seem reasonable. There will be many factors involved, paths might diverge based on industry (e.g., because of regulation), etc.

Interesting times!

Peter Saint-Andre:

P.S. I'm pretty much an AI Luddite (brilliant or not isn't for me to judge): https://beautifulwisdom.substack.com/p/ai-and-i

Andy G:

Art is indeed different.

And not having any of one’s own art directly generated by “AI” is one thing.

And I'm surely not interested in reading AI-generated Substack-type pieces today, nor any time soon.

Though I can conceive of this being possible further in the future.

Especially if the output is customized for me.

…which is in fact what LLMs do today - although not yet spectacularly - when given a set of well-crafted prompts.
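
For what it's worth, here is a minimal sketch of what that kind of customization looks like mechanically today, assuming the OpenAI Python client; the reader profile and prompt wording are invented for illustration, not anyone's actual setup:

```python
# Minimal sketch: rewrite the same source text for a particular reader.
# Assumes `pip install openai` and an OPENAI_API_KEY environment variable;
# the profile string and prompt wording are hypothetical examples.
from openai import OpenAI

client = OpenAI()

reader_profile = "an economist who is skeptical of AI hype"
article_text = "..."  # the piece to be customized for this reader

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "system",
            "content": (
                f"Rewrite the user's text for {reader_profile}. "
                "Keep the author's claims intact; change only emphasis and framing."
            ),
        },
        {"role": "user", "content": article_text},
    ],
)

# The customized rendering of the same underlying piece.
print(response.choices[0].message.content)
```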

But back to you, not having the output directly generated by AI is different from being a full-on Luddite.

I strongly suspect you will take advantage of AI-based tools to continue to learn about the world and inform your output.

Just as we both know you take advantage of digital tools and the Internet to do so today. 😏

Doctor Hammer:

I hesitate to agree with the statement that those who are really good at a task can explain their process. I find it is often the case that being really good at something almost requires gaining a very strong set of implicit knowledge, which itself cannot be explicated to someone else. It is the experience of "just knowing" how to do something that seems impossible, or having those hunches one can't explain via the bits of explainable evidence others can see. That seems to be what takes most things from a science, where you go through certain steps and get an expected outcome, to an art, where you just have to have the sense of how to do it.

Alex:

This doesn't seem to be true for literally any famous high-skill writer I know!

cdh:

I don't want to underestimate AI and robotics, but if you watch any YouTube construction video, it's hard to believe AI will be able to do those types of physical tasks anytime soon. Of course, I've been proven wrong many times before.

Tom Grey:

Great job, Arnold, in explaining your thesis of LLMs as a computer interface, though only David F's link worked. You're right that users of AI as tools will soon be the dominant producers/investors.

Thanks for noting the prophetic Young Lady's Illustrated Primer as a likely model for personalized AI tutors. It will likely be done first in English as a Second Language, the single most-paid-for subject of education in the world, though mostly outside the US. The EU's CEFR framework has six levels: A1, A2, B1, B2, C1, and C2.

There already exist many lessons, and tests to determine mastery. When I was at IBM, I wanted Watson* to start teaching English; they chose psychology instead.

The student starts out with limited known knowns and a whole other language's vocabulary of known unknowns to learn. In all subjects, but more so in any second language, the students' unknowns, i.e. their questions, are known to the teacher/tutor.

Later, in college and in other subjects, there will be an increasing number of questions the teacher doesn't have the answers to. But in asking the question, like "Is faster-than-light travel possible?", it becomes a known unknown. The hope for AGI is to find true answers.

K-12 will allow AI tutors to focus on increasing students' knowledge so they learn more, faster, when motivated. Health care improvement might be slower, because the knowledge isn't already there: what is wrong with this patient, and what is the best treatment for this patient? Those are two known unknowns for the experts, though AI might quickly become great at diagnosis, i.e. what's wrong.

On tech, I expect to see very limited AI agents that fit in phones and serve their masters without often needing to access a huge, expensive, general-purpose AI. Agents, i.e. servants, are what lots of folks want, and will pay for.

Most economic rewards will go to owners of capital, and most folks don't own much capital. We will need more capital redistribution, through new taxation policies.

Andy G:

I like the National Affairs piece very much, even if it mostly covers ground we have read on this Substack.

Re: education and especially healthcare, it seems to me that structurally it would be possible to get a lot higher productivity, via increases in education quality and healthcare quality (both quality of life and longevity), without there necessarily being massive reductions in the overall workforces in those sectors.

Even in this rosy (for employment) scenario, some jobs would be eliminated, and most jobs would change, often significantly.

But the point is that merely modest reductions in employment - and spending - in these two sectors combined with major improvements in output quality would deliver significant improvements in national and global living standards.

I’m not suggesting what I describe is probable or even likely; merely that it is distinctly *possible*, while rarely being discussed.

Invisible Sun:

Concerning education and healthcare, I believe software could transform those industries, and could do so immediately. The obstacle is greed. In healthcare, the gatekeepers care about maintaining their profit margins. So they welcome technology, but only when it can be priced such that it sustains the elevated cost levels that exist in the industry!

What is needed to transform healthcare is for government to enable new entrants into health care who are not beholden to the legacy cost structure. Having Microsoft and Oracle sell expensive software licenses doesn't do jack; it simply adds to the costs of healthcare!

With education, we will see the university system transformed as society comes to realize that expert knowledge can be obtained without going to college. But public education shows no promise. Public education is not about education, and has not been for a long time. Software will not make public education better! What we can hope is that software and policy changes will allow people to get their kids educated outside the public education system, and then the public education system can die.

Roger Sweeny:

Oh, please. Public education doesn't work because it is trying to do the impossible. It is trying to take 5-18 year olds and get them to learn things, most of which they have little intrinsic interest in. It can get them to remember enough to pass a test after the unit, but not to remember much past that because they just don't care and they don't see it as useful in their life ("Mr. Sweeny, when am I ever going to use this?").

Any educational system, with any technology, is going to smash up against that wall and fail. Until we re-define education as something other than remembering "four years of English Language Arts, four years of social studies, three years of science, ..." we will continue failing.

Tom Grey:

Mostly true, though I'm pretty sure more significant external motivation would get a lot more kids to learn a lot more. Something like $10/day in student awards available for learning lesson chunks, with bad behavior causing immediate loss (about $2,000/yr).

I’ll continue to believe this would be among the optimal motivations for avg IQ students until some city or state tries it and it fails.

The real impossibility is the attempt to augment gene-limited IQ so that everyone is above average and college-ready.

Andy G:

“The obstacle is greed.”

There are many obstacles. But particularly in healthcare, if greed is on the list, it is way down said list.

The biggest problem in healthcare is government regulation. We have nothing like a free market in healthcare.

The biggest problem in education is probably government, too: the fact that it runs the schools with a quasi-monopoly in most places, and the fact that it massively subsidizes university education, having taken over student loans. Though in K-12 education, I'll agree that greed, in this case that of the teachers' unions, is no doubt a factor. But not the biggest one.

If you think that “greed” is a major factor in healthcare, given that you are reading a blog by an economist, I suggest you go learn some more about Adam Smith.

Invisible Sun:

Mr. Kling's article is well written and informative, and I believe it accurately reflects how computers enable human productivity. Where the massive investment in CPU power matters is in having software that can scale to near-infinite levels of analysis and draw on an ever-increasing amount of data.

https://www.nationalaffairs.com/publications/detail/ai-era-computing

However, computer software is a model of reality. Where the AI hype misses the mark is in failing to appreciate that the mismatch between the model and reality is amplified the more one relies on the model to emulate human activity. So the last gap that AI needs to close in order to "replace humans" may never be closed. The nuances of human experience are such that getting the model right will prove very costly and frustrating.

And why? This is my complaint and criticism of the AI hypers. Why do you want to replace humans? Sure, we want machines to make human effort more productive. But how is "replacing humans" a desirable goal and what does that even mean?

Technology has allowed humans to become much more productive at making food. Why make food? Because humans eat food! Technology has greatly enhanced the ability of humans to communicate with each other. Why have they done this? Because humans like to communicate!

Every useful technology exists because it is useful to humans! So when the AI hypers talk about replacing humans it occurs to me that they don't understand that without humans technology is not useful and not needed!

My prediction: the LLM bubble will collapse. What replaces LLMs will be software that seeks not to "replace humans" but aims to provide humans the information and solutions that are useful to their growth and progress.

Alexander van Olst:

The link to the article doesn't seem to work for me.
