31 Comments
zach.dev

Prof. Kling, I love your work but you have a pretty uncharitable take on the software engineers who responded to your last post about the value of CS.

This isn't "horse carriage makers" objecting to the car. Many of us use AI tooling every single day and even build this tooling.

And from that daily use it is obvious that, unless something radical changes, these are not drop-in replacements for engineers. It is a lot of work to deploy AI in any vertical and to tune even the best models to a codebase. Without close guidance from a human operator, they produce buggy garbage code en masse. (But they are still awesome, and they are the future of coding.)

Is it possible that the optimistic exponential solves this problem? Of course.

But it's silly to believe that a leveling off is impossible, even if it might not be happening yet.

It's also silly not to learn from the earlier deployments of Deep Learning in fields like radiology (more radiologists employed now, despite wide use of high accuracy computer vision models).

And if our thoughts are marred by being "horse carriage makers" then surely the stock-option-holding Anthropic employee you quote as the authority on exponential progress curves might have some mixed incentives?

gas station sushi

“We decide what to believe by deciding who to believe” is a famous meme from Professor Kling.

I don’t have a horse (or buggy) in this race, but I’m thinking that those who are *actually* engaged in the profession of software development are probably more credible in assessing the quality of AI in a software development environment than the uncharacteristically cocksure Kling who probably hasn’t coded anything of significance since the 90s.

Thus, I’m with the software developers here over cocky Kling.

Lex Spoon

In our strange times, it does not feel like too much of a stretch to call this a time of dreams. You can increasingly think of something and have a version of it show up that is at least passable.

(Assuming, that is, we can agree to enjoy it rather than to kill each other.)

It is interesting to think about how to go beyond "passable". Arnold, your expertise and way of thinking are invaluable for extrapolating how this can go. When I try, it seems like a human is helpful in places where the marginal improvement is to make something better for humans, or for a particular human. This is where a person has a comparative advantage.

If we start from there, that the jobs for a human have to do with being inside a human head, then that's not enough by itself. That advantage has to be translated into a better result.

It seems like that has to come from giving guidance to an AI, which is where all the stuff about knowing the domain is valuable. You can get a good meal at a restaurant by just saying "feed me, please", but you'll get a better result if you know that dessert is separate from the main meal. And if you know more, you can do better at asking for more--though at the risk of micro-managing.

For software, the building blocks are things like paging, data structures, core algorithms like sorting, RPC calls, clustering, and several practices related to security. You get 80% of those by taking that database implementation class that started all of this discussion. You get almost the same 80% from a compiler class or an operating systems class.
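(A concrete, hypothetical illustration of my own--not anything from that class--of the kind of building-block knowledge at stake: a minimal Python sketch of the same duplicate check written two ways. Knowing the data-structures building block is what lets you ask an AI for, or recognize, the second version instead of settling for the first.)

```python
# Illustrative only: two functionally identical duplicate checks.

def has_duplicates_naive(items):
    # O(n^2): compares every pair; passable on small inputs, painful at scale.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False

def has_duplicates_with_set(items):
    # O(n): a set gives average constant-time membership checks.
    seen = set()
    for item in items:
        if item in seen:
            return True
        seen.add(item)
    return False

if __name__ == "__main__":
    data = list(range(1000)) + [0]  # one duplicate at the end
    assert has_duplicates_naive(data) == has_duplicates_with_set(data) == True
```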

To come at this from a different domain: the sustain pedal on a piano makes simple playing even simpler, but it makes top-end playing harder than it ever was. It feels like AI must go this way for existing human jobs that continue to exist in some form.

And as for whether piano players have a future--not that it's a great job right now :)--it will have to do with the idea from xkcd from a long time ago. Humans will have an edge when doing a good job means needing to be inside a human's head. It's the only thing only we can do.

https://www.explainxkcd.com/wiki/index.php/1002:_Game_AIs

Similarly, why ever listen to a live musician, especially a second-tier one, when perfect recordings exist? There is something special about interacting with another human, and it seems like this is where humans will find value in hiring each other to help out.

Cinna the Poet

In your post you were talking about whether current college students who'll be looking for jobs in four years or less should learn CS.

"I'm going to bet that AI will be so good in 4 years that vibe coding is all you need" seems like a very high risk decision, and insuring against the possibility that you're wrong by learning some foundational CS is pretty low cost given that you're already in college.

Cinna the Poet

If things get to the point that you're talking about, then they should stop requiring students to learn foundational CS.

Yancey Ward

Who should stop requiring students to learn foundational CS? If Kling is right, employers would be correct not to require it--but should schools like Cal Berkeley do the same?

Yancey Ward

This gets to the heart of what I claimed in the earlier post--that Jain and Kling were arguing about different topics. Jain was defending the teaching of the fundamentals of CS in order to earn a CS degree from the university, while Kling was arguing that future workers in industry won't need CS degrees to do CS work. Both might well be correct, but Jain certainly is: if you want the degree, demonstrate that you understand the intellectual foundations of the subject.

Slowday

It should be noted that there are already plenty of working programmers with incomplete, poor, or no degrees at all. They may therefore not know the finer points, of course. Then again, every 5 years or so the industry paradigm, if you will, changes to a lesser or greater degree, and one has to learn all over again or get discarded.

Yancey Ward

I worked in a field where everyone had to have the college-issued degree--synthetic organic chemistry--so I am mostly ignorant of what companies require of CS workers.

Slowday

CS, or rather, IT is a crazy field and the formal requirements depend on what you're working with. Often job listings require certain technologies or programs rather than anything deeper. There has always been a great appetite for cannon fodder. At the moment, the appetite seems small, however.

One might liken it to being a trader on Wall Street (of the old school): if you make money, nobody cares about your background.

Cinna the Poet

"They" meaning the CS faculty designing their majors.

Todd

The scenario sounds a lot like "The Diamond Age."

I'm also a believer in the widespread shift to vibe-coding. Perhaps it's only good enough for toy projects, and critical systems of huge scale will still require close supervision and maintenance from a team of brilliant coders. Perhaps, also, we will find that as more and more people are able to create "toy projects" for their precise needs, there are fewer and fewer critical systems of huge scale that people care to have maintained.

CW

Nassim Nicholas Taleb's dyad of Mediocristan and Extremistan seems marginally useful when a billionaire says everyone should live in Extremistan. At least Taleb said you should live in Mediocristan if you want to avoid being in an invisible graveyard where you have wasted your time chasing the ends of rainbows.

I know next to nothing about computer science and can't see the future. My preferred metaphor would be the farrier. The US horse population peaked at about 25 million in 1910 and had fallen to 7.6 million by 1950. You probably shouldn't have bet on a career as a farrier in 1909. That said, if you really love metallurgy and horses, you can get a job as a farrier today. You might even make a reasonably good living servicing rich folks' horses.

Although the horse population itself went through an incredible boom--a Jevons paradox--with the advance of the railroad. It wasn't until the automobile that both the railroad and the horse were cast aside, and yet both are still in use today. I sometimes try, badly, to think about AI/LLMs as a general-purpose technology within this framework and to ask what will boom and then eventually be cast aside. Perhaps CS jobs will be one such case. Not sure I would personally bet on it. Changes in time and their speed are a funny thing.

I suppose the metaphor falls down immensely, and is complete crap, if the amount of computer science explodes, whereas the number of horses was always going to fall with technological advancement. And the entire categorical framework is wrong if it treats these as transportation general-purpose technologies, whereas these new general-purpose technologies are more cognitive--akin to reading and writing itself as GPTs.

Kash

The idea that teens should spend 10,000 hours vibe coding seems insane to me; every demo I see is an app that is impressive at first but that no one actually uses. Obviously it will get better, but why would all this practice create a valuable skill?

Yancey Ward

The ability to "create an app that is impressive but no one actually uses" can be used to create an app that is impressive and people use.

Roger Sweeny

"This scenario is sort of a combination of Tyler’s Age of the Infovore and Average is Over."

And Michael Young's The Rise of the Meritocracy (1958). Young was an English left-wing sociologist. In the meritocracy, IQ + effort = merit. People like that become rich and powerful. Those who aren't know they can never be either. He saw moral and practical problems with this.

Being a smart person, I did not like the book when I first read it. I might think better of it now.

Scott Gibb

There will be many more inaccurate predictions about LLMs than accurate ones. Likewise, there will be more pessimistic predictions than realistic ones. So let me mention positive aspects of LLMs, to steer our thoughts toward accuracy and realism.

The best part about LLMs is that they help us learn things more quickly and easily. I want people to have more knowledge so the world is a better place, and the best way for people to learn is through self-learning. People are much more likely to trust an LLM than me. I can't control how people learn, how many books they read, what they read, or how they spend their time. I have a few dozen active subscribers--almost no impact on the world outside my family. Arnold has over 10,000. But an LLM might have hundreds of millions of users. Each of these users faces unique problems that can easily be solved using LLMs. That's great news.

If the cost of learning falls dramatically because of LLMs, things will probably get better; politics might get better. Wouldn't you love to see politics get significantly better? Here's one example. Say I'm getting ready to vote and I have questions about a candidate. I ask my favorite chatbot questions, I get answers, and I vote better. Multiply this scenario by millions and my family seems significantly better off. And it didn't cost me much--maybe anything.

So I see a more orderly world coming. And if we're living in a more orderly world, we'll likely have more time to read books. So, I'm optimistic about LLMs.

Can we come up with a new name for it yet? LLM is clunky. AI seems inaccurate. Chatbot seems too unsophisticated.

Next steps: let's spread the good news about this technology. Let's get better at using it and teach people how best to use it. Perhaps the most important thing we can teach about using LLMs is when to accept their suggestions. When I use one to edit my essays, I face dozens of small editorial suggestions. Do I accept this one? Yes, if I think the suggestion is better. No, if I think the suggestion is inauthentic. I want my essay to be unique to me. So we should discuss goals of authenticity and fidelity. We want to use AI to better our work and improve our learning, while maintaining fidelity and authenticity to who we are. How important is it to you that your work is unique? To be unique is to have faith in yourself. So perhaps most importantly, we want our children to be themselves--confident in who they are, poised, yet open to being wrong, open to improving and learning. This seems like a good problem to have.

MikeW

Your example of voting better by asking a chatbot questions seems off to me. Politics is one of the weak points for chatbots because of all the political bias they are trained on (and the political bias of the people developing them).

Scott Gibb

Okay. I have a feeling that Arnold Kling and Tyler Cowen and Russ Roberts and Mike Munger could develop a fairly unbiased chatbot about Donald Trump and Kamala Harris. Maybe you could too? If you can do it, then others can too.

MikeW

I think it's like Wikipedia... it's a very good reference for a lot of things, but not dependable when it comes to politically-charged topics.

Scott Gibb

But you can change it. Can't you? You, and I, and maybe others. We can decide to change it.

Chartertopia

Change Wikipedia's ingrown bias? I tried updating a couple of small articles with good references for the corrections, and got flailed for stepping out of my lane. They weren't even political articles or political corrections. Someone "owned" those articles and did not like "outsiders" messing with his work. I no longer remember what the articles were, and haven't touched any since, not even for grammatical corrections.

MikeW

Good luck to you. I'm feeling like the old dog who can't learn new tricks.

Scott Gibb

Ha. And I don't care enough I suppose. Good luck to whoever attempts to change it.

MikeW

I don't think any of them are working on developing chatbots. I believe the actually existing ones have been shown to be politically biased. Usually towards the left, but maybe there are exceptions?

Tom Grey

I want my college freshman son to do more vibe-coding, but he wants to study CS. More useful than history, tho he enjoys spending hours daily on computer WW II & medieval war games. The prof gives problems, son exercises his mind (in the old-fashioned way) and solves them--I'm happy he's exercising his thinking. Yet 10,000 hours is almost certainly wrong; spending more time doing other things, learning how other people live, & even practicing doing what the prof wants is better preparation for a future where one is doing what the boss wants.

A huge amount of the valuable stuff that programs can do has been done, and now needs maintenance & updating. So far it's hard to get ai to do that, but ai-built replacement is surely coming for some of the old programs, and then more of them. My guess is that some expert vibe coders, along with older domain experts, will be on small Agile team projects to replace/recreate the old still-functioning programs.

Those now in CS willing to also deepen their understanding through careful self-coding will have some advantages; those willing to vibe code with ai to get bug-free code that solves the problem will have different advantages. The latter will likely be more in demand from bosses who have the more usual problems, while the code-guru folk will still occasionally be very, very valuable for solving the problems the ai fails to solve. But this is a shrinking subset. The most valuable of the gurus will be those able to solve the really tough problems, which seldom occur, along with doing other, more usual productive work that includes ai agents.

The ai agents will always also be tools, and the ai-driving tool users are likely to do very well.

In MBTI terms, a lot more useful programs are going to be made by high-IQ but non-abstract SP tool experts who enjoy the power of using the tools--very concrete, non-theoretical (nor likely elegant) programs that do the work.

Richard Heyduck

I'm neither a software nor an automotive engineer, merely an end user of each. I suspect it is simpler to identify an "unreliable automobile" than an "unreliable AI coder."

Slowday

How soon until an AI can set the Fed interest rate at least as well as the usual collection of humans? And will that be before or after CS has been delegated to the trash heap? Any bets?

John Hall

I do not think drivers of horse buggies are a good comparison for software engineering. A better comparison is a high-skill tailor making bespoke suits versus a lower-skill person with a sewing machine making everyday clothes. We still have high-skill tailors, but maybe fewer than before. And we have lots more clothes than we used to.

BenK

Aristocracy is an interesting term. I'm not quite sure that this will fit because of the concept of inheritance. What will be the roles of imagination, charisma, effort, connections, locality, ... and what will be the Z factor? For a while, it was the ability to 'empathize' with men, or written materials, or with numbers, or machines, and then with computers, and statistics... will we have a separate skill for working with AI, or will it simply make some other skills fall off and allow another one to bubble up?

Invisible Sun

"People who want to remain sharp and skilled will figure out how to ... to improve themselves."

Kling is literally applying old-school wisdom to the modern day. Is it true wisdom? It can't hurt. Yet it does make one wonder whether "AI smart" will be the 21st-century version of the 20th-century label "book smart."
