Some AI Links
Alexandr Wang wants your kids to code using AI; Julian Schrittwieser wants you to think exponentially; Ethan Mollick uses AI for a replication task; Cyril Hedoin foresees a cognitive aristocracy
A story featuring Alexandr Wang says,
MIT dropout and AI billionaire Alexandr Wang, 28, says that the key to getting ahead is learning to use AI code creation tools — and he recommends that all teens get up to speed with using them.
“If you are, like, 13 years old, you should spend all of your time vibe-coding,” Wang said on the podcast. “That’s how you should live your life.”
He mentioned that if teenagers spend “10,000 hours” getting familiar with AI coding tools, “that’s a huge advantage.”
…Wang’s remarks echo those made by Jensen Huang, the CEO of Nvidia, the world’s most valuable company. Huang said in June that it no longer matters if someone never learned how to code — “there’s a new programming language” called natural language that they can use to prompt AI.
The article does not tell the other side of the story, which I come across often: software engineers saying that AIs are not all that good at coding, really. In other news, drivers of horse-drawn buggies are saying that cars are not that good at taking people places, really.
On my post predicting that computer science will be useless knowledge, I got a lot of pushback from software engineers to the effect of “I need CS sometimes to do my job.”
I think that they are being short-sighted, if not completely blind. “Your job” may not exist if the AIs keep getting better at software development. The AI will incorporate all of the necessary knowledge. Software development will consist of prompting the AI, not solving intricate problems in designing and coding. The question is not whether CS is useful to you. The question is whether AIs will ever get to the point where your CS knowledge has no value to add. Even if we never reach that point, along the way there will be a decline in software engineering jobs where CS adds value and an increase in software engineering jobs that involve different skills, more akin to vibe-coding.
Julian Schrittwieser writes,
Given consistent trends of exponential performance improvements over many years and across many industries, it would be extremely surprising if these improvements suddenly stopped. Instead, even a relatively conservative extrapolation of these trends suggests that 2026 will be a pivotal year for the widespread integration of AI into the economy:
Models will be able to autonomously work for full days (8 working hours) by mid-2026.
At least one model will match the performance of human experts across many industries before the end of 2026.
By the end of 2027, models will frequently outperform experts on many tasks.
It may sound overly simplistic, but making predictions by extrapolating straight lines on graphs is likely to give you a better model of the future than most “experts” - even better than most actual domain experts!
Note that the straight lines are not linear extrapolations: the scale is logarithmic, so a straight line represents exponential growth. Pointer from Alexander Kruel.
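To make the method concrete, here is a minimal sketch of what extrapolating a straight line on a log-scale chart amounts to: fit a line to the logarithm of the metric, then project it forward. The numbers below are hypothetical stand-ins for a capability metric such as autonomous task length; nothing here comes from Schrittwieser's actual data.

```python
import numpy as np

# Hypothetical capability measurements (e.g., autonomous task length
# in minutes) at six-month intervals -- illustrative numbers only.
months = np.array([0, 6, 12, 18, 24])
capability = np.array([4.0, 8.5, 15.0, 33.0, 62.0])

# A straight line on a log-scale chart is a linear fit to log(capability).
slope, intercept = np.polyfit(months, np.log(capability), 1)

# A constant slope in log space means a constant doubling time.
doubling_time = np.log(2) / slope
projected_month_36 = np.exp(intercept + slope * 36)

print(f"doubling time: {doubling_time:.1f} months")
print(f"projected capability at month 36: {projected_month_36:.0f}")
```

The log transform is the whole trick: a trend with a constant doubling time shows up as a constant slope, which is why the extrapolation is a straight line rather than a linear (constant-increment) forecast.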
Ethan Mollick writes,
I gave the new Claude Sonnet 4.5 (to which I had early access) the text of a sophisticated economics paper involving a number of experiments, along with the archive of all of their replication data. I did not do anything other than give Claude the files and the prompts “replicate the findings in this paper from the dataset they uploaded. you need to do this yourself. if you can’t attempt a full replication, do what you can” and, because it involved complex statistics, I asked it to go further: “can you also replicate the full interactions as much as possible?”
…Even small increases in accuracy (and new models are much less prone to errors) lead to huge increases in the number of tasks an AI can do. And the biggest and latest “thinking” models are actually self-correcting, so they don’t get stopped by errors. All of this means that AI agents can accomplish far more steps than they could before and can use tools (which basically include anything your computer can do) without substantial human intervention.
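A back-of-the-envelope way to see why small accuracy gains matter so much, under the simplifying assumption that each step of a task succeeds independently with probability p: an n-step task then completes with probability p^n, so modest improvements in p compound dramatically on long tasks. The numbers are illustrative:

```python
# If each step of a task succeeds independently with probability p,
# an n-step task completes with probability p ** n -- toy numbers only.
for p in (0.95, 0.99, 0.999):
    for n in (10, 100):
        print(f"per-step accuracy {p}: {n}-step success rate {p**n:.1%}")

# At 100 steps: p=0.95 finishes ~0.6% of the time, p=0.99 about 37%,
# and p=0.999 about 90% -- small gains in p, huge gains in task length.
```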
I know, I know. You would never trust an AI agent. In 1910, cars were unreliable, too.
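If you want to try a rough version of Mollick's experiment yourself, here is a minimal sketch against the Anthropic Messages API. It assumes you have exported the paper and the replication data to local files and inlined them as text (Mollick used the Claude app with file uploads, which this does not reproduce), and the model alias is an assumption; check Anthropic's current model list.

```python
import pathlib
import anthropic  # pip install anthropic; needs ANTHROPIC_API_KEY set

client = anthropic.Anthropic()

# Hypothetical local exports of the paper text and replication data.
paper = pathlib.Path("paper.txt").read_text()
data = pathlib.Path("replication_data.csv").read_text()

response = client.messages.create(
    model="claude-sonnet-4-5",  # assumption: current Sonnet 4.5 alias
    max_tokens=8192,
    messages=[{
        "role": "user",
        "content": (
            f"<paper>\n{paper}\n</paper>\n\n<data>\n{data}\n</data>\n\n"
            "replicate the findings in this paper from the dataset they "
            "uploaded. you need to do this yourself. if you can't attempt "
            "a full replication, do what you can"
        ),
    }],
)
print(response.content[0].text)
```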
Cyril Hedoin writes,
Literacy will recede and, consequently, people will become less intelligent. That may be true on average, but it doesn’t mean it’s true for everyone, nor that it’s necessarily bad for society…
What is more likely to happen is the emergence of what I would call a new cognitive aristocracy: a subgroup of the general population that maintains or even improves their literacy skills, possibly with the help of AI, to fulfill relatively scarce but highly important intellectual functions in society.
That is a scenario worth contemplating. People who want to remain sharp and skilled will figure out how to use AI to improve themselves. Others will just wallow in entertainment and fall behind cognitively. This scenario is sort of a combination of Tyler Cowen’s Age of the Infovore and Average Is Over.
From the comments:
Prof. Kling, I love your work but you have a pretty uncharitable take on the software engineers who responded to your last post about the value of CS.
This isn't "horse carriage makers" objecting to the car. Many of us use AI tooling every single day and even build this tooling.
And from that daily use it is obvious that, unless something radical changes, these are not drop-in replacements for engineers. It is a lot of work to deploy AI in any vertical and to tune even the best models to a codebase. They produce buggy garbage code en masse without major guidance from their human operator. (But they are still awesome and the future of coding.)
Is it possible that the optimistic exponential solves this problem? Of course.
But it's silly to believe that a leveling off is impossible, even if it might not be happening yet.
It's also silly not to learn from the earlier deployments of deep learning in fields like radiology (more radiologists employed now, despite wide use of high-accuracy computer vision models).
And if our thoughts are marred by being "horse carriage makers" then surely the stock-option-holding Anthropic employee you quote as the authority on exponential progress curves might have some mixed incentives?
In your post you were talking about whether current college students who'll be looking for jobs in four years or less should learn CS.
"I'm going to bet that AI will be so good in 4 years that vibe coding is all you need" seems like a very high risk decision, and insuring against the possibility that you're wrong by learning some foundational CS is pretty low cost given that you're already in college.