> When the problem is soled, you experience victory.
I can't resist. Is this when you kick the printer, or throw a shoe at it?
🤣
People get mad at printers and AIs and think that frustration is about products and services. But if you've ever supervised and managed a team of human beings to get them to do the work and coordination you wanted, you end up just as frequently frustrated, maybe more so, sometimes literally yelling in their faces. Effective leadership for collective human action is also a maestro skill, and also a "thrill of victory, agony of defeat" combination of feelings.
I've called the abstract skill of working out the conceptual plan and communicating it for these scenarios "Project Elocution". Those who are particularly good at Project Elocution in some field are the maestros of their respective domains.
"if you've ever supervised and managed a team of human beings to get then to do the work and coordination you wanted them then to do, you end up just as frequently frustrated" especially in academia!
You missed one! https://hollisrobbinsanecdotal.substack.com/p/llm-poetry-and-the-greatness-question :)
It's also been my experience that you can write good code with AIs by using recursive self-evaluation.
The more powerful reasoning AIs, run at high reasoning effort, are especially good at this.
Codex is especially powerful at finding logical flaws in code and regularly surprises me with the detail it's able to catch. I'd recommend that anyone willing to try it have Codex 5.2 xhigh review the work done by Claude Code when you're finished with something, and then feed the review back to Claude so it can improve its own work.
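To make that workflow concrete, here is a minimal sketch of what such a cross-model review loop could look like, assuming the standard OpenAI and Anthropic Python SDKs; the model names and prompts are placeholders, not the exact tools or versions the commenter used.

```python
# Minimal sketch of a cross-model review loop (recursive self-evaluation).
# Assumes the standard OpenAI and Anthropic Python SDKs; the model names
# below are placeholders, not the specific versions mentioned above.
from openai import OpenAI
import anthropic

reviewer = OpenAI()              # the reviewing model (e.g. a Codex-style model)
author = anthropic.Anthropic()   # the authoring model (e.g. Claude)

def review(code: str) -> str:
    """Ask the reviewer model to list logical flaws in the code."""
    resp = reviewer.chat.completions.create(
        model="codex-placeholder",  # placeholder model name
        messages=[{
            "role": "user",
            "content": "Review this code for logical flaws and edge cases:\n\n" + code,
        }],
    )
    return resp.choices[0].message.content

def revise(code: str, feedback: str) -> str:
    """Feed the review back to the authoring model so it can improve its own work."""
    resp = author.messages.create(
        model="claude-placeholder",  # placeholder model name
        max_tokens=4096,
        messages=[{
            "role": "user",
            "content": (
                f"Here is code you wrote:\n\n{code}\n\n"
                f"A reviewer found these issues:\n\n{feedback}\n\n"
                "Revise the code to address them."
            ),
        }],
    )
    return resp.content[0].text

def cross_review_loop(code: str, rounds: int = 2) -> str:
    """Alternate external review and self-revision for a fixed number of rounds."""
    for _ in range(rounds):
        code = revise(code, review(code))
    return code
```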
@Arnold Kling Is this a mistake?
Arnold,
Automation leads to the loss of knowledge as people no longer need to learn the skills necessary to make things.
What is the future of computer / application programming if people are no longer pressed to learn the skills? What is the future of expert knowledge if expertise transitions to asking a machine?
You talk about caring about the world your grandkids will have. Does it concern you that your grandkids may live in a world where the expertise that makes things work is possessed by a vanishingly small number of people?
Does it concern you that mankind is on the path of being wholly reliant on machines? What happens to knowledge if the machines are destroyed (or even go offline)?
That is about as pessimistic and pointless an observation as can be made.
We have been reliant on tools and machines since the stone age. Crows and other animals are known to make and use tools. Is that a pending disaster?
Progress consists in part of trading some knowledge for other knowledge. Would you have humanity refuse to ever learn anything new which necessitated forgetting something older and less valuable?
90% of people used to have intimate knowledge of farming; now it's 1% or 2%. What knowledge has been lost?
Sailors used to know more about the sea and weather than almost everybody today; what knowledge has been lost?
Britons 100 years ago used to know 20 shillings make a pound, 21 make a guinea, and 12 pence make one shilling; how many today could make change or compare values in that system as easily as their ancestors? What knowledge has been lost?
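For readers who never handled the old currency, a tiny illustrative sketch of the arithmetic being referred to (the function is purely hypothetical, written only to show the conversion factors):

```python
# Illustrative only: arithmetic in the pre-decimal British system
# (12 pence = 1 shilling, 20 shillings = 1 pound, 21 shillings = 1 guinea).

def to_pence(pounds=0, shillings=0, pence=0, guineas=0):
    """Convert an amount in the old units to pence for easy comparison."""
    return (guineas * 21 + pounds * 20 + shillings) * 12 + pence

# Which is worth more: 5 guineas, or 5 pounds 4 shillings?
print(to_pence(guineas=5))              # 1260 pence (i.e. 5 pounds 5 shillings)
print(to_pence(pounds=5, shillings=4))  # 1248 pence
```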
Lack of expertise created the Flint, Michigan, water crisis.
Lack of real-world knowledge is creating an electricity supply crisis in the US as politicians blindly plow ahead with ideas that simply don't work.
Europe is facing the same issue of elevating ideology over reality. This is happening not just due to politics but because people with political power and those who support them lack understanding of how things actually work.
No, government bureaucrats created both those problems, because they do not want to solve the problems that created their jobs.
If government is involved, one need look no further for the source of the problems.
Read the Wikipedia entry for Love Canal. Or read this longer report: http://reason.com/archives/1981/02/01/love-canal/singlepage
“I think that AI will bifurcate the software development world. There will be amateurs like me, who do vibe coding. And there will be a new form of professional developer who conducts coding agents the way a maestro conducts an orchestra.”
Great observation that aligns with my experience: in my workplace there are people who are trying to use LLMs to automate or replace people, and then there are others who use them to outsource specific, discrete steps in their workflows that are time-consuming but relatively low-value (that is, in the rough category of “the configuration problem”). The results of the former group are pretty unsatisfying and uneven, whereas the results of the latter group can be pretty remarkable in terms of time-to-high-quality results. The first group is really focused on low-level work performed by relatively low-skill people, whereas the second group tends to be made up of highly experienced professionals. I don’t quite know what conclusions to draw from this, but it’s a phenomenon.
Consider adding Nathan Lambert's Interconnects substack to your list of AI readings - he has a very good piece today on using multiple models: https://www.interconnects.ai/p/use-multiple-models
Good stuff. And there is an interesting paper here on AI impacts: https://hugoreichardt.github.io/pdf/tstc_compadvantage.pdf
Good post.
Solved, not soled.
"This is one the side of the vibe-coding approach,"