29 Comments

What was hard to define, thus hard to automate, and thus a comparative advantage for humans, was precisely the capacity to understand and work with humans. This is the last nail in the coffin of those careers.

"Well then I just have to ask why can't the customers take them directly to the software people?" - "Well, I'll tell you why, because, engineers are not good at dealing with customers." ... "What would you say you do here?" - "Well--well look. I already told you: I deal with the god damn customers so the engineers don't have to. I have people skills; I am good at dealing with people. Can't you understand that? What the hell is wrong with you people?"


I wonder how long it will take for a teenager to make a movie with the production value of Avatar, or a comparable big-budget film, using just a home computer and some AI. I'm thinking of those graphs showing the declining cost of some consumer good, like computers or VHS players in the '90s, and that in time we'll look back at how it took $240 million to make Avatar in 2009, while by 2029 or 2039 that kind of production could be available to most people with a computer.


Homework produced by and graded by LLMs. What could possibly go wrong?


Arnold

Interesting to use this to analyze the “Sermon on the Mount”.

Thanks


Grading and feedback are certainly big aspects of teaching that a good LLM can help with.

In Slovakia, most BS, BA, and PhD testing is done with oral exams. My professor wife is not happy about the idea, but in the near future the written work of students will be converted by an LLM into a set of questions that the author of the paper should be able to answer: questions drawn directly from the work or its references, as well as reasonable speculations beyond them. The examiners will then ask the questions and grade the answers, or let the LLM grade.
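A minimal sketch of that question-generation step in Python. The function names here are mine, not any real exam tool's, and the actual LLM call is omitted since it depends on whichever model API the university adopts:

```python
def build_exam_prompt(paper_text, n_questions=5):
    """Build a prompt asking an LLM to generate oral-exam questions
    that the paper's author ought to be able to answer."""
    return (
        f"Read the following student paper and write {n_questions} "
        "oral-exam questions its author should be able to answer, "
        "drawing on the text itself, its references, and reasonable "
        "speculation beyond them. Number each question.\n\n"
        + paper_text
    )

def parse_questions(llm_response):
    """Pull numbered questions ('1. ...') out of the model's reply."""
    questions = []
    for line in llm_response.splitlines():
        line = line.strip()
        if line[:1].isdigit() and "." in line:
            questions.append(line.split(".", 1)[1].strip())
    return questions
```

The examiner would feed `build_exam_prompt(paper)` to the model, then read the parsed questions aloud during the oral exam.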

I’m slowly going through Quantum Country, the ML Q&A online book by Andy Matuschak; machine-learning assisted, not yet AI (I think). An interesting step away from current linear books, but not yet too close to An Illustrated Primer, just as no LLM is yet talking like HAL 9000 (not 2000). Written chat is not quite yet the superpower of talking back and forth, but it’s a huge step, maybe 80%.

Managers using internal LLM assistants will soon be far more productive than those who don’t, which will drive a big increase in the search for good tutorials on how to use LLMs.


In War and Peace, Tolstoy wrote of “the impossibility of changing a man’s convictions by words.” Regrettably, I find this true of myself. Dr. Kling’s optimistic “we should see some pretty spectacular changes” does not shift my profound pessimism about the future of the United States.

I think I would be willing to bet that whatever spectacular changes we do see, they will not include any positive changes in areas subject to the 1st Estate establishment control nor will they include improvements in the quality of life for the 3rd Estate or the burden of government that they bear.

Some possible bets to consider:

(1) (a) The latest GDP per hour worked figures reported by the OECD (https://data.oecd.org/lprdty/gdp-per-hour-worked.htm) on December 9, 2025 will show that the current USA figure of $107.16 will not have grown to $112 or higher, and (b) the USA will continue to lag Hungary (current figure $116.57) by at least $5.

(2) (a) The USA PISA math score (465 in the 2022 reported results) (https://www.oecd.org/pisa/OECD_2022_PISA_Results_Comparing%20countries%E2%80%99%20and%20economies%E2%80%99%20performance%20in%20mathematics.pdf) will not improve to 475 or better in the 2025 test's results. (b) The UK (2022 reported math score 489) will continue to exceed the USA math score by at least 15 points on the 2025 math test.

(3) In the 2025 President’s Budget, table 4-1 in the Historical Tables volume will show that not one federal department (rows 6-20) will show a decrease in their 2028 outlays estimate relative to the 2023 table 4-1.

(4) In their 2025 GPRA Budget performance plans, not one federal department will plan or attempt to achieve cost savings of 5% or more of their annual outlays through implementation of AI or LLMs.

(5) According to the Bureau of Labor Statistics, the average hourly cost of employee compensation for civilian workers in the US was $43.26 in June 2023. This includes both wages and salaries, which accounted for $29.86 per hour worked, and benefit costs, which accounted for $13.39 per hour worked.

The total employer compensation costs for private industry workers averaged $41.03 per hour worked in June 2023. Wages and salaries accounted for 70.6% of employer costs, while benefit costs accounted for the remaining 29.4%.

For state and local government workers, the total employer compensation costs averaged $58.25 per hour worked in June 2023. Wages and salaries accounted for 61.6% of employer costs, while benefit costs accounted for the remaining 38.4%.

So looking at the gap between the state and local first estate and the private-industry third estate ($58.25 – $41.03 = $17.22, or about 42% more), (a) I might bet that the gap persists and in June 2025 is no less than $16, and 30%. (b) 1st estate compensation will persist in the form of unfunded benefit cost commitments, with the benefit share not declining below 35%.

Of course I hope I am wrong, but I just don’t see LLMs adding to quality of life or increased liberty and autonomy or decreased social regimentation and control. The irrationality of 1st estate group think and regulatory mania simply outmatches the benefits of any LLM superpower.


Imagine a quiver of robots that tend a vegetable garden or a farm. Each of these robots has specialized tools and means of moving about the garden. One might fly like a drone; another might roll like a tank; another might crawl like a spider. One might pull weeds; another might remove or zap pests; another might water plants or apply fertilizer.

Pulling weeds in a vegetable garden is an extremely difficult task for a robot. Using herbicides and pesticides is much easier, but think about the environmental benefits of using robots: no more harmful chemicals. Robots can be taught which plants are weeds and which are food; which bugs are pests and which are ladybugs and honeybees. Each of these robots would need a camera, ideally multi- or hyperspectral, to help identify weeds and pests. An LLM is essential for this application because every vegetable garden and farm is different: variations include shape, topography, obstacles, weed type, food-plant type (grapes vs. carrots vs. apple trees), and pest type.

And look at the upsides for the farmer and farm workers: more time to read and learn, less time doing mindless work. Politics should improve. Poverty will be reduced. Life expectancy will increase.

Likewise, mowing the lawn with gas-powered mowers pushed by humans will become a thing of the past. Each homeowner will program his or her electric mower with an LLM. This means mom and dad spending more time with the kids, less inhalation of combustion exhaust fumes, and more time reading and learning. It also means quieter neighborhoods.

Likewise imagine the benefits of LLM robots to clean house, cook dinner, fold laundry, get groceries.

Classrooms will change too. Imagine a classroom of 20 students, one teacher and a number of robot teaching aids. One robot will lead a group of 4 students in a Socratic dialogue about minimum wage. Each student will have ample time to participate in the dialogue receiving answers to his or her own specific questions, gaining practice speaking, questioning and being respectful.

Makes me think I should get to work building robots.

It’s not all positive though. Might robots be used to sabotage, murder, indoctrinate, harass, injure, steal, and humiliate? Yes, but overall things will get better.


"What if a student uses an AI to write an essay? ...if the student... reads the essay and also reads the feedback it receives from an AI grader, the student can learn something."

What if a student cheats and plagiarizes? While in graduate school, I had a student in a 200-level course who plagiarized. It was obvious from the content, so I failed him on the term paper. Now, I'm guessing that AI can detect plagiarizing in seconds and even identify the source material. That will make plagiarizing almost impossible!
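As a toy illustration of how such detection can work under the hood, here is an n-gram overlap check in plain Python. Real detectors normalize text far more aggressively and search indexed corpora of sources; the function names here are mine, not any particular tool's:

```python
def word_ngrams(text, n=5):
    """Set of word n-grams, lowercased (a crude normalization)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(essay, source, n=5):
    """Fraction of the essay's n-grams that also occur in the source:
    1.0 means verbatim copying at this granularity, 0.0 means none."""
    essay_grams = word_ngrams(essay, n)
    if not essay_grams:
        return 0.0
    return len(essay_grams & word_ngrams(source, n)) / len(essay_grams)
```

A score near 1.0 against any known source would flag the essay for human review.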


A danger of AI: the creation of a new "AI Astrology" that is very believable but totally false.

Because AI is based upon neural networks like animal brains, including human brains, these systems are very powerful pattern-matching devices. The human brain, looking at the random pattern of lights in the night sky (a true 3-D pattern viewed as a 2-D slice), created a complete set of meaningless correlations with human events and human behaviors, in societies all around the world, that we now call Astrology.

As the number of possible hypotheses for pattern-matching correlation analysis is almost infinite, it is hard to separate valid relationships from nonsense in a complex neural network (the 2 sigma used by social science is clearly inappropriate, and even the 5 sigma often used by the real sciences, as with the Higgs particle, is questionable).

I had someone run an AI search for me on a very specific, highly technical issue, and it generated two references with abstracts that were spot on. I was familiar with the authors and the journals, so everything looked OK, with the abstracts proving a technical point in a legal case. Doing a lot of peer review in this area gave me access to the relevant journals, so a client asked me to get copies of the articles for the details. However, neither article existed, and I even checked the Russian Sci-Hub for the references (they have most of the real scientific literature of the world).

Getting built-in scientific and critical thinking into AI may be as hard as getting humans to think scientifically and critically. In academia, humans in the soft fields have already changed the meaning of critical thinking into a form of Astrology that ignores relevant variables, like culture, in their pattern matching.


Assign ChatGPT to proofread your Substack posts. A brilliant writer should at least get the first word right. You wrote, "The quickly understand what you want." I think you mean, "THEY quickly understand what you want."


I am, as always, skeptical of the move from LLMs grading to the real world. One advantage that grading and other in-computer tasks have is that they allow for “machine-intensive debugging”: you run the program, see the results, tweak the program, run again, etc., until all the problems are solved. That works well if your machine is fast and cheap, but not so well if it is slow or expensive. You no doubt remember clearly the pain of submitting a punch-card program with errors and not finding out until the next day, let alone being able to fix it.

When it comes to expense, that will be an interesting problem too: if the process you are iteratively trying to get the machine to understand costs a grand in materials every time you try, and it takes a lot of tries, that’s a problem. The key there will be whether you can figure out the right prompt and then set it and forget it in the factory, because every situation is similar enough not to require tweaks, or whether situations are different enough that unique tweaks to code will remain, and remain expensive.

I agree, however, that LLMs are a big step forward in getting the machine to understand generally what you want, and that they significantly lower the investment in person-machine translation it takes to use the machine.


Handle

Basic, fundamental premises are not compatible.

Especially ontological ideas. For example...

Are humans chemical robots (Descartes), smarter animals (Darwin), or image of God (Moses)?

No real way to combine.

Thanks

Clay


Demand for immigrant labor will fall, even in places experiencing "fertility crises".
