My experience with creating an app to grade and provide constructive feedback on op-ed essays has been eye-opening. Before there were large language models, it would have taken me months of working all day to train a computer to do anything close to what I taught ChatGPT to do in a few hours. The ability of the LLM to understand and be understood without the user having to write computer code is a superpower.
Because you can use ordinary language with ChatGPT, and because you can correct your instructions using ordinary language, you can create all sorts of apps without having to write and debug computer code. You just tell it what you want, and then go back and forth with it until it gets the app working the way you would like. It is like having a human software developer you can talk to who turns your instructions into code almost instantly.
Imagine what will happen when these AIs are connected to physical tools. These physical tools might be custom robots or all-purpose robots. You could be a farmer who never has to go into the fields—just explain to the robots what you want. Or you could be a chef who gives directions without being in the kitchen. Or a scientist who conducts experiments without having to spend all day in the lab.
Instead of pure self-driving cars, we could have remote-driven cars. A few humans could sit in an office and direct a fleet of cars, using ordinary language. Self-driving cars today often encounter situations that leave them unsure what to do. With the remote-drive model, they could alert humans in an office, and the humans could suggest a remedy. “Oh, it looks like there is a tree down in the next block. Make a right turn here.”
One of the most unpleasant tasks of teaching is grading students’ work. It seems clear to me that ChatGPT is capable of doing this, and doing it at scale, providing students with more and better feedback than any teacher has time to give.
What if a student uses an AI to write an essay? I would say that if the student at least reads the essay, and also reads the feedback that the essay receives from an AI grader, the student can learn something.
The ability to talk back and forth with humans gives large language models a superpower. As this superpower gets used, we should see some pretty spectacular changes.
What was hard to define, and thus hard to automate, and thus a comparative advantage for humans, was precisely the capacity to understand and work with humans. This is the last nail in the coffin of careers built on that skill.
"Well then I just have to ask why can't the customers take them directly to the software people?" - "Well, I'll tell you why, because, engineers are not good at dealing with customers." ... "What would you say you do here?" - "Well--well look. I already told you: I deal with the god damn customers so the engineers don't have to. I have people skills; I am good at dealing with people. Can't you understand that? What the hell is wrong with you people?"
I wonder how long it will take for a teenager to make a movie with the production value of Avatar, or a comparable big-budget film, using just a home computer and some AI. I'm thinking of those graphs showing the declining cost of some consumer good, like computers or VHS players in the '90s. In time we may look back at how it took $240 million to make Avatar in 2009, while by 2029 or 2039 that kind of production could be available to most people with a computer.
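To make that extrapolation concrete, here is a toy calculation. The halving interval is purely an assumption for illustration, not a forecast: it supposes production costs fall by half every two years, roughly the way computing costs once did.

```python
# Toy extrapolation: what a $240 million production would cost after N years,
# under the (hypothetical) assumption that costs halve every two years.
def projected_cost(base_cost: float, base_year: int, target_year: int,
                   halving_years: float = 2.0) -> float:
    """Return the projected cost, halving every `halving_years` years."""
    halvings = (target_year - base_year) / halving_years
    return base_cost / (2 ** halvings)

avatar_2009 = 240_000_000
print(f"2029: ${projected_cost(avatar_2009, 2009, 2029):,.0f}")  # ≈ $234,375
print(f"2039: ${projected_cost(avatar_2009, 2009, 2039):,.0f}")  # ≈ $7,324
```

Under that assumed curve, twenty years of halving takes the cost from a studio budget to the price of a house, and thirty years takes it to the price of a laptop — which is the shape of the prediction, whatever the actual rate turns out to be.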