People Who Should Learn to use ChatGPT
You are probably one of them; ChatGPT in 2023 is poised the way the World Wide Web was poised in 1993
My views on ChatGPT are evolving. Here are my current thoughts.
As with Excel, ChatGPT can have a variety of uses, but taking advantage of it may require some upfront learning. With optimal prompting, ChatGPT can perform some tasks surprisingly well. But with sub-optimal prompting, the results can be useless.
Also, some Excel users are prone to sinking time into trying to get it to perform tasks that are more appropriately handled by other software tools. As with Excel, what ChatGPT should and should not be used for will not be immediately obvious to a novice.
This essay is not meant to be anything like an introduction to ChatGPT. In a sense, it is too early to do that. Most of the potential uses of the software have not been imagined, much less tried and tested.
I am just trying to encourage you to become an early adopter. If you are one of the first people in your organization, or one of the first people in your profession, to learn how to form prompts well and how to incorporate ChatGPT’s output into your work, then you will probably enjoy a productivity advantage for several years. If I am correct, then the laggards will lose out to the early adopters. You are better off eating than getting eaten.
What will reduce the advantage of early adopters will be the emergence of a training infrastructure around ChatGPT. That suggests that one can earn a nice living as part of that infrastructure.
Be wary of people who anthropomorphize ChatGPT. It is not a high-skilled immigrant coming for your job or a child prodigy soon to become the ultimate genius. It is software, with intrinsic limitations.
the fundamental idea of a generative-language-based AI system like ChatGPT just isn’t a good fit in situations where there are structured computational things to do. Put another way, it’d take “fixing” an almost infinite number of “bugs” to patch up what even an almost-infinitesimal corner of Wolfram|Alpha can achieve in its structured way.
Thanks to a reader for pointing me to Wolfram’s essay.
between the human-like nature of chat and the fact that written material is harder to immediately evaluate, many people tend to avoid explicit prompt construction in ChatGPT. But that is a mistake! More elaborate and specific prompts work better.
When you search Google, you may have to formulate your query carefully to get it to home in on what you want. ChatGPT is even more tricky to use.
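The gap between a vague query and an explicit prompt can be made concrete. The sketch below uses a hypothetical helper, `build_prompt`, to assemble a prompt from labeled parts (role, task, constraints, output format) — a common prompting pattern, offered here as an illustration, not as any official ChatGPT interface.

```python
# Illustrative only: contrast a vague prompt with an explicit one.
# build_prompt is a hypothetical helper, not part of any ChatGPT API.

def build_prompt(task, role=None, constraints=None, output_format=None):
    """Assemble an explicit prompt from labeled parts."""
    parts = []
    if role:
        parts.append(f"You are {role}.")
    parts.append(task)
    if constraints:
        parts.append("Constraints: " + "; ".join(constraints))
    if output_format:
        parts.append(f"Respond as {output_format}.")
    return " ".join(parts)

# A vague prompt leaves the model to guess at length, audience, and form.
vague = "summarize this article"

# An explicit prompt pins all of that down.
explicit = build_prompt(
    task="Summarize the article below in three bullet points.",
    role="an experienced newspaper editor",
    constraints=["plain language", "no more than 20 words per bullet"],
    output_format="a bulleted list",
)
print(explicit)
```

The point is not the helper function itself but the habit it encodes: stating the role, the task, the constraints, and the desired format explicitly, rather than hoping the model infers them.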
Much like students learn Excel and calculators - and even how to use more advanced formulas and engineering calculators - how do we outfit students to make sure AI is working for them (and not the other way around)? Also, related to the next forecast, how do we equip the next generation to know what they can trust and how to evolve their own mental judgment as AI spits out answers?
He also writes,
I hope this new tech evolves education to be more about learning how to think. How to find answers. How to connect dots. How to express yourself creatively (and stand out, merchandise your ideas, galvanize support for unpopular views).
I asked ChatGPT to tell me how to play “Can’t Buy Me Love” on the guitar. It responded with one of its “hallucinations” (an anthropomorphism that people have come up with to describe ChatGPT’s incorrect outputs, which the user has to spot for himself). It gave me an alignment of chords with lyrics that was clearly wrong.
YouTube is clearly better. ChatGPT ended up pointing me to a YouTube video, but that was another hallucination: when I clicked on the link, YouTube said that the video had been taken down. To the extent that most children want to learn “how to do” rather than investigate ideas, my guess is that if you want to turn those kids loose to learn on their own, you are better off pointing them to YouTube than to ChatGPT.
A machine-learning algorithm can get better at doing things when it is given more data and/or more feedback, but that learning process does not necessarily progress the way a human’s does. If ChatGPT can’t give a correct answer to a calculus problem, then feeding it more data is not going to help it “learn” calculus.
given [ChatGPT’s] dramatic—and unexpected—success, one might think that if one could just go on and “train a big enough network” one would be able to do absolutely anything with it. But it won’t work that way. Fundamental facts about computation—and notably the concept of computational irreducibility—make it clear it ultimately can’t.
Even with its limitations, ChatGPT and its relatives probably will have many valid use cases. But I think that it is premature to talk about jobs that will be replaced by ChatGPT, just as it was premature a few years ago to talk about jobs that will be replaced by self-driving cars.
The problem with self-driving cars is that the algorithms sometimes make mistakes that no decent human driver would make, so a human needs to be able to take over. ChatGPT is in that state currently.
That said, I think that the use case for self-driving cars is much better right now than most people are willing to acknowledge. There are many reckless drivers who make mistakes that no self-driving car would make, and those reckless drivers cause most of the bad collisions. I would love to see a feature that allowed a self-driving car to take control away from such humans—the guy who drives drunk or the guy who goes 50 percent over the speed limit, for example. I’m not sure if we can agree on what the equivalent is of a reckless driver where we would want ChatGPT to take over for humans.
Substacks referenced in this essay: