Recent Links Related to ChatGPT, 2/16
Ethan Mollick on imaginative uses; Mollick on simulations; Infovores and Dr. Hammer on mentorship; Anton Korinek provides a useful background paper; Stephen Wolfram on the simplicity of language.
these new tools, which are trained on vast swathes of humanity’s cultural heritage, can often best be wielded by people who have a knowledge of that heritage. To get the AI to do unique things, you need to understand parts of culture more deeply than everyone else using the same AI systems. So now, in many ways, humanities majors can produce some of the most interesting “code.”
People want to use ChatGPT to look up information. It might be good for that. But I think that its role as a simulation tool is at least as important. Many of the examples that Ethan gives fall within the category that I call simulations.
Remember the Keynes-Hayek rap video? It is a simulation, created “by hand” from a script, using actors. The process took many months. Today, one could make a version quickly using AI. It would not be as good, of course. But getting most of the tedious work done by an AI could still enable humans to produce a good version much faster.
In a different post, Mollick writes,
With just a photograph and 60 seconds of audio, you can now create a deepfake of yourself in just a matter of minutes by combining a few cheap AI tools. I've tried it myself, and the results are mind-blowing, even if they're not completely convincing. Just a few months ago, this was impossible. Now, it's a reality.
As you know, I have been emphasizing simulation as a key use and abuse case of the new AI.
My ideal AI mentorship future would fill gaps in the status quo by providing helpful instruction and encouragement for those who would otherwise receive very little of it. In this model GPT is like Direct Instruction on steroids, achieving remarkable results by leveraging simple scripts that scale while continuously improving to fit your unique learning style with the mountains of user feedback it collects.
I asked ChatGPT to describe the teacher’s role in Direct Instruction. It responded, in part,
In Direct Instruction, the teacher plays a highly structured and directive role, acting as a facilitator of learning rather than a facilitator of discovery. The teacher is responsible for presenting the material in a clear, step-by-step manner, and for ensuring that students understand and can apply the concepts being taught.
You can find more about DI using ChatGPT or Google. There is evidence that it works. It feels somewhat dehumanizing for teachers. ChatGPT would be very good at it, because it involves a form of teaching that is highly scripted, and ChatGPT would not object to being dehumanized.
Open ended mentoring generally does not have a known end state, and the ranges of end states and possible methods of getting there are so broad that there is no possible way that there exists a corpus of knowledge to refer to.
I agree that teaching a particular skill is a different form of mentorship than helping one to decide how to live one’s life. I tend to think that the latter is more a matter of showing by example than giving instruction.
Over the past decade, the training compute of top-end deep learning models has doubled on average every six months, implying a thousand-fold increase every five years (Sevilla et al., 2022). This trend is also behind the rapid rise in the capabilities of LLMs and other foundation models in recent years. …most of the useful capabilities for researchers that we document below have emerged only in recent years.
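To spell out the arithmetic behind that quoted figure: five years contains ten six-month doubling periods, and 2^10 ≈ 1,024, or roughly a thousand-fold. A minimal sketch (my illustration, not from Korinek's paper):

```python
# Rough arithmetic behind the quoted trend (illustration only):
# compute doubles every 6 months, so over 5 years there are 10 doublings.
years = 5
doubling_period_years = 0.5
doublings = years / doubling_period_years   # 10 doublings
growth_factor = 2 ** doublings              # 2**10 = 1024
print(f"{doublings:.0f} doublings -> roughly {growth_factor:.0f}x growth")  # ~1,000-fold
```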
He points out that one can ask ChatGPT to provide counter-arguments to a point that you are making. Note that one way to identify a MidWit (see my comment) is to notice counter-arguments that the person has missed.
In fact, it is plausible to me that one could adapt ChatGPT to do scoring for Fantasy Intellectual Teams. It could find: Devil’s Advocate questions asked by podcasters; instances of someone thinking in Bets; instances of people stating Caveats; examples of fair Debate; examples of people displaying an Open mind by articulating reasons for (possibly) changing their minds; examples of people evaluating Research, not just citing supporting papers; and examples of Steel-manning.
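As a rough illustration of what such scoring might look like, here is a minimal sketch that asks a model to label a transcript excerpt against those categories. The prompt wording, the `openai` 0.x SDK call, and the model name are my assumptions; this is a sketch under those assumptions, not a working scoring system.

```python
# Hypothetical sketch of FIT-style scoring with an LLM.
# Assumes the openai 0.x SDK (with OPENAI_API_KEY set) and the "gpt-3.5-turbo" model;
# the prompt wording is illustrative, not tested.
import openai

CATEGORIES = [
    "Devil's Advocate question", "Thinking in Bets", "Caveat",
    "Fair Debate", "Open mind (reasons for changing one's mind)",
    "Evaluating Research", "Steel-manning",
]

def score_excerpt(excerpt: str) -> str:
    """Ask the model which FIT categories, if any, the excerpt exhibits."""
    prompt = (
        "You score podcast transcripts for intellectual virtues. "
        f"Categories: {', '.join(CATEGORIES)}. "
        "List the categories this excerpt exhibits, or 'none', "
        "with one sentence of justification.\n\n"
        f"Excerpt:\n{excerpt}"
    )
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep the labeling as deterministic as possible
    )
    return response["choices"][0]["message"]["content"]

# Example use:
# print(score_excerpt("Let me steel-man the other side's case before I respond..."))
```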
He points out that ChatGPT is good at summarizing. As Dennis Pearce pointed out in our Zoom discussion a couple weeks ago, ChatGPT can become an intermediary between a writer and a reader, the way that search engines intermediate between users and web sites.
Korinek notes that ChatGPT can not only write code but also read code and explain in English what it does.
Interesting throughout. Pointer from Tyler Cowen.
Stephen Wolfram writes,
So how is it, then, that something like ChatGPT can get as far as it does with language? The basic answer, I think, is that language is at a fundamental level somehow simpler than it seems. And this means that ChatGPT—even with its ultimately straightforward neural net structure—is successfully able to “capture the essence” of human language and the thinking behind it. And moreover, in its training, ChatGPT has somehow “implicitly discovered” whatever regularities in language (and thinking) make this possible.
Another pointer from Tyler.
Re: "ChatGPT—even with its ultimately straightforward neural net structure—is successfully able to “capture the essence” of human language and the thinking behind it."—Stephen Wolfram
Compare Michael Huemer:
https://fakenous.substack.com/p/are-we-on-the-verge-of-true-ai
Here are excerpts, re: ChatGPT and language:
"[ChatGPT] does not have program elements that are designed to represent the meanings of any of the words. [...]
What it shows is that (perhaps surprisingly), it is possible to produce text similar to that of a person with understanding, using a completely different method from the person. The person would rely on their knowledge of the subject matter that the words are about; ChatGPT does a huge mathematical calculation based on word statistics in a huge database. [...]
[...] the Turing Test is a bad test for awareness. [...]
[...] every time a human tester finds a question that ChatGPT answers wrongly, revealing its lack of understanding, they can modify the program to make it answer that question correctly.
Eventually, people will run out of ways of uncovering the thing’s lack of understanding, and the program will be able to fool people. But it will remain the case that what the chatbot is doing is completely different from what an actual person with understanding does."
"As you know, I have been emphasizing simulation as a key use and abuse case of the new AI."
Have you heard of the idea of Heaven Banning?
"heavenbanning, the hypothetical practice of banishing a user from a platform by causing everyone that they speak with to be replaced by AI models that constantly agree and praise them, but only from their own perspective, is entirely feasible with the current state of AI/LLMs" - https://twitter.com/nearcyan/status/1532076277947330561