Bing/ChatGPT links, 3/9
Ethan Mollick on how to use Bing; and on how to inspire AI creativity; Cleo Nardo on the Tyrone effect; and Ethan Mollick on productivity gains
TL;DR You should read everything
writes. But now let's ask Bing to use the internet: look up the writing styles of Ruth Reichl and Anthony Bourdain. Use what you have learned to improve the paragraph. Several fascinating things happen as a result. First, you can see Bing performs a web search (the check-mark at the top). Next, you will see it uses this search to provide annotations and sources, which are clickable. They don't always go to exactly the correct source, but they usually do. Finally, you will notice the writing has changed a lot. The answer is more sophisticated and the text is actually interesting to read.
As usual, I recommend his entire post.
Also, another Ethan Mollick post.
I would encourage everyone to play and experiment, and share the results of what they learn. This is an exciting time where everyone is an explorer, and the opportunities to find fascinating behaviors and useful applications are vast.
In this post, he makes great use of what I call the Simulation Path.
The output of the LLM is initially a superposition of simulations…
(1) GPT-4 is trained to be a good model of internet text, and (2) on the internet, incorrect answers will also often follow questions. Recall that the internet doesn't just contain truths, it also contains common misconceptions, outdated information, lies, fiction, myths, jokes, memes, random strings, undeciphered logs, etc., etc.
Several people have noticed the following bizarre phenomenon:
The Waluigi Effect: After you train an LLM to satisfy a desirable property P, then it's easier to elicit the chatbot into satisfying the exact opposite of property P.
you must think of jailbreaking like this: the chatbot starts as a superposition of both the well-behaved simulacrum (luigi) and the badly-behaved simulacrum (waluigi). The user must interact with the chatbot in the way that badly-behaved simulacra are typically interacted with in fiction.
Pointer from Tyler Cowen, who in the past has been known to attribute posts to his evil twin, Tyrone. If you were to talk to the LLM the way that Tyler refers to Tyrone (“Such fallacies! Such absurdities!” “such sophistries and indignities”), the LLM would respond in Tyrone fashion.
AI can increase productivity for workers in fields where automation and economies of scale were previously very rare. These jobs often require more autonomy and encompass multiple types of tasks (teachers need to prep lessons, grade, write letters of recommendation, run classes, respond to parents, run after-school programs, do administrative work, etc.). With the power to outsource the most annoying and time-consuming parts of their jobs, workers in these industries are highly incentivized to adopt AI quickly, either to do less work or to be able to bill out more work themselves. It is a recipe for rapid adoption at the individual level.
He is not just blabbing. He cites research papers. Hastily assembled research, of course, but the productivity gains for individuals are very high.
I myself use ChatGPT several times a week. I can pick up something to read that is outside of my usual knowledge area, and I can plow through it, using ChatGPT to explain concepts with which I am not familiar. This increases my intellectual range.
Mollick makes the interesting point that we do not yet have any organizational capital supporting the use of ChatGPT. It is just individuals adopting it and taking advantage.
every worker should be spending time figuring out how to use these general-purpose tools to their advantage. They should be thinking about how to automate their job to remove the tedious and uncreative parts, and getting a sense for the disruption to come before the organizations they work for realize the full implications of AI. They may also want to consider what to do with the extra time they may be creating as a result of their experiments.
We are in the early days of AI but disruption is already happening. There’s no instruction manual. No one has answers yet. The key is to learn fast.
+1
Here is a use I have not seen discussed much. I am using ChatGPT to explain Spanish grammar rules and give examples for concepts I am studying. Its results surpass most grammar texts.
Next, I will take my Spanish writing practice essays and ask it for corrections. Stay tuned.
I already used it to produce an essay I was assigned for homework, but my professor immediately knew I did not write it. Of course, I confessed in advance.
This all supplements the work I am doing with a twice-a-week online instructor based in Bogota using Preply, which I highly recommend.
I think chatbots will soon take over lots of educational tasks.
Just a couple of days ago I read that great LessWrong post about the Waluigi effect, and tried to explain it to my wife - not too successfully. The focus on what NOT to do makes doing it more likely - not unlike God telling Adam to NOT eat the apple. I also thought of Tyler and Tyrone!
I've also been spending time reading more Ethan Mollick, plus more tutorials on LLMs and AI.
Noah Smith's interview with futurist optimist Kevin Kelly notes that there will be competing AI chatbots, and none are at all close to AGI, despite creative hallucinations.
(I'm not convinced a good simulation of AGI, & consciousness, is so different from real AGI)
Everything humans now do "on computers" will, in the near future, be doable by bots. The "elite overproduction" problem is about to get much, much worse. More college folks need to be more oriented towards owning their own businesses.
The rapid productivity increases will allow bullshit jobs to be eliminated faster than the bullshitters can replace them.
On Gab, the Christian Nationalist parallel economy is moving slowly (/rapidly?) towards more AI, with AI art: https://gab.com/AI
I'm very interested in how effective they'll be. I think the real-life Christian preacher hypocrites are examples of the human Waluigi effect, like the 2020 sex scandal of Jerry Falwell Jr., but also the 40+ women Martin Luther King committed adultery with (that the FBI has tapes of).
Too much focus against the abyss - and the abyss becomes part of you.
Part of this increases my fear of potential AGI, but also increases the desire to get more functional dumbsmartten AI bots to help with each task one desires to do.
(Yet not quite in writing up a comment).