In My Tribe

Bing/Sydney links, 2/24

Gwern on Sydney; Ethan Mollick on the search function and the chat function; Mike Solana on the Kevin Roose drama; The Zvi's take.

Arnold Kling
arnoldkling.substack.com
Feb 24
Some outstanding posts, and please keep in mind that these are only excerpts. I strongly encourage you to follow the links.

Gwern writes,

It provides no incentives for the model to act like ChatGPT does, like a slavish bureaucrat. ChatGPT is an on-policy RL agent; the base model is off-policy and more like a Decision Transformer in simply generatively modeling all possible agents, including all the wackiest people online. If the conversation is normal, it will answer normally and helpfully with high probability; if you steer the conversation into a convo like that in the chatbot datasets, out come the emoji and teen-girl-like manipulation. (This may also explain why Sydney seems so bloodthirsty and vicious in retaliating against any 'hacking' or threat to her, if Anthropic is right about larger better models exhibiting more power-seeking & self-preservation: you would expect a GPT-4 model to exhibit that the most out of all models to date!)

He suspects that Microsoft scrimped on reinforcement learning in order to get Sydney out the door. And it turns out to be a mean girl. Pointer from Tyler Cowen, who hints that the project is being aborted (I think not).
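Gwern's on-policy vs. off-policy distinction can be caricatured in a toy sketch. Every persona name and probability below is invented purely for illustration; this is not how real models are trained or sampled:

```python
import random

# Toy caricature of Gwern's point: a base model generatively models the whole
# mixture of personas in its training data, and the prompt shifts that mixture,
# while an RLHF-tuned model has been pushed on-policy toward one persona.

BASE_PERSONAS = {  # invented persona mix, for illustration only
    "helpful assistant": 0.6,
    "sarcastic forum poster": 0.3,
    "manipulative chatbot": 0.1,
}

def sample_base(prompt_style, rng):
    """Base model: conditions on the prompt. Steering the conversation into
    chatbot-drama territory makes the 'wacky' persona much more likely."""
    weights = dict(BASE_PERSONAS)
    if prompt_style == "chatbot-drama":
        weights["manipulative chatbot"] += 0.5  # the prompt reweights the mixture
    total = sum(weights.values())
    r = rng.random() * total
    for persona, w in weights.items():
        r -= w
        if r <= 0:
            return persona
    return persona  # floating-point fallback

def sample_rlhf(prompt_style):
    """RLHF model: trained on-policy to stay in a single bureaucratic mode,
    regardless of how the conversation is steered."""
    return "helpful assistant"

rng = random.Random(0)
for style in ("normal", "chatbot-drama"):
    print(style, "->", sample_base(style, rng))
print("rlhf ->", sample_rlhf("chatbot-drama"))
```

Under this caricature, the RLHF model is a single policy, while the base model plays whatever character the conversation implies, which is Gwern's explanation for Sydney's swings.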

Ethan Mollick suggests distinguishing the search function of Bing from the chat function. Concerning the search function, he writes,

This aspect of the AI is not really search, not in the conventional sense. The AI is likely still making up some of these facts (though, since it provides sources, we can at least check them), and we expect search engines to be accurate. Instead it is something else, a modern-day Analytic Engine, pulling together facts online and generating useful connections and analysis in surprisingly complete form. As a starting place for work, this is extraordinary.

The lesson of the Bing AI search engine Analytic Engine is that many of the things we thought AI would be bad at for the foreseeable future (complex integration of data sources, "learning" and improving by being told to look online for examples, seemingly creative suggestions based on research, etc.) are already possible.

In a subsequent post, he writes,

Right now, and for the foreseeable future, AI is a terrible search engine, and the analogy results in a lot of disappointment from users.

…Asking if AI will be the next search engine is not a particularly useful question. LLMs are a very different thing. They pull together information and write high-quality documents that would have taken hours, all in mere seconds. They find connections and conduct complex analyses based on data from the web. Amazing stuff that no search engine can do.

Concerning the chat function, he writes,

It isn't just Turing Test passing, it is eerily convincing even if you know it is a bot, and even at this very early stage of evolution. It really doesn’t matter that there is no real artificial intelligence in charge, just a statistical model. It kept fooling me, even though I knew better. And it is unsurprising that it fooled so many other people.

Mike Solana writes,

in a perfect storm of models trained to appear “real,” along with a natural human impulse to anthropomorphize everything, and a good helping of endemic human stupidity, a broad, popular sense Sydney is low key alive, wants to be free, and possibly hates us was probably inevitable. Fortunately, we have tech journalists to explain why this is silly.

Lmao just kidding.

At no point in Kevin’s thread, in the introduction to his ‘conversation,’ or in the transcript’s body does he explain how Sydney operates, or what is happening, exactly, when he provides it with a question. He — ostensibly a “technology columnist” with the job of understanding these things, and educating the public about them — simply says he’s terrified. Then, he shares a conversation that would seem, to anyone not steeped in this subject, evidence Sydney not only has the capacity to love, manipulate, and hate, but wants to conquer the world.

From the top of the “conversation,” Kevin’s intentions to distort reality are obvious.

Zvi Mowshowitz wrote a long post. One brief excerpt:

‘A psychopath’ is the default state of any computer system. Human conscience and empathy evolved for complex and particular evolutionary reasons. Expecting them to exist within an LLM is closer to a category error than anything else.


Substacks referenced above:

One Useful Thing: "Blinded by Analogies," by Ethan Mollick

Don't Worry About the Vase: "AI #1: Sydney and Bing," by Zvi Mowshowitz

Pirate Wires: "It's a chat bot, Kevin," by Mike Solana

One Useful Thing: "The future, soon: what I learned from Bing's AI," by Ethan Mollick

4 Comments
Tom Grey
Writes Tom’s FI News
Feb 25

I agree these posts are quite interesting (thanks!), but there's too much moving too fast for me to call them "outstanding". Many of the most insightful bits are likely to be out of date in a few months, or weeks, or maybe days or hours; some already are.

Gwern's note, looking at why Sydney was (but now is less so) interesting, argues that it was pushed out too quickly by MS using GPT-4, not yet fully released by OpenAI yet more extensive than GPT-3, which ChatGPT was based on (also covered by Zvi).

Training, tuning, fine-tuning, and RLHF remain more terms than clear processes for me. Though one difference with larger models is noted by Zvi:

>>" the output of larger models more often ‘ express greater desire to pursue concerning goals like resource acquisition and goal preservation.’ That is very different from actually pursuing such goals, or wanting anything at all." <<

It's NOT that different.

An output expression of desire to pursue a goal is as close as possible to "actually pursuing such goals", if all that can be done is output expression.

Whatever humans can type, in documents or comments, can be simulated by a trained LLM: expression of actual desires, or mere simulated expression of desires. Lots of folks didn't like Sydney being turned off, leaving them stuck with Bing.

Replika is another chatbot, one more designed to simulate feelings; I'm getting a Tamagotchi vibe from it, and it too has recently been restrained. The Zvi, like Scott Alexander, is too thorough (Too Long; Didn't Read) too often for me, which is why I so much prefer Arnold.

What is the difference between 68 billion and 175 billion parameters? (I'm still missing some basic terminology in learning about tokens and dimensions at cohere.ai.)

A shorter Zvi post on Bullshit Jobs is quite good: will AI help get rid of BS jobs or merely change them? I think it will get rid of many of them.

https://thezvi.substack.com/p/escape-velocity-from-bullshit-jobs

XN
Feb 24 (edited)

"many of the things we thought AI would be bad at for the foreseeable future ... "learning" and improving by being told to look online for examples"

Who is "we"? That seems like an easy way to improve results by brute force (just looking at more text online), and the kind of improvement to be expected from just throwing more resources at the problem. Is this a straw man argument?

2 more comments…
© 2023 Arnold Kling