10 Comments

I found this introduction to RLHF easy for my non-technical brain to understand. That may mean it keeps its black-box status because it doesn't delve into the little functional elements of an RLHF system. Interesting nonetheless.

https://www.surgehq.ai/blog/introduction-to-reinforcement-learning-with-human-feedback-rlhf-series-part-1

author

Good pointer. I like the blog: https://www.surgehq.ai/blog

From where did the transcript come? I see timestamps but not speaker changes, which is something I like about otter.ai (though the free version only allows 30 minutes max).

While there were questions about RLHF, which is somewhat intuitively understandable, I had unasked questions about what a Transformer is. Here's a good note on that (from the bdtechtalks.com series tipped by Kevin Dick)

https://bdtechtalks.com/2022/05/02/what-is-the-transformer/

Arnold - it might be good to create an AI/chatbot sub-blog to collect your thoughts and references on this.

Kevin (thanks for the comments!) - is there any good current overview guide to the key ideas and where to go for the next level? Of course, asking ChatGPT is an option, too.

author

The transcript came from the ChatGPT extension for Chrome that does transcripts.

This is a three-part series, written for non-experts, on how they trained OpenAI's ChatGPT to be woke:

https://cactus.substack.com/p/openais-woke-catechism-part-1

"OpenAI's Woke Catechism (Part 1)

How a Few Activists Made ChatGPT Deny Basic Science"

https://cactus.substack.com/p/why-its-easy-to-brainwash-chatgpt

"Why it’s easy to Brainwash ChatGPT (AI series, Part 2)

Correcting Intuitions about Machine “Learning”"

https://cactus.substack.com/p/the-new-hippocratic-oath

"The New Hippocratic Oath (AI Series Part 3)

AI Must Serve The User"

The author is concerned about what he calls AI Pluralism: ensuring that not all AIs are woke. I'd suggest it's a huge concern that Microsoft will be embedding OpenAI's tools in all their Office products, and it's likely Google will follow suit. Between the two of them, their tools are used to generate the vast majority of written content. Imagine a woke AI steering all that content. Then, of course, they'll also be embedding these woke AIs in the tools people use to search the net, and now in the chatbots they ask questions of to learn things.

There hasn't been much research on the topic yet, but an early paper is just out:

https://arxiv.org/abs/2302.00560

" Co-Writing with Opinionated Language Models Affects Users' Views

...Using the opinionated language model affected the opinions expressed in participants' writing and shifted their opinions in the subsequent attitude survey. "

Many people didn't notice the AI was leading them. If people object too much, it's likely the leading will just be made more subtle. Yes, in theory people could turn to other AIs, but that's like the theory that people who object to censorship could abandon the major social media platforms for ones that don't censor. Most people won't bother, or can't, since their employer dictates the tools they use. And even if people notice it on certain topics, most people aren't experts on everything; rational ignorance means they won't bother searching for alternative views on a topic they aren't concerned about, one they merely searched in passing or referred to tangentially in something they wrote, so they never check further.

Another article on RLHF.

https://bdtechtalks.com/2023/01/16/what-is-rlhf/

RLHF is a subset of "fine tuning".

So your LLM is initially trained on a very large corpus of text that gives it basic language capabilities.

Fine tuning then takes this model and applies a "supervised" machine learning step. This just means you have a set of inputs and outputs where the outputs have been graded in some fashion. With RLHF, they've been graded by humans.

You then run the LLM through the grading system and have it adjust the weights in its model so that it produces output that receives a higher grade.
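
To make that concrete, here is a toy, self-contained sketch of the "grade, then adjust" loop. It is entirely my own invention, not OpenAI's pipeline: the candidate responses, the hand-made feature, and the learning rates are all made up, the "reward model" is a single weight fit to human pairwise preferences, and the "LLM" is just a softmax over three canned responses.

```python
# Toy sketch of RLHF's grade-then-adjust loop (hypothetical; all names and
# numbers are invented for illustration).
import math

# Candidate responses the "model" can produce for one prompt, with a crude
# hand-made numeric feature for each (standing in for learned features).
candidates = ["Sure, here is a careful answer.", "No.", "Here is a rough guess."]
features = [1.0, -1.0, 0.2]

# --- Step 1: fit a reward model to human grades ------------------------------
# Humans grade pairs: (index of preferred response, index of rejected response).
human_preferences = [(0, 1), (0, 2), (2, 1)]

w = 0.0  # one-weight reward model: reward(i) = w * features[i]
for _ in range(200):
    for better, worse in human_preferences:
        # Bradley-Terry-style logistic update: push the preferred response's
        # reward above the rejected one's.
        diff = features[better] - features[worse]
        margin = w * diff
        w += 0.1 * (1.0 - 1.0 / (1.0 + math.exp(-margin))) * diff

reward = [w * f for f in features]

# --- Step 2: adjust the "LLM" toward higher-graded output --------------------
logits = [0.0, 0.0, 0.0]  # the "LLM": a softmax over the three candidates
for _ in range(100):
    total = sum(math.exp(l) for l in logits)
    probs = [math.exp(l) / total for l in logits]
    baseline = sum(p * r for p, r in zip(probs, reward))
    # Gradient ascent on expected reward: raise the logits of responses the
    # reward model grades above average.
    logits = [l + 0.1 * p * (r - baseline) for l, p, r in zip(logits, probs, reward)]

total = sum(math.exp(l) for l in logits)
for text, l in zip(candidates, logits):
    print(f"{math.exp(l) / total:.2f}  {text}")
```

The real thing replaces the hand-made feature with a neural reward model, the three-way softmax with the full LLM, and this bare gradient step with a reinforcement-learning algorithm (typically PPO), but the grade-then-adjust shape is the same.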

From what I can tell, there's some black magic here in terms of "freezing" layers. LLMs have lots of layers, and lower layers tend to learn more fundamental elements. So you can consume fewer resources, and apparently get better output in some cases, by only having some number of upper layers adjust in response to fine tuning.
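
The freezing mechanics themselves are simple, even if choosing what to freeze is the black-magic part. Here's a minimal PyTorch sketch, with plain linear layers standing in for transformer blocks (my illustration, not any lab's actual recipe):

```python
import torch
import torch.nn as nn

# Toy "LLM": a stack of 12 layers standing in for transformer blocks.
model = nn.Sequential(*[nn.Linear(64, 64) for _ in range(12)])

# Freeze the lower 10 layers; only the top 2 will keep learning.
for layer in model[:-2]:
    for param in layer.parameters():
        param.requires_grad = False

# Hand the optimizer only the still-trainable (upper) parameters.
optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=1e-5
)

# One fine-tuning step: only the unfrozen upper layers receive gradients and
# get updated; the frozen lower layers keep their pretrained weights.
x = torch.randn(8, 64)
loss = model(x).pow(2).mean()  # stand-in for a real fine-tuning loss
loss.backward()
optimizer.step()
```

Fewer trainable parameters means less compute and optimizer memory per step, which is where the resource saving comes from.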

Note: I have worked in AI at several points in my career, but I'm now pretty old, a bit out of date, and have only skimmed the LLM/transformer literature. So this is just my abstracted interpretation of what's going on.

I've seen people post many use cases for ChatGPT in their fields, but I find myself unable to find any in my own field.

1) It can't retrieve basic data from my industry off the internet.

2) It can't do any kind of mathematical analysis.

3) It can't answer basic sales questions a broker would answer.

4) It would probably be worse than already existing chatbots in customer service for my industry.

I've so far failed to come up with anything it could do to help my industry. And even if I could improve its performance somewhat, 90% of people want to buy my product through a broker who does little more than read information off plan finder, but they still want to deal with a human in person. I don't think ChatGPT will change that.

I think the medical industry (and medical insurance) is going to be a hard nut to crack because there are so many laws that provide very harsh penalties for providing incorrect information to people. Human beings have the common sense to avoid these errors, but ChatGPT blunders into them.

It seems like it's better suited for output with lower accuracy needs and less severity per inaccuracy.

It is a huge obstacle that a corporate system faces the liability risk for being wrong as well as the social risk for telling the truth. The solution is to make sure the system made available to the public is utterly bland.

Some say fire.

Some say ice.

It's the silicon, stupid.

In the context of programming, code generation templates have long existed, and on the Web there are countless examples to be copied. The challenge in automatic code generation is creating code for custom requirements, and the difficulty there is the creator understanding those requirements and defining them accurately.

The "intelligence" of AI would be shown in a tool that would accept a request and then prompt for clarity, or point out conflicts in the specification or suggest improvements. I'd like to see a competition between human coders and a non-coder instructing AI to write programs. Where would the differences in outcome be manifest?

Already we have reached a point where most of the code executed in a program is "common" - it is obtained from publicly available libraries. The programmer writes a few thousand lines of code to customize how hundreds of thousands of existing lines of code are executed.

In this realm, an AI system should prove to be the superior programmer - it can understand and make use of all existing code libraries and APIs better than a person can. So the challenge will be in instructing it what to code. But what if we have the AI system come up with its own ideas of what to code? This is where things get surreal, and dangerous.
