I agree these posts are quite interesting (thanks!), but there's too much moving too fast for me to call them "outstanding". Many of the most insightful bits are likely to be out of date in a few months, or weeks, or maybe days or hours; some may already be out of date.
The Gwern note, on why Sydney was interesting (but now is less so), argues that MS pushed it out too quickly using GPT-4, which OpenAI had not yet fully released, yet which is more extensive than the GPT-3 that ChatGPT was based on (also covered by Zvi).
Training, tuning, fine-tuning, and RLHF remain more terms than clear processes for me. Tho one difference with larger models is noted by Zvi:
>>" the output of larger models more often ‘ express greater desire to pursue concerning goals like resource acquisition and goal preservation.’ That is very different from actually pursuing such goals, or wanting anything at all." <<
It's NOT that different.
An output expressing desire to pursue a goal is as close as possible to "actually pursuing such goals", if output expression is all that can be done.
Whatever humans can type, in documents or comments, can be simulated by a trained LLM, whether expressions of actual desires or merely simulated expressions of desires. Lots of folks didn't like Sydney being turned off, leaving them stuck with Bing.
Replika is another chatbot, one designed more to simulate feelings; I'm getting a Tamagotchi vibe from it, and it too has recently been restrained. Zvi, like Scott Alexander, is too thorough (Too Long; Didn't Read) too often for me, which is why I so much prefer Arnold.
What is the difference between 68 billion and 175 billion parameters? (I'm still missing some basic terminology while learning about tokens and dimensions at cohere.ai.)
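As best I understand it, the parameter count mostly falls out of the model's shape: roughly 12 × (embedding dimension)² weights per transformer layer, times the number of layers, plus an embedding table of vocabulary size × embedding dimension. Here's a minimal Python sketch of that arithmetic; the 96-layer, 12288-dimension shape is GPT-3's reported config, while the smaller config is a made-up example in the ~65B range, not any real model's spec:

```python
# Back-of-the-envelope transformer parameter count.
# Configs below are illustrative: GPT-3's paper reports 96 layers at
# d_model=12288; the smaller shape is a hypothetical ~65B comparison.

def transformer_params(n_layers, d_model, vocab_size):
    """Approximate count: each layer has ~12 * d_model^2 weights
    (4*d_model^2 for attention Q/K/V/output, 8*d_model^2 for the MLP),
    plus the token-embedding matrix of vocab_size * d_model."""
    per_layer = 12 * d_model ** 2
    embeddings = vocab_size * d_model
    return n_layers * per_layer + embeddings

print(f"{transformer_params(96, 12288, 50257):.3e}")  # ~1.746e+11, i.e. ~175B
print(f"{transformer_params(80, 8192, 50257):.3e}")   # ~6.48e+10, i.e. ~65B
```

So a bigger parameter count mostly means wider layers (a larger embedding dimension) and/or more of them, while "tokens" are just the vocabulary units the embedding table maps into that space.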
A shorter Zvi post on Bullshit Jobs (https://thezvi.substack.com/p/escape-velocity-from-bullshit-jobs) is quite good: will AI help get rid of BS jobs or merely change them? I think it will get rid of many of them.
"many of the things we thought AI would be bad at for the foreseeable future ... "learning" and improving by being told to look online for examples"
Who is "we" ? That seems like an easy way to improve results by brute force (just look more text online), and the kind of improvement that to be expected by just throwing more resources into the problem. Is this a straw man argument?
Ethan M. seems not to have written anything :) A typo, perhaps?
I moved it up to earlier in the post