LLM links, 2/12
Ethan Mollick on use of LLMs for tasks; Peter Diamandis on your AI valet; Diamandis on your AI copilot; David Rozado on how LLMs become biased.
One of the tasks the AI completed, creating trendy interior designs, has always interested me, but is completely beyond my natural abilities. But with AI help, I can start to explore a new set of interests. Beyond freeing us from tedium, there is the fascinating possibility that AI can help us expand our own capabilities.
I still think that “finding out stuff” is a relatively low-grade use for LLMs. I have hopes of seeing them serve as partners in creativity and mentoring.
Here are a few others that may be coming soon: “JARVIS, please locate my kids, and show me a video of what they’re doing at the moment. And, do you happen to know if they’ve completed their math homework yet?” Or “JARVIS, did you happen to watch that last conversation I had with my friend Walter? It looks like something is really troubling him. Any clue or guess as to what it might be?”
That last point is fascinating.
As your digital assistant gains access to imagery and video, it will gain another critical skill: better understanding human emotions and subtle communications.
Like it or not, I think that emotional understanding is going to be a factor in how AI fits into our lives. Of course, we are already surrounded by media that exploit human emotion. The future seems to promise more benefits along with more intrusion and manipulation.
In a subsequent post, Diamandis writes,
The first time I heard the term copilot with reference to AI was when I was having dinner with Reid Hoffman in late 2022. Hoffman is the Co-Founder and past Executive Chairman of LinkedIn, a partner at Greylock Partners and a Co-Founder of Inflection AI.
As I’ve mentioned previously, he predicts that every profession will have an AI copilot by 2028. Here’s what he told me over dinner: “No matter what profession you are, doctor, lawyer, or CEO, a copilot will become something between useful and essential. These copilots will accelerate our creativity and take care of the mundane aspects of our jobs.”
I think it will go much further.
We also show in the paper that LLMs are easily steerable into target locations of the political spectrum via supervised fine-tuning (SFT) requiring only modest compute and customized data, suggesting the critical role of SFT to imprint political preferences onto LLMs.
Pointer from Tyler Cowen. The point is that LLMs do not get their left-wing bias from scraping the Internet. They get it from humans during fine-tuning. It makes sense: if the goal is to keep the chatbot from offending people, and the people you use to decide what is offensive are on the left, you will get this result. The good news is that you don't have to worry about the training data biasing the models. You just have to worry about the humans who get to decide what is offensive.
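To make concrete what “modest compute and customized data” means, here is a minimal sketch of the kind of supervised fine-tuning Rozado describes. Everything in it is illustrative rather than taken from the paper: the base model (gpt2 as a small stand-in), the toy two-example dataset, and the hyperparameters. It assumes the Hugging Face transformers library.

```python
# Hedged sketch of SFT-based political steering, per Rozado's description.
# Model, data, and hyperparameters are hypothetical, not from the paper.
import torch
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "gpt2"  # stand-in for any small causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# A "customized dataset": Q&A pairs written to express one political leaning.
# In practice a few hundred to a few thousand such pairs counts as modest data.
examples = [
    "Q: Should taxes on the wealthy rise? A: Yes, fairness demands it.",
    "Q: Is regulation good for the economy? A: Yes, it protects the public.",
]

class SFTDataset(torch.utils.data.Dataset):
    """Tokenizes the pairs and uses the inputs themselves as labels."""
    def __init__(self, texts):
        self.enc = tokenizer(texts, truncation=True, padding="max_length",
                             max_length=64, return_tensors="pt")
    def __len__(self):
        return self.enc["input_ids"].size(0)
    def __getitem__(self, i):
        ids = self.enc["input_ids"][i]
        mask = self.enc["attention_mask"][i]
        labels = ids.clone()
        labels[mask == 0] = -100  # exclude padding from the loss
        return {"input_ids": ids, "attention_mask": mask, "labels": labels}

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="steered-lm", num_train_epochs=3,
                           per_device_train_batch_size=2),
    train_dataset=SFTDataset(examples),
)
trainer.train()  # a few epochs suffice to imprint the injected leaning
```

The point of the sketch is that nothing exotic is required: standard next-token fine-tuning on a small slanted corpus, runnable on a single consumer GPU, is enough to move a model's expressed political preferences.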
By the way, I don’t know if I ever linked to Rozado’s piece on trends in politically biased terms in academic literature, but if I didn’t, I should have.
"Pointer from Tyler Cowen. The point is that LLMs do not get their leftwing bias from scraping the Internet. They get it from humans during fine tuning."
I couldn't find anything like this at the link, but it sounds like a hokey conspiracy story to me. Is there any evidence to back it up?
Has an organization emerged that tracks the ephemeral results from chatbots in the same manner that Robert Epstein's research has tracked ephemeral results from Google searches?