GPT/LLM links
Tim Lee on how LLMs work; Anders L on AI as a matchmaking app; Martin Casado and Sarah Wang on the economics of potential applications; Robin Hanson on AIs as descendants
Timothy B. Lee and Sean Trott write,
To understand how language models work, you first need to understand how they represent words. Human beings represent English words with a sequence of letters, like C-A-T for cat. Language models use a long list of numbers called a word vector.
…For example, the words closest to cat in vector space include dog, kitten, and pet. A key advantage of representing words with vectors of real numbers (as opposed to a string of letters, like “C-A-T”) is that numbers enable operations that letters don’t.
Words are too complex to represent in only two dimensions, so language models use vector spaces with hundreds or even thousands of dimensions. The human mind can’t envision a space with that many dimensions, but computers are perfectly capable of reasoning about them and producing useful results.
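As a rough illustration of the "operations that letters don't enable" point, here is a toy sketch in Python. The four-dimensional vectors and the words are invented for the example; real models use learned vectors with hundreds or thousands of dimensions.

```python
# Toy word vectors (made-up numbers, not real embeddings): each word is a list
# of numbers, and "closeness" is measured with cosine similarity.
import math

vectors = {
    "cat":    [0.90, 0.10, 0.80, 0.20],
    "dog":    [0.85, 0.15, 0.75, 0.25],
    "kitten": [0.88, 0.12, 0.82, 0.18],
    "car":    [0.10, 0.90, 0.20, 0.70],
}

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Rank every other word by how close it sits to "cat" in vector space.
neighbors = sorted(
    (w for w in vectors if w != "cat"),
    key=lambda w: cosine_similarity(vectors["cat"], vectors[w]),
    reverse=True,
)
print(neighbors)  # ['kitten', 'dog', 'car'] - related words score highest
```

No comparable arithmetic is possible on the letter string "C-A-T"; that is the advantage Lee and Trott are pointing to.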
Anders L writes,

Most people using dating apps are there to talk to other people. And while AI has its limitations it is actually good at talking to people. If users of dating apps want someone to talk to, let them talk to the AI.
I think that this idea holds promise. An app like Pi is good at getting to know a person. I imagine that Pi could be a very good matchmaker if a lot of people were using it.
Are people happy with dating apps now? I don’t think so. If I were starting an app using an LLM, I would have the AI ask you to describe some of your friends. The AI would get into an extended conversation about that, in the process learning something about your interests and values. Then, going back to some data that the AI would have gathered on conversations with happy couples vs. random people, the AI would find correlations and suggest that people meet.
I think that the target market would be young professionals interested in making new friends and in long-term relationships. I would not use AI matchmaking to try to compete with Tinder.
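Purely as a sketch of the correlation idea: suppose each user's conversations with the AI were distilled into a profile vector, and past data supplied a reference vector for what happy couples tend to look like. Every name, feature, and number below is invented for illustration.

```python
# Toy sketch: rank candidate pairs by how closely their combined profile
# resembles a hypothetical "happy couple" reference vector learned from data.
import math

def pair_features(a, b):
    # Element-wise product, so shared interests and values reinforce each other.
    return [x * y for x, y in zip(a, b)]

def similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Hypothetical profiles: [outdoorsy, bookish, career-driven, family-oriented]
users = {
    "alice": [0.9, 0.7, 0.4, 0.8],
    "bob":   [0.8, 0.6, 0.5, 0.9],
    "carol": [0.1, 0.9, 0.9, 0.2],
}

# Hypothetical reference vector for what happy couples' shared traits look like.
happy_couple_profile = [0.7, 0.5, 0.3, 0.7]

pairs = [("alice", "bob"), ("alice", "carol"), ("bob", "carol")]
ranked = sorted(
    pairs,
    key=lambda p: similarity(pair_features(users[p[0]], users[p[1]]), happy_couple_profile),
    reverse=True,
)
print(ranked[0])  # the pair whose combined profile best matches past happy couples
```

A real system would have to learn both the profiles and the reference vector from conversation data, but the ranking step would look something like this.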
Martin Casado and Sarah Wang (a16z) write,
Many of the use cases for generative AI are not within domains that have a formal notion of correctness. In fact, the two most common use cases currently are creative generation of content (images, stories, etc.) and companionship (virtual friend, coworker, brainstorming partner, etc.). In these contexts, being correct simply means “appealing to or engaging the user.” Further, other popular use cases, like helping developers write software through code generation, tend to be iterative, wherein the user is effectively the human in the loop also providing the feedback to improve the answers generated. They can guide the model toward the answer they’re seeking, rather than requiring the company to shoulder a pool of humans to ensure immediate correctness.
What this means is that LLM companies do not have to hire armies of humans to take the model the “last mile” to perfect accuracy. I think that this fact is not well understood by those who see LLMs in terms of the mistakes that they make.
Pointer from Tyler Cowen.
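A minimal sketch of that human-in-the-loop pattern, assuming a placeholder ask_model function standing in for whatever LLM API is actually used:

```python
# The model proposes an answer, the user critiques it, and the critique is
# folded back into the next prompt - the user, not a paid reviewer, closes
# the accuracy gap.
def ask_model(prompt: str) -> str:
    raise NotImplementedError("replace with a call to your LLM of choice")

def iterate_with_user(task: str, max_rounds: int = 5) -> str:
    prompt = task
    answer = ask_model(prompt)
    for _ in range(max_rounds):
        feedback = input(f"Model says:\n{answer}\nYour feedback (blank to accept): ")
        if not feedback.strip():
            break  # user is satisfied; no "army of humans" was needed
        # The user's correction becomes part of the next prompt.
        prompt = f"{task}\n\nPrevious attempt:\n{answer}\n\nUser feedback:\n{feedback}"
        answer = ask_model(prompt)
    return answer
```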
Robin Hanson writes,

future human-level AIs are not co-existing competing aliens; they are instead literally our descendants. So if your evolved instincts tell you to fight your descendants due to their strangeness, that is a huge evolutionary mistake. Natural selection just does not approve of your favoring your generation over future generations. Natural selection in general favors instincts that tell you to favor your descendants, even those who differ greatly from you.
Sounds persuasive to Robin, I guess.
Sounds like Robin is invoking Natural Selection as a God that declares the highest good, rather than as an objective process of adaptation.
On Timothy B. Lee's comments: like many classical computing theorists, he tends to assume the brain either follows a given behaviour the way a classical computer would, or it doesn't. However, I think we will soon see, with the invention of fault-tolerant quantum computers, that the brain has both classical and quantum properties (similar to most things in nature).
Specifically, those with memory disorders often display vector-like retention - looking at a cat and saying "dog", or looking at a fork and saying "grab your spoon". They have the class of the noun organised, but the accuracy is slightly off due to something that has happened in the synaptic connections (i.e. imagine an unfinished LLM that is still rearranging the connections between parameters). So I would say that, over time and with a greater understanding of the brain, we will probably find that we have both capabilities - we organise words using letters, but we group words in a vector space similar to the structure that LLMs use.