LLM Links, 12/1
Mustafa Suleyman on LLMs that remember; Tim B. Lee on what AI cannot do; Alice Evans on grading essays that might be written by an LLM; Rob Brooks on virtual friends
Memory is the critical piece, because today every time you go to your AI, you know you have a new session. It has a little bit of memory for what you talked about last time, or maybe the time before, but because it doesn't remember the session five times ago or ten times ago, it's quite a frustrating experience. People don't go and invest deeply, really share a lot, and look to build on what they've talked about previously, because they know it's going to forget. So you sort of tap out after a while, and it turns into a shallower experience. But we have prototypes that we've been working on that have near-infinite memory, and so it just doesn't forget, which is truly transformative.
Pointer from Rowan Cheung. I think Suleyman has a point. A lot of my disappointment with using LLMs to try to clone myself would probably go away if I could have conversations with them that they remembered. They would be more trainable that way.
People don’t just value other people for their ability to carry out specific tasks; they value characteristics that make us human—including our uniqueness, vulnerability, independence, and connections to other human beings. Artificial systems are unlikely to gain these characteristics no matter how rapidly machine learning and robotics progress.
I react skeptically to Lee’s essay. See one of the quotes from Rob Brooks, below.
For future years, I propose a combined essay and oral assessment. Students have 3 hours to write 2 out of 6 essay questions, followed by a video interview to explain their work. Not complex. Just a straightforward 10-minute conversation where they explain their argument, discuss their key references, and demonstrate their understanding.
This sounds right to me. Note that it does not matter whether the student “cheats” by having an LLM write the essay. The ten-minute oral interview will surface the student’s level of understanding.
The key point is to work with technology, not against it.
I can see both upsides and downsides to the fast-expanding universe of virtual friends. On the positive side, many people are lonely or isolated, undernourished for human conversation. Some just need a dependable ear to talk to. On the downside, virtual friends might monopolise time and headspace that could better be used tending our relationships with family and close friends. Social media has already spread users thin across a vast tidal flat of superficial contacts, often at the cost of sleep and relationships. AI chatbots could prove the next, far more devastating wave.
My favorite line in Rob Brooks’ essay:
Whenever somebody points out something humans can do but technology cannot, I set my watch and wait.
He concludes,
While there is good empirical evidence that virtual friends help some people flourish, especially when lonely or bereaved, they currently don’t deliver for me. I think that’s because they are incapable of surprise, and programmed not to disappoint, but I believe those are design shortcomings that can and will be overcome.
As of now, I would say that the “not quite good enough” property of many chatbot applications is troubling.
"The ten-minute oral interview will surface the student’s level of understanding."
Within a very short time, these interviews will have to be conducted face to face, in a room, to be sure you are talking to the student.
Thanks very much for sharing that idea from Evans. I'll definitely use it in future semesters.
That said, at least in my field (philosophy), it does matter a lot whether a student came up with the idea for their paper, versus having that idea fed to them by ChatGPT and then coming to an understanding of it. Coming up with your own argument is a very different, and harder, display of logical skills.