ChatGPT/LLM links, 3/28
Sam Hammond on linguistic theory; The Zvi with two long posts; Bryan Caplan made a bad bet; Ethan Mollick saves time
I see the success of LLMs as vindicating the use theory of meaning, especially when contrasted with the failure of symbolic approaches to natural language processing. Transformer architectures were the critical breakthrough precisely because they provided an attention mechanism for conditioning on the context of an input sequence, allowing a model to infer meaning from full sentences instead of one word at a time.
An interesting take, although by using phrases like “the linguistic turn” it sounds like Hammond spent too much time around the philosophy department faculty lounge.
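Hammond's claim about attention can be made concrete. Below is a minimal, illustrative sketch of scaled dot-product self-attention, the core transformer operation he is referring to; the toy shapes, inputs, and function name are my own assumptions, not anything from his post.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Scaled dot-product attention (Vaswani et al., 2017).

    Each output row is a weighted mixture of the value vectors, so every
    token's new representation is conditioned on the whole input sequence
    at once, rather than on one neighboring word at a time.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise similarities, shape (n, n)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax across the sequence
    return weights @ V                              # shape (n, d_v): context-aware vectors

# Toy self-attention: 4 tokens with 8-dimensional embeddings.
# (Real models apply learned projections to get Q, K, V; here Q = K = V = x.)
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))
out = scaled_dot_product_attention(x, x, x)
print(out.shape)  # (4, 8) -- each token now carries sequence-wide context
```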
To me a key question is, are we willing to slow down (and raise costs) by a substantial factor to mitigate these overreliance, error and hallucination risks? If so, I am highly optimistic. I for one will be happy to pay extra, especially as costs continue to drop. If we are not willing, it’s going to get ugly out there.
Later:
A guide by Ethan Mollick on how teachers can use AI to help them do a better job teaching, as opposed to ways that students can interact with AI to help them learn better. Looks handy and useful, although I still expect that students using AI directly is where the bulk of the value lies.
That is where I would place my bets, also.
The post strikes me as too much for anyone to read. Think of it as Zvi taking notes on his massive reading. It’s like one of my own “links” posts, on steroids. Which is a legitimate use of a blog post, but be forewarned.
Zvi’s other long post is on the risks of GPT-4 plug-ins.
Without going too far off track, quite a lot of AI plug-ins and offerings lately are following the Bard and Copilot idea of ‘share all your info with the AI so I have the necessary context’ and often also ‘share all your permissions with the AI so I can execute on my own.’
I have no idea how we can be in a position to trust that. We are clearly not going to be thinking all of this through.
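To see why that trust question is hard, consider a hypothetical, schematic pair of permission grants; every field name below is invented for illustration and does not correspond to any real plug-in schema.

```python
# Hypothetical plug-in permission grants (field names invented for illustration).

narrow_grant = {
    "scopes": ["calendar:read"],          # the AI can look, but not act
    "data_shared": ["event titles only"],
    "acts_autonomously": False,           # every action needs user approval
}

broad_grant = {
    "scopes": ["calendar:write", "email:send", "files:read_all"],
    "data_shared": ["full mailbox", "all documents"],   # 'share all your info'
    "acts_autonomously": True,                          # 'execute on my own'
}

# Auditing what broad_grant permits is far harder than auditing narrow_grant,
# and in practice few users will read either before clicking accept.
```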
I publicly bet Matthew Barnett that no AI would be able to get A’s on 5 out of 6 of my exams by January of 2029. Three months have passed since then. GPT-4 has been released. Bet On It reader Collin Gray has kindly used GPT-4 to re-run the same test.
To my surprise and no small dismay, GPT-4 got an A. It earned 73/100, which would have been the fourth-highest score on the test.
It is important to understand that it can take a long time for a computer’s ability at a task to go from 0 to x, but then take very little time to go from x to 10x. That is why some of us are very excited about the chatbots.
I decided to run an experiment. I gave myself 30 minutes and tried to accomplish as much as I could during that time on a single business project. At the end of 30 minutes I would stop. The project: to market the launch of a new educational game. AI would do all the work; I would just offer directions.
And what it accomplished was superhuman. I will go through the details in a moment, but in those 30 minutes it: did market research, created a positioning document, wrote an email campaign, created a website, created a logo and “hero shot” graphic, made a social media campaign for multiple platforms, and scripted and created a video. In 30 minutes.
“It is important to understand that it can take a long time for a computer’s ability at a task to go from 0 to x, but then take very little time to go from x to 10x.”
I spent a few weeks focusing on non-coding work, and when I came back to code again, GPT was noticeably better at it. This is already a major labor saver for people who take advantage of it.
I managed the entire Zvi article, but it took me about four or five sittings to get to the end!
I am teaching a new module after Easter and am trying to collate as much information as possible on using AI effectively, and to encourage students to do the same. Exciting times!