GPT/LLM links 5/23
Sam Hammond wants to go big on alignment research; Congress wants to regulate; Ethan Mollick on LLMs as interns; Gary Marcus on AI threats
The basic philosophy behind a Manhattan Project for AI safety is proactive rather than reactive. The cost of training large models will continue to plummet as data centers running next generation GPUs and TPUs come online. Absent an implausible, world-wide crackdown on AI research, AGI is coming whether we like it or not. Given this reality, we need to be proactively accelerating alignment and interpretability research. While the future may be hidden behind a dense fog, we could at least turn on our high-beams.
Oy. Sam, have you met our 21st-century government? The only Manhattan Project I want to see is a Manhattan Project to figure out how to get people to stop calling for Manhattan Projects.
Timothy B. Lee reports on the AI hearing where Sam Altman and others testified. Lee concludes:
It might make more sense to wait a year or two and see how AI technology evolves before passing a major bill to regulate AI.
In the meantime, I think the best thing Congress could do is to fund efforts to better understand the potential harms from AI. Earlier this month, the National Science Foundation announced the creation of seven new National Artificial Intelligence Research Institutes focused on issues like trustworthy AI and cybersecurity. Putting more money into initiatives like this could be money well spent.
I’d also love to see Congress create an agency to investigate cybersecurity vulnerabilities in real-world systems. It could work something like the National Transportation Safety Board, the federal agency that investigates plane crashes, train derailments, and the like. A new cybersecurity agency could investigate whether the operators of power plants, pipelines, military drones, and self-driving cars are taking appropriate precautions against hackers.
Lee is suggesting things that agencies could do that have a high probability of making us better off and a low probability of making us worse off. My guess is that highly visible legislative action on AI would have the opposite characteristic.
Ethan Mollick writes,

giving the AI context and constraints makes a big difference. So you are going to tell it, at the start of any conversation, who it is: You are an expert at generating quizzes for 8th graders, you are a marketing writer who writes engaging copy for social media campaigns, you are a strategy consultant. This will not magically make the AI an expert in these things, but it will tell it the role it should be playing and the context and audience to engage with.
He wisely writes,
The AI you are using is the worst and least capable AI you will ever use.
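Mollick's role-and-context advice amounts to prepending a system message before the user's request. A minimal sketch with the OpenAI Python SDK, where the model name and the role text are illustrative assumptions:

```python
def build_messages(role_description: str, user_prompt: str) -> list[dict]:
    """Prepend a system message telling the model what role to play."""
    return [
        {"role": "system", "content": role_description},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are an expert at generating quizzes for 8th graders.",
    "Write a five-question quiz on the water cycle.",
)

# The actual call would look something like this (requires an API key,
# so it is not run here; "gpt-4o" is a placeholder model name):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(model="gpt-4o", messages=messages)
```

The same role text can be reused across a whole conversation, since the system message persists as the first element of the message list.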
I think a lot of the improvements will come not so much from having a better LLM but from having apps built on top of the LLM. For example, it would not save me much time to summarize Ethan’s latest post, Scott Alexander’s latest post, and so on, one by one. But having an LLM that can summarize all of the recent posts from every substack I subscribe to—that would be a terrific time-saver.
We need to have the reasoning capacity and the ability to represent explicit information of symbolic AI. And we need the learning from lots of data that we get from neural networks. Nobody's really figured out how to combine the two—I think, in part, because there's been almost a holy war in the field between the people following these two approaches. There's a lot of bitterness on both sides. If we're going to get anywhere, we're going to need to build some kind of reconciliation between these two approaches.
I would add that we may need more than just those two approaches. I do think that if we just use the neural networks of LLMs, we will asymptote way short of general intelligence.
Overall, I was impressed by this interview. Mounk asks good clarifying questions, and Marcus comes off well, in my view.