GPT/LLM links 5/23

Sam Hammond wants to go big on alignment research; Congress wants to regulate; Ethan Mollick on LLMs as interns; Gary Marcus on AI threats

Arnold Kling
May 23, 2023

Sam Hammond writes,

The basic philosophy behind a Manhattan Project for AI safety is proactive rather than reactive. The cost of training large models will continue to plummet as data centers running next generation GPUs and TPUs come online. Absent an implausible, world-wide crackdown on AI research, AGI is coming whether we like it or not. Given this reality, we need to be proactively accelerating alignment and interpretability research. While the future may be hidden behind a dense fog, we could at least turn on our high-beams.

Oy. Sam, have you met our 21st-century government? The only Manhattan Project I want to see is a Manhattan Project to figure out how to get people to stop calling for Manhattan Projects.

Timothy B. Lee reports on the AI hearing where Sam Altman and others testified. Lee concludes,

It might make more sense to wait a year or two and see how AI technology evolves before passing a major bill to regulate AI.

In the meantime, I think the best thing Congress could do is to fund efforts to better understand the potential harms from AI. Earlier this month, the National Science Foundation announced the creation of seven new National Artificial Intelligence Research Institutes focused on issues like trustworthy AI and cybersecurity. Putting more money into initiatives like this could be money well spent.

I’d also love to see Congress create an agency to investigate cybersecurity vulnerabilities in real-world systems. It could work something like the National Transportation Safety Board, the federal agency that investigates plane crashes, train derailments, and the like. A new cybersecurity agency could investigate whether the operators of power plants, pipelines, military drones, and self-driving cars are taking appropriate precautions against hackers.

Lee is suggesting things that agencies could do that have a high probability of making us better off and a low probability of making us worse off. My guess is that highly visible legislative action on AI would have the opposite characteristic.

Ethan Mollick writes,

giving the AI context and constraints makes a big difference. So you are going to tell it, at the start of any conversation, who it is: You are an expert at generating quizzes for 8th graders, you are a marketing writer who writes engaging copy for social media campaigns, you are a strategy consultant. This will not magically make the AI an expert in these things, but it will tell it the role it should be playing and the context and audience to engage with.
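
To make Mollick's role-setting advice concrete, here is a minimal sketch of what it looks like when calling a chat model through an API. This is my illustration, not something from his post; the model name and prompts are placeholders, and it assumes the openai Python package (v1 or later) with an API key in the environment.

    # Minimal sketch: assign the model a role via the system message.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            # The role Mollick describes goes in the system message:
            {"role": "system",
             "content": "You are an expert at generating quizzes for 8th graders."},
            # The actual request goes in the user message:
            {"role": "user",
             "content": "Write a five-question quiz on the causes of World War I."},
        ],
    )

    print(response.choices[0].message.content)

The same pattern works in the chat interface itself: the role framing is simply the first thing you tell the model at the start of the conversation.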

He wisely writes,

The AI you are using is the worst and least capable AI you will ever use.

I think a lot of the improvements will come not so much from having a better LLM but from having apps built on top of the LLM. For example, it would not save me much time to summarize Ethan’s latest post, Scott Alexander’s latest post, and so on, one by one. But having an LLM that can summarize all of the recent posts from every substack I subscribe to—that would be terrific time-saver.
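
A rough sketch of what that kind of app could look like, assuming each Substack exposes an RSS feed at /feed (which Substack publications do) and reusing a chat model for the summaries. The feed list, model name, and summarize helper are hypothetical, just to show the plumbing; it assumes the feedparser and openai Python packages.

    # Sketch: summarize the latest post from each subscribed Substack.
    import feedparser
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Example subscription list; Substack publications expose RSS at <site>/feed.
    feeds = [
        "https://arnoldkling.substack.com/feed",
        "https://www.oneusefulthing.org/feed",
    ]

    def summarize(title: str, body: str) -> str:
        """Ask the model for a three-sentence summary of one post."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[
                {"role": "system",
                 "content": "You summarize blog posts in three sentences."},
                {"role": "user", "content": f"Title: {title}\n\n{body}"},
            ],
        )
        return response.choices[0].message.content

    for url in feeds:
        feed = feedparser.parse(url)
        if feed.entries:
            latest = feed.entries[0]
            print(latest.title)
            print(summarize(latest.title, latest.get("summary", "")))
            print()

The value here is in the plumbing around the model (fetching the feeds, batching the posts, delivering one digest), not in the model itself.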

Interviewed by Yascha Mounk, Gary Marcus says,

We need to have the reasoning capacity and the ability to represent explicit information of symbolic AI. And we need the learning from lots of data that we get from neural networks. Nobody's really figured out how to combine the two—I think, in part, because there's been almost a holy war in the field between the people following these two approaches. There's a lot of bitterness on both sides. If we're going to get anywhere, we're going to need to build some kind of reconciliation between these two approaches. 

I would add that we may need more than just those two approaches. I do think that if we just use the neural networks of LLMs, we will asymptote way short of general intelligence.
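
Purely as an illustration of what combining the two approaches can mean (a toy example of mine, not anything Marcus proposes): let a neural model translate a question into symbolic form, and let a symbolic engine do the exact reasoning. The llm_translate function below is a hypothetical stand-in for a language-model call, and the symbolic side uses the sympy package.

    # Toy hybrid: a language model translates a word problem into an equation,
    # and a symbolic engine (sympy) does the exact solving.
    from sympy import Eq, solve, sympify

    def llm_translate(question: str) -> str:
        # In a real system this would call a language model and ask it to emit
        # an equation; hard-coded here so the sketch runs without an API key.
        return "3*x + 7 = 22"

    def symbolic_solve(equation_text: str):
        left, right = equation_text.split("=")
        return solve(Eq(sympify(left), sympify(right)))

    question = "Three times a number plus seven is twenty-two. What is the number?"
    print(symbolic_solve(llm_translate(question)))  # -> [5]

The neural side handles the messy natural language; the symbolic side guarantees the arithmetic, which is one simple version of the division of labor Marcus describes.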

Overall, I was impressed by this interview. Mounk asks good clarifying questions, and Marcus comes off well, in my view.


Substacks referenced above:

Persuasion: “Will Humanity Survive AI?” (Yascha Mounk interviews Gary Marcus)

Understanding AI: “Congress shouldn't rush into regulating AI” (Timothy B. Lee)

Second Best: “A Manhattan Project for AI Safety” (Samuel Hammond)
14 Comments
Tom Cullis
May 23 · Liked by Arnold Kling

'The AI you are using is the worst and least capable AI you will ever use.'

I wouldn't be so sure about that. Google search was vastly better 15 years ago, and I could find what I specifically wanted quite easily. Now, in large part due to Google's success, the net is filled with so much junk, and search has been reconfigured over and over again to maximize (short-term) revenue, that it is actually very difficult to find non-generic results. There is some (90%+) chance that the current AIs are going to flood the net with even more derivative nonsense, and it is not a given that improvements will outstrip that noise.

I will continue to note that ChatGPT gives inaccurate answers frequently, and many of those inaccuracies are tied to the flawed information that already exists. Ask it a question about macroeconomics and it gives Keynesian/Monetarist responses because those are the popular schools of policy (not thought), despite reams of data that at the very least highlight the flaws in that thinking. Until it hits a point where it can filter out noise, these AIs will potentially be sabotaging themselves and spoiling their own training data.

Dennis P Waters
May 23

As ChatGPT itself reports:

No, you should not drive with high beams in dense fog. In fact, using high beams in foggy conditions can be very dangerous and reduce visibility even further. High beams are designed to provide better illumination in dark conditions, but in fog, the light from high beams reflects off the water droplets in the air and creates a blinding effect known as "fog glare." This glare can impair your own vision as well as the vision of other drivers on the road.

File under: unintended metaphors
