14 Comments

'The AI you are using is the worst and least capable AI you will ever use.'

I wouldn't be so sure about that. Google search was vastly better 15 years ago, and I could find what I specifically wanted quite easily. Now, in large part due to Google's success, the net is filled with junk to such a high degree, and search has been reconfigured over and over to maximize (short-term) revenue, that it is actually very difficult to find non-generic results. There is some (90%+) chance that the current AIs are going to flood the net with even more derivative nonsense, and it is not a given that improvements will outstrip that noise.

I will continue to note that ChatGPT frequently gives inaccurate answers, and many of those inaccuracies are tied to flawed information that already exists. Ask it a question about macroeconomics and it gives Keynesian/Monetarist responses because those are the popular schools of policy (not thought), despite reams of data that at the very least highlight the flaws in that thinking. Until it hits a point where it can filter out noise, these AIs will potentially be sabotaging themselves and spoiling their own training data.


As ChatGPT itself reports:

No, you should not drive with high beams in dense fog. In fact, using high beams in foggy conditions can be very dangerous and reduce visibility even further. High beams are designed to provide better illumination in dark conditions, but in fog, the light from high beams reflects off the water droplets in the air and creates a blinding effect known as "fog glare." This glare can impair your own vision as well as the vision of other drivers on the road.

File under: unintended metaphors


Re Sam Hammond: I don't think I want to take safety tips from a guy whose analogies are unsafe. You don't turn on your high beams in a dense fog; that just reflects bright light back at you, making it harder to distinguish the vague shapes through the fog and creating tunnel vision. Actually it's a great metaphor for all the AI prognosticators who have no idea where the future is going or what it will look like, but who are damn sure going to spend billions (trillions?) of dollars in the areas they think should be funded. Cathie Wood syndrome, anyone? I can't predict the future, but I can put billions of (your) dollars into it!


If only Government had got involved at the start of the Industrial Revolution to organise things and make sure we didn’t get global warming...


re: the issue of regulation.

Obviously libertarians grasp that, ideally, competition is better than regulation. The issue is how that may arise in the context of AI systems. People have objected to the biases of these systems, which suggests the importance of competition to provide the ability to get AIs with different viewpoints, à la the AI pluralism substack:

https://cactus.substack.com/p/announcing-ai-pluralism

Some of the drive for regulation from non-progressives is due to those biases, concern over woke AI. Except, taken further, it isn't just politics where people have different views and preferences. It's religion, preferences on writing coaching, and many other things. Most use of AI to generate words will take place in the office suites (and email programs) that tend to be locked up by the big players.

The problem is that many companies have tried to compete with existing office suites, but MS and Google have those locked up. So the issue is convincing MS and Google to allow AI plug-ins in their software, so that the built-in AIs can be replaced with third-party ones and competition can flourish. There could be AIs with different pros and cons for different tasks, and ideally a market that allows for that and provides regulation-by-market.

Smartphone vendors grasp that they can't provide all the potential utility to their users, so they open things up with app stores that let third parties in; the question is whether the big players can be persuaded to do the same for their office suites and other software by allowing AI plug-ins.
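To make the architecture concrete, here is a minimal sketch, in Python, of the kind of narrow contract an office suite could publish so that the user, not the suite vendor, chooses which AI handles their words. Every name here (WritingAssistant, register, and so on) is invented for illustration; no real Microsoft or Google API is implied.

```python
# Hypothetical sketch only: these names are invented for illustration and
# do not correspond to any real office-suite API.
from abc import ABC, abstractmethod


class WritingAssistant(ABC):
    """The narrow contract a suite could publish for third-party AI vendors."""

    @abstractmethod
    def suggest(self, document_text: str, cursor: int) -> str:
        """Return suggested text for position `cursor` in `document_text`."""


# The suite keeps a registry; the user picks which assistant is active.
_REGISTRY: dict[str, WritingAssistant] = {}


def register(name: str, assistant: WritingAssistant) -> None:
    _REGISTRY[name] = assistant


def active_assistant(preferred: str) -> WritingAssistant:
    return _REGISTRY[preferred]


class EchoAssistant(WritingAssistant):
    """Trivial stand-in for a real third-party model."""

    def suggest(self, document_text: str, cursor: int) -> str:
        words = document_text[:cursor].split()
        return words[-1] if words else ""


register("echo", EchoAssistant())
print(active_assistant("echo").suggest("regulation by market", 20))
```

The point of the design is that the suite only needs to define the interface once; any number of vendors can then compete behind it, which is exactly how app stores work.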

This page has more about it, and provides arguments against regulation as being, in essence, regulation of speech. It starts with a concerning issue:

https://preventbigbrother.com/

"Most written communication is created using computers, with the office software suites used to create documents provided almost entirely by either Google (60.23%) or Microsoft (39.66%), who also provide the top 2 search engines people use to find the information others have created. Each of those companies is working to add AI assistance to their document creation software, search engines and the email programs people use to communicate. Unless something changes, soon there may be 2 AI vendors guiding the creation of most of the written words humanity produces.

Although these systems have not been around long enough for much study to be done on the impact AI assistance has on the writing process, one early study on “Co-Writing with Opinionated Language Models Affects Users’ Views” found that when the AI systems themselves had opinions on a topic they subtly nudged people’s views:

" Using the opinionated language model affected the opinions expressed in participants’ writing and shifted their opinions in the subsequent attitude survey. […] Further, based on our post-task survey, most participants were not aware of the model’s opinion and believed that the model did not affect their argument. "

Their experiments were very short writing tests, and yet the systems were still able to steer people during that brief time. Nudges can be difficult to spot if, for instance, the system does not always lean the same direction but just steers one way a little more often than the other. If bias leads an AI merely not to mention certain facts, it can go unseen.

Imagine the potential impact of nudges over the course of months or years, like dental braces nudging teeth into position over a long period of time. When the process can go on for a long period of time rather than merely a few moments, the influence may be subtler and less noticeable even to someone looking for it. AI systems from 2 vendors may nudge viewpoints of the vast majority of the world’s written words, regardless of whether their creators intended them to.

[...]

Does an entity that monitors the words produced by billions of people and steers their beliefs sound like the accidental rise of Big Brother? China is already working to ensure any AI systems there will only produce government-approved opinions. Eventually AI systems will be teaching children, who are even more easily influenced, not merely adults.

[Further down you can read what AI thinks the person who warned us about Big Brother, George Orwell, would say about AI regulation.]

AIs in the free world are not quite like Big Brother. There was no escape from being under the control of Big Brother. In contrast, in theory you can choose which of these Little Brothers you wish to use, the ones best trained to serve your needs rather than the needs of a Big Brother. There are limited options currently, but there are paths toward making that a realistic approach rather than wishful thinking about a problematic situation."
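The quoted point about lopsided nudges, that a system steering one way slightly more often than the other is hard to spot, is easy to demonstrate with a toy simulation. This is a minimal sketch assuming a made-up assistant that leans one direction 52% of the time; nothing here models any real product.

```python
import math
import random


def simulate_leans(n: int, p_lean: float = 0.52) -> int:
    """Count how often a hypothetical assistant steers its favored way."""
    return sum(random.random() < p_lean for _ in range(n))


def z_score(k: int, n: int) -> float:
    """Normal-approximation test statistic against an unbiased 50/50 null."""
    return (k - 0.5 * n) / math.sqrt(0.25 * n)


random.seed(0)
for n in (30, 100_000):
    k = simulate_leans(n)
    print(f"n={n}: leaned favored way {k} times, z={z_score(k, n):.2f}")
```

Over 30 interactions the z-score is typically well under 2, indistinguishable from a fair coin; over a hundred thousand, the same 2-point lean produces a z-score around 12. That is the braces-on-teeth point: individually invisible nudges become overwhelming at scale and duration.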


"Lee is suggesting things that agencies could do that have a high probability of making us better off and a low probability of making us worse off. My guess is that highly visible legislative action on AI would have the opposite characteristic.'

I'm sympathetic to Kling's pessimism and the idea that government always screws up, and yet somehow a lot of things mostly work only because government doesn't totally screw them up. Maybe most people in government are far from optimal, but some are pretty darn amazing. Even those often don't make a difference, but again, somehow our government works pretty well despite all its flaws. It seems worthwhile remembering that, and maybe thinking about how it works as well as it does.


A cybersecurity agency tasked with finding and helping fix vulnerabilities in critical infrastructure would be a terrific idea. We could call it... the National Security Agency.

Seriously, Bruce Schneier has been banging the drum for years about how the NSA has all the tools to be a great defensive force, and pointing out that its focus on offensive capabilities is shortsighted, especially when it hides vulnerabilities it discovers so it can use them offensively. He's still right, and I hope Tim Lee saying the same thing makes it more likely that policymakers listen to him, but I'm not holding my breath.


Be grateful that they don’t call for a Moonshot.


As pointed out above, ChatGPT doesn’t deliver anything new. It adds no insights. It’s the search equivalent of a Monkey’s Paw. Search has always been that, but ChatGPT adds a veneer of plausibility for naive users.


"I think a lot of the improvements will come not so much from having a better LLM but from having apps built on top of the LLM. For example, it would not save me much time to summarize Ethan’s latest post, Scott Alexander’s latest post, and so on, one by one. But having an LLM that can summarize all of the recent posts from every substack I subscribe to—that would be terrific time-saver."

This implies advertiser-based models will accelerate their transition to subscription-based models.
