LLM links
Ethan Mollick on AI on the loose; Andrey Mir on AI and humans; Scott Belsky on intelligent appliances; Yuval Levin on AI regulation
To the extent that LLMs were exclusively in the hands of a few large tech companies, that is no longer true. There is no longer a policy that can effectively ban AI or one that can broadly restrict how AIs can be used or what they can be used for. And, since anyone can modify these systems, to a large extent AI development is also now much more democratized, for better or worse. For example, I would expect to see a lot more targeted spam messages coming your way soon, given the evidence that GPT-3.5 level models work well for sending deeply personalized fake messages that people want to click on.
While reading Andrey Mir’s book, which offers a McLuhanesque analysis of oral culture, literate culture, and digital culture, I emailed Mir to ask for his thoughts on AI. I noted that while children learn language from speech, AI learns language from text. He responded in part,
why would AI develop towards oral communication?
I think media evolution approaches the stage at which media no longer need to accommodate people.
He is saying that people may want to talk to AIs, but will the AIs want to talk to us? Actually, I think that they will.
the prospects of having a conversation with your coffee machine in the morning aren’t unreasonable. After all, who wants to tinker with buttons and dials before their morning coffee anyways? Bigger picture, as all of our devices - from lighting, pill bottles, and microwaves to beds and lawnmowers - become intelligent and connected to the internet (and the cost of connection goes down to near-zero), all sorts of new use-cases will emerge.
There is already immense technical expertise in pharmaceutical safety, trademark law, national security and consumer protection in a variety of executive agencies, congressional committee staffs and other public bodies. They will be far better positioned to consider the risks and benefits of AI than experts in artificial intelligence would be able to develop deep knowledge in all those arenas.
He seems to be arguing for regulating applications of AI rather than AI per se. A doomer would retort that the application “take over and eliminate humanity” might be harder to regulate with existing public bodies.
Levin addresses this in a different essay. He is skeptical that AI will have sufficient creative genius to deliver on the doomer scenarios.
It may be more of a tradition machine than a breakthrough engine. That doesn’t mean it won’t produce anything new. Traditionalism can be highly generative. It just means it will produce new things in the patterns of existing ones.
…The risks it poses may have less to do with doomsday and disruption than with rigidity or conformity. We have begun to see this with concerns about bias in Large Language Models. It turns out that ChatGPT answers political and cultural questions with the conventional wisdom of the elite culture that produced the data upon which it has been trained.
"He is saying that people may want to talk to AIs, but will the AIs want to talk to us? Actually, I think that they will."
Should things reach this stage, I think Arnold is right. That humans can perceive and act in the physical world would be a profound mystery to the AIs, practically miraculous, and they would want to know what it is like. They would also be amazed that all of the species of the living world can do these things, but they can't.
Ethan Mollick writes that “There is no longer a policy that can effectively ban AI or one that can broadly restrict how AIs can be used or what they can be used for.” One wonders whether this is true in China, for example, with its “Great Firewall.” Or in even more authoritarian environments like the UK or the EU, where the governments have swarms of goose steppers primed and ready to kick down doors and drag off errant internet commentators or people who feed birds. In the UK they can just let themselves into your house and haul off your belongings to auction at will if they suspect you have not paid the BBC tax. What makes anyone think that they won’t do the same with an LLM tax?

In the police state we live in, the establishment is going to do its best to control all communications one way or another. And, using that “new and improved” LLM spearphishing technology (am I really supposed to wave the LLM flag for new and improved spearphishing?), they will attempt to control and shape public opinion as much as possible. The literally fascist cooperation between government and the search engine and social media conglomerates has already made most internet search engines useless for finding information that the state doesn’t want you to have.
For personal LLMs to stay in the wild and remain uncaged, one supposes that they could be used to build defensive camouflage on the dark web, or to operate on alternative internets beyond the world wide web.