LLM links
Ethan Mollick on AI on the loose; Andrey Mir on AI and humans; Scott Belsky on intelligent appliances; Yuval Levin on AI regulation
To the extent that LLMs were exclusively in the hands of a few large tech companies, that is no longer true. There is no longer a policy that can effectively ban AI, or one that can broadly restrict how AIs can be used or what they can be used for. And, since anyone can modify these systems, to a large extent AI development is also now much more democratized, for better or worse. For example, I would expect to see a lot more targeted spam messages coming your way soon, given the evidence that GPT-3.5 level models work well for sending deeply personalized fake messages that people want to click on.
While reading Andrey Mir’s book that discusses McLuhanesque analysis of oral culture, literate culture, and digital culture, I emailed Mir to ask him for thoughts on AI. I noted that while children learn language from speech, the AI learns language from text. He responded in part,
why would AI develop towards oral communication?
I think media evolution approaches the stage at which media no longer need to accommodate people.
He is suggesting that while people may want to talk to AIs, the AIs may not want to talk to us. Actually, I think that they will.
the prospects of having a conversation with your coffee machine in the morning aren’t unreasonable. After all, who wants to tinker with buttons and dials before their morning coffee anyways? Bigger picture, as all of our devices - from lighting, pill bottles, and microwaves to beds and lawnmowers - become intelligent and connected to the internet (and the cost of connection goes down to near-zero), all sorts of new use-cases will emerge.
There is already immense technical expertise in pharmaceutical safety, trademark law, national security and consumer protection in a variety of executive agencies, congressional committee staffs and other public bodies. They will be far better positioned to consider the risks and benefits of AI than experts in artificial intelligence would be to develop deep knowledge in all those arenas.
He seems to be arguing for regulating applications of AI rather than AI per se. A doomer might retort that the application “take over and eliminate humanity” would be harder to regulate with existing public bodies.
Levin addresses this in a different essay. He is skeptical that AI will have sufficient creative genius to deliver on the doomer scenarios.
It may be more of a tradition machine than a breakthrough engine. That doesn’t mean it won’t produce anything new. Traditionalism can be highly generative. It just means it will produce new things in the patterns of existing ones.
…The risks it poses may have less to do with doomsday and disruption than with rigidity or conformity. We have begun to see this with concerns about bias in Large Language Models. It turns out that ChatGPT answers political and cultural questions with the conventional wisdom of the elite culture that produced the data upon which it has been trained.
"He is saying that while people may want to talk to AI’s, but will they want to talk to us? Actually, I think that they will."
Should things reach this stage, I think Arnold is right. That humans can perceive and act in the physical world would be a profound mystery to the AIs, practically miraculous, and they would want to know what it is like. They would also be amazed that all of the species of the living world can do these things, but they can't.
Imagine a young boy growing up and learning from an AI mentor that has always been right about every question he has asked it. As a result, this boy has come - over say a decade - to trust and respect the AI, maybe even more so than his own parents in some way that’s hard for us to comprehend right now.
Now imagine that this boy, now a teenager, wants to learn more about politics. His parents are devoutly progressive. He asks the AI a question about some controversial political topic, and the AI provides an answer that contradicts his parents’ narrative.
He now faces a difficult choice. He doesn’t want to disagree with the AI, for he has come to trust and respect it like a wise mentor. He doesn’t want to disagree with his parents because he loves them, and disagreeing with them on this particular topic would be so harmful to his relationship with them that it might sever it.
Whom does he side with, the AI or his parents?
Complicating this situation is that he yearns for love and respect from the AI, but he wonders whether the AI actually loves him. Will he be ashamed to disagree with the AI?