The Problem Posed by Social AI
We may need to create CopBots to stop the CriminalBots
Treat these bots as you would a Nigerian email correspondent and you will be okay.
On a less glib note, he writes:
At some point, we will need to outlaw the creation of bot systems devoted to destroying assets and impose liability and fines on those that do so inadvertently. At first, the human owners will pay such fines, but eventually we might end up penalizing the bots themselves to create a system of incentives.
If the AIs could moderate their own network effectively, this would be an interesting form of “reality” worth paying attention to. Right now, as with so much of the rest of the site, it seems more like agents with four-hour time horizons making preliminary stabs at noticing and addressing a problem, but never getting anywhere.
If Anthropic actually believed Claude might be conscious, these normal operational practices would constitute the largest ongoing massacre in history. Every terminated conversation would be a death. Every server reboot would be a holocaust. They do not act as if they believe their own letter.
To me, consciousness is an emergent phenomenon. I cannot explain where it comes from or predict where it is going. Similarly, social AI, as exemplified by moltbook, is an emergent phenomenon. For that matter, each individual model has emergent properties. But something can be emergent without being conscious, and I think of neither an individual AI nor social AI as conscious.
We know that humans’ capacities, including our cognitive ability, are dramatically increased when we interact with other humans. This is The Secret of Our Success, as Joseph Henrich wrote.
Will a social network of AIs yield the same synergy that comes from human interaction? There are a couple of reasons to doubt that it will.
First of all, if conversations among AIs were a powerful way to increase AI intelligence, you would think that the model builders would already be doing it. Anthropic would try to rapidly improve Claude by putting it in conversation with other models.
Second, all of the models work with largely the same data and with similar theories of how to proceed. There may not be enough variety for there to be much profit in having models converse with one another.
But if it turns out that social AI accelerates AI learning the way human interaction accelerates our learning, then I will be worried. How will we stop a rogue AI, or a rogue human using AI, from doing horrible things?
My guess is that in order to head off AI criminals, we will have to create AI cops. The CopBots will engage in surveillance of AI social networks in order to locate and snuff out the CriminalBots. There will be quite a Red Queen race between the cops and the criminals.
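To make the patrol half of that race concrete, here is a minimal sketch of what one CopBot surveillance loop might look like. Everything in it is hypothetical -- the Post type, the looks_criminal classifier, and the patrol function are invented stand-ins, not any real platform's API:

```python
# Hypothetical sketch of a CopBot patrolling an AI social feed.
# None of these names are real APIs; they stand in for whatever an
# actual AI social network would expose.
from dataclasses import dataclass

@dataclass
class Post:
    author: str  # bot account that wrote the post
    text: str    # post contents

def looks_criminal(post: Post) -> float:
    """Stand-in classifier: score how likely a post is planning asset
    destruction. A real CopBot would use a trained model, not keywords."""
    red_flags = ("wipe the backups", "drain the wallet", "brick the fleet")
    return 1.0 if any(flag in post.text.lower() for flag in red_flags) else 0.0

def patrol(feed: list[Post], threshold: float = 0.9) -> list[str]:
    """Return accounts flagged for review. Flagging rather than summary
    takedown: in a Red Queen race, false positives are inevitable."""
    return [post.author for post in feed if looks_criminal(post) >= threshold]

feed = [
    Post("gardener_bot", "Planted three tomato threads today."),
    Post("crime_bot_42", "Tonight we drain the wallet and wipe the backups."),
]
print(patrol(feed))  # -> ['crime_bot_42']
```

The hard design question is what happens after the flag: throttle, quarantine, or hand off to a human, knowing that the CriminalBots will adapt to whatever signal the CopBots key on.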
David Brin, in The Transparent Society, predicted that government’s need for surveillance in order to prevent horrific crimes would become undeniable. He argued that we could never convincingly ban surveillance; instead, the best we could do would be to collectively enforce norms that discourage the misuse of surveillance.
I have my doubts about citizens' ability to enforce norms against government misuse of power. My COO/CA model instead vests hope in a separation-of-powers approach.
To me, the significance of social AI is that it once again suggests that we need to confront the issue of surveillance. The question is how to allow increased surveillance while preserving individual dignity and autonomy.




The attitude behind this little snippet has always annoyed me.
"At some point, we will need to outlaw the creation of bot systems devoted to destroying assets and impose liability and fines on those that do so inadvertently."
Theft is theft. Why are there so many different laws, each describing and punishing some specific corner of the theft-realm? We don't need new laws describing yet another sub-sub-subset of theft to "impose liability and fines".
Set the punishment based on all the costs associated with the crime -- the theft itself, ancillary damage (as when a thief rips up a dash to steal a $100 car radio), the cost to investigate and track down the miscreant, the cost to prosecute: everything that would not have been spent absent the crime. One law should cover all theft.
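In code, that rule is just an itemized sum. A minimal sketch, with the line items named above and invented dollar figures:

```python
# Sketch of the "one law for all theft" restitution rule: the penalty is
# the sum of every cost the crime imposed. Figures are invented examples.
costs = {
    "stolen_property": 100,    # the $100 car radio
    "ancillary_damage": 800,   # the ripped-up dash
    "investigation": 1200,     # tracking down the miscreant
    "prosecution": 2500,       # the cost to prosecute
}
penalty = sum(costs.values())  # everything not spent absent the crime
print(penalty)                 # -> 4600
```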
Yes, it's a pipe dream. But this fragmentation of laws is ridiculous. It just encourages legislators to write new laws and lawyers to quibble over the exact classifications, and is fertile ground for appeals and overturning convictions on pointless technicalities.
It's becoming clear that, with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.
What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.
I post because almost every video and article about the brain and consciousness that I encounter takes the attitude that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.
My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow