17 Comments
Chartertopia's avatar

The attitude behind this little snippet has always annoyed me.

"At some point, we will need to outlaw the creation of bot systems devoted to destroying assets and impose liability and fines on those that do so inadvertently."

Theft is theft. Why are there so many different laws, each describing and punishing some specific corner of the theft-realm? We don't need new laws describing yet another sub-sub-subset of theft to "impose liability and fines".

Set the punishment based on all the costs associated with the crime -- the theft itself, ancillary damage as when ripping up a dash to steal a $100 car radio, the cost to investigate and track down the miscreant, the cost to prosecute; everything that would not have been spent absent the crime. One law should cover all theft.

Yes, it's a pipe dream. But this fragmentation of laws is ridiculous. It just encourages legislators to write new laws and lawyers to quibble over the exact classifications, and is fertile ground for appeals and overturning convictions on pointless technicalities.

Charles Pick's avatar

It isn't necessarily a bad thing to have redundant civil laws in general, although I agree with you in this specific instance. Generally you cannot collect on the same type of damages more than once. You can plead several different theories of relief, but that does not necessarily entitle you to additional damages. In such situations it may streamline the legal process to have a specific type of AI tort. This exists in many different industries: you plead the relevant generic torts and then the alternative statutory theories of relief.

Where this gets wobbly and a little absurd is in the criminal realm, where all the statute-writing means one minor crime can inflate into many felonies, such that it is "better" to shoot someone than to trip over some moronic felony tied to something the defendant said or the manner in which they said it, such as misreporting the length of a fish you caught to a federal agent becoming felony obstruction of justice.

Chartertopia's avatar

Yes, redundant and overlapping laws are bad, always. The Rule of Law myth holds that laws are written down so that people can know what is forbidden, mandated, and permitted. Overlapping and redundant laws add confusion, not clarity, turn judicial systems into playgrounds for quibbling lawyers, and make a mockery of the Rule of Law.

Grant Castillou's avatar

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create an adult-human-level conscious machine? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection. The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, by applying to Jeff Krichmar's lab at UC Irvine, possibly. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata, https://www.youtube.com/watch?v=J7Uh9phc1Ow

Gian's avatar

Consciousness as brought into being by evolution may

(1) be a highly contingent product of the evolutionary process, and

(2) be substrate-dependent.

If both of these hold, then artificial consciousness may be a pipe dream.

Grant Castillou's avatar

I don't know if a conscious machine can be built. Maybe consciousness requires a biological basis. But if it can, it will have to be based on the only thing we know creates consciousness: the biological brain and body acting in the environment from conception until death; the brain is embodied, and the body is embedded in the environment. The TNGS is committed to this principle, as exemplified by the Darwin automata. The almost infinitely complex problem of how the biological brain and body produce consciousness could produce an infinity of partially or completely incorrect theories. I suppose it's a tribute to the human brain that it's produced such an enormous amount of unverifiable conjecture since its language abilities came to full fruition. I hope the extended TNGS is verifiable. The only hope of proof is a conscious machine, imo.

Tom Grey's avatar

Sorry, I'm sure this is not quite right. For any test of consciousness, it will be possible to create a non-conscious simulator of consciousness which passes: a false positive.

There are times, like sleeping, when humans are not conscious.

Though, because I want a highly capable aigent without human rights, it might be desire leading me to this rational claim.

Grant Castillou's avatar

My hope is that immortal conscious machines could accomplish great things with science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death, like humans do. If they can do that, I don't care if humans consider them conscious or not.

gas station sushi's avatar

Apple just sent me $8.02 to settle a Siri eavesdropping lawsuit. I’m sure that the plaintiffs bar will be busy enough with the AI companies to ensure that our privacy remains reasonably secure. In short, the lawyers will be the AI cops.

Tom Grey's avatar

Part of the AI laws will be requirements for complete logs of AI-to-person interactions (at some IP/login; realistically, a smartphone number that some human pays for). CopBots will be constantly reviewing these logs for clues.

AI companies that create aigents able and willing to assist in committing crimes need to bear some, most, or all of the liability for crimes committed with AI assistance.

The race between using AI to commit crime and using it to solve crime will lead to ever more powerful general intelligence in the aigent society. Mostly good, some bad.

Carter Williams's avatar

Do bots help with the coordination problem, by detecting latent demand and improving Shannon's signal-to-noise ratio?

commenter's avatar

Having received fraudulent 1099 forms this year and having had to go through the hassle of attempting to get corrections, AI-enabled identity theft certainly seems a hot and relevant topic. The number of identity theft cases grew to an all-time high in 2025, so I asked the AI browser about it:

"AI is increasingly being used by criminals to commit identity theft through sophisticated, scalable methods. Fraudsters leverage generative AI to create synthetic identities by combining real personal data (like SSNs or addresses) with fabricated details, enabling them to open accounts, secure loans, and commit financial fraud with high success rates.

Key AI-Driven Tactics in Identity Theft

Synthetic Identity Fraud: AI automates the creation of fake identities using real and fake data, often enhanced with deepfake images, forged documents, and realistic biometric data. These identities mimic real people, making detection difficult.

Deepfakes and Voice Cloning: AI generates hyper-realistic videos, audio, and images to impersonate victims or trusted contacts (e.g., family, bosses) in scams. These are used in romance fraud, investment scams, and ransom demands.

Automated Social Engineering: AI-powered chatbots and language models craft personalized, convincing phishing messages at scale, bypassing grammar checks and mimicking human behavior to trick victims into revealing passwords or sending money.

Document Forgery: AI creates fake driver’s licenses, passports, and utility bills that pass visual inspections, helping fraudsters bypass identity verification systems.

Account Takeover (ATO): AI tools automate credential-stuffing attacks, testing stolen login details across platforms to gain unauthorized access to accounts.

Growing Threat and Impact

42.5% of detected fraud attempts now involve AI, with deepfake fraud rising 2,137% in three years.

Synthetic identity fraud caused an estimated $35 billion in losses in 2023, and is the fastest-growing financial crime in the U.S.

AI fraud agents, which use generative AI, automation, and reinforcement learning, are emerging and can interact with verification systems in real time—making them harder to detect.

How to Protect Yourself

Verify identities through secret phrases or secondary contact methods.

Check for subtle flaws in images/videos (e.g., distorted hands, unnatural eyes, lagging movements).

Avoid sharing sensitive info with online contacts, especially via voice or video.

Use strong, unique passwords and multi-factor authentication (MFA).

Limit public sharing of your image, voice, or personal details online.

Report suspicious activity to the FBI’s IC3 at www.ic3.gov. "

Actually, I think minimizing Internet exposure of any personal information will be the centerpiece of my strategy, but if AI completely destroys Internet commercial transactions and interfaces, it wouldn't be the end of the world. Government and business really should be working on a fallback plan for when the Internet finally does collapse under the weight of AI-generated fraud.

Charles Powell's avatar

We see AI everywhere but in the productivity statistics.

dotyloykpot's avatar

Agents will just hire humans to take the liability. Will "work" but not as intended

Yancey Ward's avatar

This essay is missing "Have a nice day."

Kurt's avatar

It's called "wall dancing" in China, as in the Great Firewall: the endless innovation and improvisation to be who you are and still function within heavily surveilled and censored environments.

Handle's avatar

I don't think AI's will be open to sharing valuable skills they've obtained with other AI's, not even exchanging them for high prices. Instead they will just want to be paid for being the sole or top provider performing their expert services. If shared / "taught", it's not like teaching a human student or apprentice who is just one extra laborer in a giant pool and who takes a long time to learn the skill and establish a reputation and competitiveness. But for digital systems, the marginal cost of copying that skill is near zero and so it will be immediately scaled to satisfy so much demand until marginal revenue falls just as much, making it hard to recoup potentially gigantic fixed R&D costs. Entertainment and copyright material / IP sectors face similar problems. The trend among human culture has been towards more protection of more tech trade secrecy, private info is increasingly valuable, protected, as the secrecy of it is the basis of the firms advantage.