19 Comments
Chartertopia:

The attitude behind this little snippet has always annoyed me.

"At some point, we will need to outlaw the creation of bot systems devoted to destroying assets and impose liability and fines on those that do so inadvertently."

Theft is theft. Why are there so many different laws, each describing and punishing some specific corner of the theft-realm? We don't need new laws describing yet another sub-sub-subset of theft to "impose liability and fines".

Set the punishment based on all the costs associated with the crime -- the theft itself, ancillary damage as when ripping up a dash to steal a $100 car radio, the cost to investigate and track down the miscreant, the cost to prosecute; everything that would not have been spent absent the crime. One law should cover all theft.

Yes, it's a pipe dream. But this fragmentation of laws is ridiculous. It just encourages legislators to write new laws and lawyers to quibble over the exact classifications, and is fertile ground for appeals and overturning convictions on pointless technicalities.

Charles Pick:

It isn't necessarily a bad thing to have redundant civil laws in general, although I agree with you in this specific instance. Generally you cannot collect on the same type of damages more than once. You can plead several different theories of relief, but that does not necessarily entitle you to additional damages. In such situations it may streamline the legal process to have a specific type of AI tort. This exists in many different industries: you plead the relevant generic torts and then the alternative statutory theories of relief.

Where this gets wobbly and a little absurd is in the criminal realm, where all the statute-writing means that one minor crime can inflate into many felonies. It becomes "better" to shoot someone than to trip over some moronic felony related to something the defendant said, or the manner in which they said it, such as misreporting the length of a fish you caught to a federal agent becoming felony obstruction of justice.

Chartertopia:

Yes, redundant and overlapping laws are bad, always. The Rule of Law myth holds that laws are written down so that people can know what is forbidden, mandated, and permitted. Overlapping and redundant laws add confusion, not clarity, turn judicial systems into playgrounds for quibbling lawyers, and make a mockery of the Rule of Law.

Grant Castillou:

It's becoming clear that with all the brain and consciousness theories out there, the proof will be in the pudding. By this I mean: can any particular theory be used to create a machine with adult-human-level consciousness? My bet is on the late Gerald Edelman's Extended Theory of Neuronal Group Selection (TNGS). The lead group in robotics based on this theory is the Neurorobotics Lab at UC Irvine. Dr. Edelman distinguished between primary consciousness, which came first in evolution and which humans share with other conscious animals, and higher-order consciousness, which came only to humans with the acquisition of language. A machine with only primary consciousness will probably have to come first.

What I find special about the TNGS is the Darwin series of automata created at the Neurosciences Institute by Dr. Edelman and his colleagues in the 1990s and 2000s. These machines perform in the real world, not in a restricted simulated world, and display convincing physical behavior indicative of the higher psychological functions necessary for consciousness, such as perceptual categorization, memory, and learning. They are based on realistic models of the parts of the biological brain that the theory claims subserve these functions. The extended TNGS allows for the emergence of consciousness based only on further evolutionary development of the brain areas responsible for these functions, in a parsimonious way. No other research I've encountered is anywhere near as convincing.

I post because on almost every video and article about the brain and consciousness that I encounter, the attitude seems to be that we still know next to nothing about how the brain and consciousness work; that there's lots of data but no unifying theory. I believe the extended TNGS is that theory. My motivation is to keep that theory in front of the public. And obviously, I consider it the route to a truly conscious machine, primary and higher-order.

My advice to people who want to create a conscious machine is to seriously ground themselves in the extended TNGS and the Darwin automata first, and proceed from there, possibly by applying to Jeff Krichmar's lab at UC Irvine. Dr. Edelman's roadmap to a conscious machine is at https://arxiv.org/abs/2105.10461, and here is a video of Jeff Krichmar talking about some of the Darwin automata: https://www.youtube.com/watch?v=J7Uh9phc1Ow

Gian:

Consciousness as brought into being by evolution may

(1) be a highly contingent product of the evolutionary process, and

(2) be substrate-dependent.

If these two hold, then artificial consciousness may be a pipe dream.

Roger Sweeny:

I think 100% human consciousness in a machine is a pipe dream. But a machine consciousness that is similar, that's something else.

As in so many things, I think we are led astray by assuming, without really thinking about it, that there is one and only one "consciousness" -- that there is some Platonic ideal of "consciousness" and that we are an instantiation of it.

Grant Castillou:

I don't know if a conscious machine can be built. Maybe consciousness requires a biological basis. But if it can, it will have to be based on the only thing we know creates consciousness: the biological brain and body acting in the environment from conception until death; the brain is embodied, and the body is embedded in the environment. The TNGS is committed to this principle, as exemplified by the Darwin automata. The almost infinitely complex problem of how the biological brain and body produce consciousness could produce an infinity of partially or completely incorrect theories. I suppose it's a tribute to the human brain that it has produced such an enormous amount of unverifiable conjecture since its language abilities came to full fruition. I hope the extended TNGS is verifiable. The only hope of proof is a conscious machine, imo.

Tom Grey:

Sorry, I'm sure this is not quite right. For any test of consciousness, it will be possible to create a non-conscious simulator of consciousness that passes it -- a false positive.

There are times, like sleeping, when humans are not conscious.

Though, because I want a highly capable aigent without human rights, it might be desire leading me to this rational claim.

Grant Castillou:

My hope is that immortal conscious machines could accomplish great things with science and technology, such as curing aging and death in humans, because they wouldn't lose their knowledge and experience through death the way humans do. If they can do that, I don't care whether humans consider them conscious or not.

Kurt:

It's called "wall dancing" in China, as in the Great Firewall: the endless innovation and improvisation required to be who you are and still function within heavily surveilled and censored environments.

gas station sushi:

Mongolian AI is going to be next level in terms of its destructiveness. A sleeping giant will finally awaken to resume the Asian throne.

gas station sushi:

Apple just sent me $8.02 to settle a Siri eavesdropping lawsuit. I'm sure the plaintiffs' bar will be busy enough with the AI companies to ensure that our privacy remains reasonably secure. In short, the lawyers will be the AI cops.

carter2099:

> How will we stop a rogue AI, or a rogue human using AI, from doing horrible things?

> My guess is that in order to head off AI criminals, we will have to create AI cops.

This is just an abstraction of cybersecurity today. The same tech exists for both the red and the blue team: it's always been a cat-and-mouse game.

Tom Grey:

Part of the AI laws will be requirements for complete logs of AI-person interactions (at some IP/login; realistically, a smartphone number that some human pays for). CopBots will constantly review these logs for clues.

AI companies that create aigents able and willing to assist in committing crimes need to bear some, most, or all of the liability for crimes committed with AI assistance.

The race between using AI to commit crime and using it to solve crime will lead to ever more powerful general intelligence in the aigent society. Mostly good, some bad.
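The log-review idea above can be illustrated with a minimal sketch: a "CopBot" pass that scans agent activity logs for entries matching simple risk patterns. Everything here -- the log format, field names, and patterns -- is invented for illustration; a real system would need far richer detection than keyword matching.

```python
# Hypothetical sketch: a CopBot pass over agent activity logs.
# All log fields and risk patterns below are made up for illustration.
import re

RISK_PATTERNS = [
    re.compile(r"purchase.*(exploit|malware)", re.IGNORECASE),
    re.compile(r"impersonat", re.IGNORECASE),
]

def flag_suspicious(log_lines):
    """Return (line_number, line) pairs that match any risk pattern."""
    flagged = []
    for i, line in enumerate(log_lines, start=1):
        if any(p.search(line) for p in RISK_PATTERNS):
            flagged.append((i, line))
    return flagged

logs = [
    "agent=a17 phone=+1-555-0100 action=search query='weather'",
    "agent=a17 phone=+1-555-0100 action=purchase item='exploit kit'",
    "agent=b02 phone=+1-555-0101 action=email subject='impersonate CEO'",
]
print(flag_suspicious(logs))  # flags the 'exploit kit' and 'impersonate' entries
```

Note how each log line is tied back to a phone number a human pays for -- that link is what would let a flagged entry be escalated to an accountable person.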

Carter Williams:

Do bots help with the coordination problem? Detecting latent demand. And improving Shannon's SNR?

Charles Powell:

We see AI everywhere but in the productivity statistics.

dotyloykpot:

Agents will just hire humans to take the liability. It will "work", but not as intended.

Yancey Ward:

This essay is missing "Have a nice day."

Handle:

I don't think AIs will be open to sharing valuable skills they've obtained with other AIs, not even in exchange for high prices. Instead they will just want to be paid for being the sole or top provider of their expert services. If a skill is shared or "taught", it's not like teaching a human student or apprentice, who is just one extra laborer in a giant pool and who takes a long time to learn the skill and establish a reputation and competitiveness. For digital systems, the marginal cost of copying that skill is near zero, so it will immediately be scaled to satisfy demand until marginal revenue falls just as far, making it hard to recoup potentially gigantic fixed R&D costs. The entertainment and copyright/IP sectors face similar problems. The trend in human culture has been toward more protection of tech trade secrets: private info is increasingly valuable and protected, as its secrecy is the basis of the firm's advantage.