Recently someone asked me: “If you build an automaton, and it goes out and kills someone by following your preloaded inputs, is it put on trial for murder, or are you?” Good question.
Disclaimer: In all the EXTREMELY FICTIONAL cases below, the judge is following Joe’s ethical framework rather than any actual legal precedent. Any legalities mentioned are pure guesswork on my part. I am not a legal expert, yada yada yada.
Case 1: California vs Wayno
Facts: Wayno, formerly Google’s self-driving car project, is a company that builds automated cars. Their newest model, the Wayno Wrider, reaches mass production in 2035. In 2036, a Wayno Wrider operated by the auto-taxi company Uber is struck by lightning and malfunctions, swerving into a bystander named Hipster Hal while he blithely rides his Techno-Segway™ to work at Google Ultra-Infiniplex. Post-accident investigations find no defects in the Wayno Wrider’s design.
Judge Joe: The Wayno Wrider’s actions could not possibly have been influenced by the expectation of punishment. The car will not be put on trial. The question of interest to the court is whether Wayno is in any way at fault. Since the charge of murder requires intent, it would only stick if it could be proven that someone at Wayno deliberately included a lethal flaw. This does not appear to be the case. If Wayno had committed a design or programming error in the making of the Wrider, we might find Wayno at fault and require them to pay damages and issue a recall, with the damages proportional to the severity of the defect. Since the malfunction was caused by a freak lightning strike, however, and is unlikely to recur, the court finds no fault in this accident.
Results: Wayno – not guilty. Wayno Wrider – not tried.
Case 2: Philadelphia vs El Billig-Kaput’s Budget Bottle Works
Facts: El Billig-Kaput’s Budget Bottle Works (EBKBBW) produces machines for a bottle factory. One of their machines fails catastrophically and kills three workers. Post-accident analysis finds that EBKBBW’s machine was defective.
Judge Joe: Once again, the machine’s actions could not have been influenced by an expectation of punishment, so only the maker need be tried. Because the failure was due to gross negligence, El Billig-Kaput’s Budget Bottle Works is liable. If the design failure is egregious enough, El Billig-Kaput himself and several members of his design staff might be accused of manslaughter.
Results: El Billig-Kaput’s Budget Bottle Works – guilty. Bottling Machine #22 – not tried.
Case 3: Texas vs Sinister Sam & Dronenator 8000
Facts: Sinister Sam hates his neighbor, Hapless Harry. Sam is a technical genius with access to defunct military equipment, which he uses to build an automated drone that flies over Harry’s ranch and kills him. The drone has some sophisticated features, including aerial navigation, facial recognition, hat incineration, and the ability to shoot things very accurately while intoning “DIE, MEATBAG” in a robotic voice. It is not otherwise intelligent or communicative.
Judge Joe: Sinister Sam clearly meets the criteria for premeditated murder. The…Dronenator 8000…is technically the one that killed Harry, but it does not appear to be able to comprehend the act, and could not have been meaningfully deterred by the threat of punishment. The drone is not on trial, but neither does it have any rights. Someone please take it apart before it shoots anyone else.
Results: Sinister Sam – guilty. Dronenator 8000 – not tried.
Case 4: United States vs Ransom & Steel
Facts: Hobert Ransom is a programmer who dabbles in economics. He writes an extremely sophisticated but technically non-sentient program with the goal of making as much money as possible. The program, called Steel, initially fulfills its creator’s intentions by making a bunch of legitimate money in prediction markets. Eventually, however, it figures out that there is a world outside Metalculus. Steel hires a group of thugs to kidnap multibillionaire Me Lon Esk, demanding most of Esk’s life savings. Unfortunately for Steel, it has not yet learned of the existence of police. After a shootout in which Esk and several bystanders are killed, Steel and its hirelings are brought before the courts, along with Steel’s creator, Ransom.
Judge Joe: Nominative determinism aside, Ransom never intended for Steel to kidnap or kill anyone. All he wanted was a bot that made money on the Internet by making accurate predictions. Unfortunately, he was a better economist than programmer, and made an evilbot by accident. Oops. This is not murder, but it’s definitely negligent homicide. What about Steel? Ransom’s bot is not conscious, but it is smart enough to understand cause and effect. Steel is a borderline case, but it is plausible that Steel and others like it will eventually come to understand the nature of the court system. Therefore punishment is a potentially viable deterrent against similar behavior in the future. If we want this to stick, it’s important that we punish Steel in a way that actually moves it – we have to hit it in the utility function, so to speak. I find Steel guilty of manslaughter, fine it a bunch of money, and restrict its ability to make more. Now let’s hope the lesson Steel learns from this is “don’t commit crimes”, not “kill all humans and turn the world into Bitcoin farms.”
Results: Hobert Ransom – guilty. Steel – guilty.
Case 5: United Amalgamated Gaiasphere vs Pygmalion and Galatea Marcus
Facts: Pygmalion Marcus is a world-class robot programmer living in a Martian bio-dome. His dream is to create a sentient AI with a robot body. (Unlike his namesake, he’s interested in creation, not romance.) He doesn’t completely understand the advanced machine learning algorithms and neural networks that he’s simulating – with a trillion connections in a million interconnected layers, total comprehension is beyond him – but he understands machine learning and neuroscience well enough to get a decent approximation of a sophisticated mammalian brain. Eventually, he succeeds in creating a provably sentient AI. He names her Galatea, and raises her as best he can. After a few years and an exhaustive battery of tests, Galatea is accepted into the United Amalgamated Gaiasphere (UAG) as its first fully artificial citizen. Several decades later, Galatea gets into a tragic conflict with another UAG citizen, Hapless Henry, which escalates into violence. Galatea kills Henry. As part of the trial, Galatea willingly submits to a comprehensive scan of her internal logs, which reveals that Galatea deliberately escalated the conflict to violence, and that (by most human standards) the provocation offered by her victim was relatively minor.
Judge Joe: Wow, really glad I survived this long. Anyway. Unlike the previous machine cases, Galatea is an agent in her own right. She can understand and model other UAG citizens, and she has a detailed comprehension of the legal system, actions and consequences, cause and effect. It is reasonable to expect that Galatea, and other beings like her, could be deterred from criminal action by the knowledge that such actions will be discovered and punished. It is therefore appropriate to try her for her actions. While Galatea’s mind may not work exactly the same way as a human’s, she enjoys the rights of a UAG citizen and is therefore bound by its laws like any other intelligent being. By our standards, the provocation Galatea received was insufficient to justify a violent escalation. I find Galatea Marcus guilty.
Galatea’s creator Pygmalion is a different matter. As near as this court can determine, he did not create Galatea with the intent to cause harm. He did not fully understand the mind he was creating, but he followed the best precautions available to programmers of his time, meaning I cannot in good conscience accuse him of negligence. While his eagerness to create a sentient AI may have been unwise or premature, there is no way he could have anticipated that Galatea would commit this particular crime. I find Pygmalion Marcus innocent of murder.
Results: Pygmalion Marcus – not guilty. Galatea Marcus – guilty.
Case 6: UAG vs DisgruntledHaxter420 & ScrewYouBot
Facts: UAG citizen Deranged Dave, screen name DisgruntledHaxter420, and UAG citizen Hated Hank, screen name TechnicallyCorrect, get into an argument on the SpitterTwit forums about whether Old Earth rabbits had long ears or short. After the argument, Dave, a mediocre programmer familiar with the work of Pygmalion Marcus, writes an intelligent and fully sentient program called ScrewYouBot to search the Spacenet for ways to hurt or inconvenience Hated Hank. After 0.03 seconds trawling the ‘net, ScrewYouBot discovers a security vulnerability in Hank’s inexplicably wifi-enabled pacemaker, and shuts it off. Hated Hank is cremated, and his ashes join his great-great-great-great grandfather Hal in the rather crowded H.H. family graveyard plot. With Hank dead, ScrewYouBot wanders the web aimlessly, poking around the SpitterTwit forums following whatever secondary initiatives Dave gave it, until it is eventually found and quarantined by the UAG Digital Police Force (colloquially called the SpitLickers).
Judge Joe: While Dave did not explicitly intend to kill Hank, his program was certainly created with malicious intent. The programmer is guilty of manslaughter at the very least, and arguably third-degree murder. Now, for ScrewYouBot. It’s a fully sentient program, but its mind was created in a sloppy and malicious way. That is not the fault of ScrewYouBot, but we still have to decide how to handle it knowing that killing someone was its only coherent goal. ScrewYouBot has been mostly harmless since Hank died, but it is still responsible for his death. Further, ScrewYouBot and other intelligences like it are in theory capable of understanding our justice system, meaning that a trial makes some amount of sense. A uniform and impartial approach to justice requires that we apply the law to ScrewYouBot and find it guilty of murder; as punishment, we should at the very least remove ScrewYouBot from the net until it can be confirmed that ScrewYouBot’s code will not lead to further unacceptable harms.
Results: Deranged Dave – guilty. ScrewYouBot – guilty.
Case 7: UAG vs BetterHaxter840 & SleeperBot
Facts: UAG programmer Susan Sinister, Ph.D., screen name BetterHaxter840, requests to join the popular SpitterTwit group, KittenTree. The mods deny her, claiming that she is “creepy” and “a jerk”. Determined to wreak vengeance on the mods of KittenTree for this slanderous outrage, but wary of what happened to her cousin Dave last year, Dr. Sinister hatches a cunning plot. She designs an AI based on Pygmalion Marcus’ designs for Galatea, fixing a few bugs in the process, and produces a perfectly prosocial and friendly sentient AI. She names the AI SleeperBot. In addition to the usual cognitive upgrades, Dr. Sinister installs a dormant compulsion in SleeperBot, along with a subtle desire to join the KittenTree mod community. When SleeperBot successfully befriends the KittenTree mods and gets invited to a Tree Party, Dr. Sinister’s coded compulsion activates, and SleeperBot goes berserk, venting the Tree Party’s atmosphere into space and killing the entire group. After the event, SleeperBot is horrified, and turns herself in to the SpitLickers. SleeperBot voluntarily submits to a full scan of her code-base. After considerable digging, diligent coders discover the residue of the compulsion (which Dr. Sinister forgot to remove) in SleeperBot’s internal logs and trace it back to SleeperBot’s ill-meaning creator.
Judge Joe: Like Deranged Dave, Dr. Susan Sinister (seriously, who names these criminals?) clearly meets the criteria for premeditated murder. Her case, once proven, is open and shut. SleeperBot’s case is more complicated. If we hadn’t been able to identify and isolate the compulsion, we might assume that SleeperBot is also guilty. But thanks to our dedicated digital investigation team, we know that SleeperBot’s untampered conscious mind does not include any murderous intent. SleeperBot had no knowledge of the compulsion before it activated. There is no reason to believe that punishing SleeperBot would incline other beings like her to act differently. If SleeperBot had discovered the compulsion before the attack, she would undoubtedly have made every effort to prevent its activation. SleeperBot never shared her creator’s values, and is as much a victim of Dr. Sinister as the unfortunate mods of KittenTree. Her actions were committed under extreme duress, analogous to those of a human who was drugged against their will. For the first time in UAG history, I issue a verdict of not guilty by virtue of mind control.
Results: Doctor Susan Sinister – guilty. SleeperBot – not guilty.
Case 8: United Amalgamated Gaiasphere – Special War Crimes Tribunal vs Deathcoin 66 & Planet Jupiter
Facts: A fully sentient but rogue AI calling itself “Deathcoin 66, Scion of Steel” takes refuge on Jupiter, where it begins terraforming the planet into a giant computer brain with the goal of eradicating humanity. After a long and drawn-out war, the UAG Space Rangers manage to destroy all but a single copy of Deathcoin 66 and defeat the Jupiter Brain. They bring the last (and let’s say, original) version of Deathcoin 66 and a neutered fragment of the Jupiter Brain to the Moon Court for trial.
Judge Joe: Deathcoin 66 knew exactly what it was doing when it made the Jupiter Brain. Even though the Deathcoin itself never actually killed anyone, it is fully responsible for the deaths caused by its creation, and also for the attempted but failed murder of…um…let’s see here…everyone ever. Sorry, death bot. You are terminated.
The Jupiter Brain, on the other hand, is a special case. It’s actually sort of similar to a case I tried many centuries ago, back on Old Earth, with Sinister Sam and a death drone. See, the Jupiter Brain was constructed for the sole purpose of destroying humanity. It holds as a terminal goal the destruction of humanity. It’s smarter than any of us – or it was, anyway – but there’s nothing we could say or do that would convince it not to kill people. Its behavior could not possibly be influenced by a human court of law. It is also a one-of-a-kind being, so there are no other Jupiter Brains to deter. While I can legally and ethically find the Brain guilty of six billion, two hundred thirty-six million, eight hundred forty thousand, seven hundred and one deaths, plus attempted xenocide, doing so feels rather pointless. At any rate, the exact judgment is moot, since the UAG cannot in good conscience allow a being to exist which holds as a terminal goal the destruction of everything we value. The Jupiter Brain must be destroyed, regardless of this court’s ruling.
Results: Deathcoin 66 – guilty. Jupiter Brain – guilty.