Imagine an up-and-coming despot, armed with a database of citizens’ political allegiances, addresses, and photos, who would like to eliminate the opposition. Yesterday’s despot would have needed an army of soldiers to accomplish this task, and those soldiers could be fooled, bribed, or made to lose their cool and shoot the wrong people.
The despots of tomorrow will just buy a few thousand automated gun drones. Thanks to Moore’s Law, which describes the exponential increase in computing power per dollar since the invention of the transistor, a drone with reasonable AI will one day become as affordable as an AK-47. Three or four sympathetic software engineers could reprogram the drones to patrol near the dissidents’ houses and workplaces and shoot them on sight. The drones would make fewer mistakes, they wouldn’t be swayed by bribes or sob stories, and above all, they’d work much more efficiently than human soldiers, allowing the ambitious despot to mop up the detractors before the international community can marshal a response.
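To see how quickly exponential price decay closes that gap, consider a rough back-of-envelope sketch. The two-year halving period is the figure popularly attached to Moore’s Law, and the dollar amounts are illustrative assumptions, not estimates from this argument:

$$P(t) = P_0 \cdot 2^{-t/T}, \qquad T \approx 2\ \text{years}$$

Under those assumptions, a $100,000 armed drone falls below the roughly $1,000 price of a rifle when $2^{t/2} \ge 100$, that is, after $t = 2\log_2 100 \approx 13$ years. The specific numbers matter less than the shape of the curve: exponential price decay turns exotic hardware into commodity hardware within a decade or two.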
Because of the massive increase in efficiency brought about by automation, AI weapons will lower the barrier to entry for deranged individuals looking to perpetrate such atrocities. What was once the sole domain of dictators in control of an entire army will be brought within reach of the moderately wealthy.
Manufacturers and governments interested in developing such weapons may claim that they can engineer proper safeguards to ensure the weapons cannot be reprogrammed or hacked. Such claims should be greeted with skepticism. Electronic voting machines, ATMs, Blu-ray disc players, and even cars speeding down the highway have all been compromised in recent years despite their advertised security. History demonstrates that a computing device eventually tends to yield to a motivated hacker’s attempts to repurpose it. AI weapons are unlikely to be an exception.
* * *
International treaties going back to 1925 have banned the use of chemical and biological weapons in warfare. The use of hollow-point bullets was banned even earlier, in 1899. The reasoning is that such weapons create extreme and unnecessary suffering, and they are especially prone to causing civilian casualties: poison gas drifts wherever the wind carries it, onto soldiers and bystanders alike, and hollow-point bullets expand on impact, inflicting wounds far crueler than needed to stop a combatant. All of these weapons generate indiscriminate suffering and death, and so they are banned.
Is there a class of AI machines equally worthy of a ban? The answer, unequivocally, is yes. If an AI machine can be cheaply and easily converted into an effective and indiscriminate mass killing device, then there should be an international convention against it. Such machines are not unlike radioactive metals: they can be used for reasonable purposes, but we must carefully control them because they can be converted into devastating weapons. The difference is that repurposing an AI machine for destructive ends will be far easier than repurposing radioactive material.