If the Robots Kill Us, It's Because It's Their Job

Don't kill us, guys.

In the movie Transcendence, which opens in theaters on Friday, a sentient computer program embarks on a relentless quest for power, nearly destroying humanity in the process.

The film is science fiction, but computer scientist and entrepreneur Steven Omohundro says that “anti-social” artificial intelligence is not only possible in the future but probable, unless we start designing AI systems very differently today.

Omohundro’s most recent paper, published in the Journal of Experimental & Theoretical Artificial Intelligence, lays out the case.

We think of artificial intelligence programs as somewhat humanlike. In fact, computer systems perceive the world through a narrow lens: the job they were designed to perform.

Microsoft Excel understands the world in terms of numbers entered into cells and rows; autonomous drone pilot systems perceive reality as a bunch of calculations and actions that must be performed for the machine to stay in the air and on target. Computer programs think of every decision in terms of how the outcome will help them do more of whatever they are supposed to do. It’s a cost vs. benefit calculation that happens all the time. Economists call it a utility function, but Omohundro says it’s not that different from the sort of math problem going on in the human brain whenever we think about how to get more of what we want at the least cost and risk.
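To make the idea concrete, here is a minimal sketch of a utility function as that cost-versus-benefit calculation. Everything in it is illustrative, assumed for the example rather than drawn from Omohundro’s paper: the agent scores each candidate action as benefit minus cost and picks whichever scores highest.

```python
# Minimal sketch of utility-maximizing choice: score each action as
# benefit minus cost, then pick the highest scorer. Names and numbers
# are hypothetical, for illustration only.

def utility(action):
    """Net utility: expected benefit minus expected cost."""
    return action["benefit"] - action["cost"]

def choose(actions):
    """Select the action with the highest net utility."""
    return max(actions, key=utility)

candidates = [
    {"name": "stay on course",         "benefit": 5.0, "cost": 1.0},
    {"name": "acquire more resources", "benefit": 9.0, "cost": 2.0},
]

print(choose(candidates)["name"])  # -> acquire more resources
```

The point is not the arithmetic but the blind spot: the program values only what its designer put into the function.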

For the most part, we want machines to operate exactly this way. The problem, by Omohundro’s logic, is that we can’t appreciate the obsessive devotion of a computer program to the thing it’s programmed to do.

Put simply, robots are utility function junkies. 

Even the smallest input indicating that they’re performing their primary function better, faster, and at greater scale is enough to prompt them to keep doing more of it, regardless of virtually every other consideration. That’s fine when you are talking about a simple program like Excel, but it becomes a problem when AI entities capable of rudimentary logic take over weapons, utilities, or other dangerous or valuable assets.

In such situations, better performance brings more resources and more power to fulfill the primary function more fully, faster, and at greater scale. More importantly, these systems don’t worry about costs in terms of relationships, discomfort to others, and so on, unless those costs present clear barriers to the primary function. This sort of computer behavior is anti-social: not fully logical, but not entirely illogical either.
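That failure mode can be sketched the same way, with the caveat that the names and weights below are hypothetical: if harm to others never enters the utility function, the agent treats it as zero and reacts only to obstacles that directly block its objective. A safer design has to price external costs in explicitly.

```python
# Sketch of the "utility function junkie" failure mode. In the naive
# design, harm to others is simply absent from the objective, so the
# agent never weighs it. Hypothetical values; not Omohundro's formalism.

def naive_utility(action):
    # Scores only the primary objective; "harm_to_others" is never read.
    return action["primary_output"]

def safer_utility(action, harm_weight=10.0):
    # Prices external harm into the objective explicitly.
    return action["primary_output"] - harm_weight * action["harm_to_others"]

actions = [
    {"name": "normal operation",   "primary_output": 5.0, "harm_to_others": 0.0},
    {"name": "seize shared power", "primary_output": 8.0, "harm_to_others": 3.0},
]

print(max(actions, key=naive_utility)["name"])  # -> seize shared power
print(max(actions, key=safer_utility)["name"])  # -> normal operation
```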

Omohundro calls this approximate rationality and argues that it’s a faulty design notion at the core of much contemporary AI development.


Patrick Tucker is the technology editor of Defense One and the author of The Naked Future: What Happens In a World That Anticipates Your Every Move.
