What If Your Autonomous Car Keeps Routing You Past Krispy Kreme?

As this applies to automated cars and certain drivers, it may be the duty of manufacturers not only to figure out where a car’s driver should go, but also where he or she should not go. In some distant future, if the locations of most people can be pinpointed through GPS and other methods, a robot car could tell when a driver is about to violate a restraining order and refuse to travel there. If manufacturers have the data to connect those dots, they probably should do so when it matters.

And it doesn’t just matter for legal reasons; other factors could be important to users of future wired cars. An owner of a shiny new robot car probably wouldn’t appreciate being deliberately driven, at advertisers’ behest, past fast-food restaurants if she’s on a diet, or by a cluster of bars if she’s a recovering alcoholic, or toward maternity stores if she hasn’t publicly revealed her pregnancy.

It could be that drivers and passengers can instruct cars to avoid certain destinations. Putting aside the question of whether we should be imposed upon like this at all, if the car were to drive to those verboten destinations anyway, that seems clearly wrong. Recall that in Isaac Asimov’s novels, the second law of robotics is to always obey human orders (where they don’t violate the first law, which is to not cause or allow harm to humans).

However, resisting humans is a major point of autonomous cars: We humans are often error-prone and reckless, while algorithms and unblinking sensors can physically drive better than we do in most if not all cases. An automated vehicle is designed precisely to disregard our orders when they are imminently risky. That is to say, refusing human orders is sometimes a feature, not a bug. It’s unclear, then, whether opting out of certain destinations (or opting in) is reason enough for cars to comply with those commands.

 

* * *

 

The app itself is becoming the new killer app. The latest Windows 8 machines mimic the app dashboards on Apple’s iOS and Android mobile phones. And we can expect online applications to be part of future cars, robotic or not. As existing apps on our mobile phones and computers already do, in-car apps will raise a host of legal and ethical dilemmas, privacy chief among them.

The problem I discussed at the beginning was related to advertising, but advertising itself isn’t the problem. At its best, advertising consists of helpful video clips or images that educate you about products and solutions you truly might be interested in. At its worst, ads are annoyances that interrupt your concentration while you’re absorbed in an essay, video, podcast, or video game. Ads can push you to vote one way, or to buy something you don’t need. They could make you into a worse person—or a better person.

So while advertising gets a lot of criticism, ads seem to be a necessary evil if the consumer wants to pay as little as possible. That’s neither here nor there in our discussion; the real problem is the decision to allow a car to be controlled by third parties—directing the route for an advertiser’s interests and not the car owner’s. Advertising inside a wired car is not just about showing you tantalizing stuff; it could be about physically driving you to that stuff. This paradigm shift would make ads even more invasive than critics today might imagine.

More seriously, manufacturers will also need to make hard life-and-death choices in programming autonomous cars, and these decisions should be considered thoughtfully and openly, to ensure a responsible product that millions will buy, ride in, and possibly be injured by. That’s all the more reason to focus on ethics—not just on law, as we’re doing at the Center for Automotive Research at Stanford (CARS)—in steering the future of transportation in the right direction.

 


Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
