What If Your Autonomous Car Keeps Routing You Past Krispy Kreme?

The future of marketing that can take you places. Literally.
Patrick Lin

On a future road trip, your robot car decides to take a new route, driving you past a Krispy Kreme Doughnut shop. A pop-up window opens on your car’s display and asks if you’d like to stop at the store. “Don’t mind if I do,” you think to yourself. You press “yes” on the touchscreen, and the autonomous car pulls up to the shop.

Wait, how did the car know that you might want an original glazed doughnut? Because it has data on your driving habits, and you’re a serial offender when it comes to impulsive snacking. Your car is also linked to your online accounts at home, and you had recently “liked” Krispy Kreme’s Facebook page and visited its website. 

Is this future scenario convenient—or creepy? It’s one thing if a car’s driver-drowsiness detection system (which exists today) sees that you’re nodding off and suggests coffee. But to make your automated car divert from its usual course because some advertiser paid it to do so, well, that sounds like a mini-carjacking.

Whatever you think of it, this future may be coming up on the road ahead. At the Consumer Electronics Show (CES) earlier this month in Las Vegas, automakers announced deals to deliver online services or in-car apps to web-enabled cars of tomorrow. And where there are free or cheap online services, there’s online advertising—that train is never late. 

We don’t know what that advertising might look like: It could literally steer your future car, or it could be more familiar, such as streaming ads across your windshield in auto-driving mode (maybe too distracting in manual-driving mode). But because ad revenue is still the dominant e-business model, it’s a safe bet that advertising will be coming to a future car near you. After all, Google’s acquisition of Nest—maker of “smart” thermostats and other appliances—last week appears to be its first step toward owning the Internet of Things. If the technology giant is leaping past the firewall of your personal computer and into the rest of your home, why not also into your car? Apple co-founder Steve Jobs reportedly had hoped to bring an “iCar” to market, essentially a huge iPhone with wheels.

Could advertisers really influence the route taken by a self-driving car? It seems plausible, and legal, in at least some circumstances. Say there are multiple routes to your destination. Some may be shorter in distance but longer in travel time; others may be roughly equivalent. In those cases, there’s no obviously “right” route to take, but advertiser money could be a “plus factor” that’s just enough to tip the driving algorithm in the advertiser’s direction.
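To make that concrete, here is a minimal sketch, in Python, of how such a “plus factor” could be wired into route selection. It is purely illustrative: the Route class, the sponsorship_bonus field, and the 10-percent tolerance are hypothetical assumptions for this article’s scenario, not any automaker’s or mapping provider’s actual routing logic.

    # Hypothetical sketch: sponsorship acts only as a tie-breaker among
    # routes whose travel times are already close to the fastest option.
    from dataclasses import dataclass

    @dataclass
    class Route:
        name: str
        travel_minutes: float
        sponsorship_bonus: float = 0.0  # the paid "plus factor" (illustrative)

    def choose_route(routes: list[Route], tolerance: float = 0.10) -> Route:
        """Pick a route, letting sponsorship tip the scales only among
        routes within `tolerance` (here, 10 percent) of the fastest time."""
        fastest = min(r.travel_minutes for r in routes)
        candidates = [r for r in routes
                      if r.travel_minutes <= fastest * (1 + tolerance)]
        # Among near-equivalent routes, prefer the highest sponsorship bonus,
        # breaking any remaining ties by shorter travel time.
        return max(candidates, key=lambda r: (r.sponsorship_bonus, -r.travel_minutes))

    options = [
        Route("Highway", travel_minutes=22.0),
        Route("Past the doughnut shop", travel_minutes=23.5, sponsorship_bonus=0.05),
        Route("Scenic detour", travel_minutes=35.0, sponsorship_bonus=0.50),
    ]
    print(choose_route(options).name)  # prints "Past the doughnut shop"

The design choice doing the ethical work here is the tolerance: in this sketch, sponsorship never overrides a clearly better route, it only breaks ties among routes that are already nearly equivalent, which is exactly the “no obviously right route” condition described above.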

This practice wouldn’t seem to be a big inconvenience for the car’s passengers, as long as the detour doesn’t add much extra time or distance to their trip. Some taxi drivers and hotel concierges are known to accept kickbacks from restaurants, casinos, strip clubs, and other establishments to steer business their way, so a version of this already happens today. But even if it isn’t illegal, the practice raises ethical questions and underscores the need for transparency in a world run by algorithms most of us don’t understand.

 

More Ethical Potholes

Privacy is already a chief worry about in-car apps, and about robotics more generally, which some predict will be the next battleground for civil liberties. The doughnut scenario above speaks to that fear. In-car apps could also make distracted driving worse, as one hilarious video predicts. But there are other, less obvious problems to think about, too:

A couple of weeks ago, a Massachusetts man was arrested after his Google+ account allegedly emailed invitations, without his knowledge, to everyone in his address book, including an ex-girlfriend who had a restraining order against him. Something similar could happen with robot cars, such as one driving a registered sex offender right past a school when he isn’t supposed to be within 2,000 feet of one. Who would be to blame: the human behind the wheel, or the self-driving car?

As one automotive vice-president unwisely pointed out at CES, “We know everyone who breaks the law, we know when you’re doing it. We have GPS in your car, so we know what you're doing. By the way, we don’t supply that data to anyone.” This raises the issue of whether capability implies responsibility: Are you morally obligated to act on information that could prevent serious harm to someone?  For instance, if an intelligence agency collects data that strongly suggest an impending terrorist attack, it seems wrong not to warn the public or try stopping the attack.


Patrick Lin is the director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo; a visiting associate professor at Stanford's School of Engineering; and an affiliate scholar at Stanford Law School. He is the lead editor of Robot Ethics and the co-author of What Is Nanotechnology and Why Does It Matter? and Enhanced Warfighters: Risk, Ethics, and Policy.
