Chris Urmson led Google’s self-driving car team from its early days all the way until the company shed its Google skin and emerged under the Alphabet umbrella as Waymo, the obvious leader in driverless cars.

But though Urmson pushed the organization far enough up the technological mountain to see the possibility that Waymo would be the first to commercially deploy automated vehicles, he did not make it to the promised land.

Instead, after current Waymo CEO John Krafcik took control of the enterprise, Urmson left in December of 2016. After a few months pondering his next move, he cofounded Aurora, a new self-driving car start-up, with Sterling Anderson, who’d launched Autopilot at Tesla, and Drew Bagnell, a machine-learning expert who’d been at Uber.

When the company came out of stealth in early 2017, it was greeted with something like awe. This was a fearsome new team in the self-driving car space. Last month, they raised $90 million in their first venture round from Greylock Partners and Index Ventures. LinkedIn founder (and Greylock partner) Reid Hoffman also joined their board.

Recently, I sat down with Urmson to talk about his new company, the state of the self-driving car business, Ford Escorts, and why the cost of all those sensors on self-driving cars doesn’t really matter.

What was most surprising is that Urmson’s vision for implementing self-driving car technology has not really changed, but his view of the industry has. Waymo plans to own and operate its own service, contracting with car companies to manufacture the vehicles for its fleet. Aurora plans to partner with the big car companies to provide the tech for them to build their own legions of driverless vehicles. Why build your own car service, when you could sell your technology to all of them?

Aurora, then, is a bet that scaling up self-driving car technology will be as complex, fraught, and expensive as developing it in the first place, so the first company to deploy them (i.e. Waymo) won’t necessarily be the long-term market winner. And if there’s anyone who should be able to set the odds on that wager, it’s Urmson.


Alexis Madrigal: When you left Google, I was stunned because I thought you were going to go all the way with it.

Chris Urmson: Yeah. It was time. I think if you’d asked me a year before I left, I would have said I’d be there indefinitely. But it was time.

Madrigal: Why?

Urmson: At the end of the day, it wasn’t as much fun anymore. Like most people, I do my best work when I’m having a good time. The team deserved better than what they were getting out of me. They brought John [Krafcik] in and he’s who they picked to run that and build the company around. And that’s great for them. It just wasn’t a good fit for me.

Madrigal: Was at least part of it that you had the most valuable experience of basically anybody in the technology industry?

Urmson: No. When I left, I didn’t know what I was going to do. There was some appeal to go lie on a beach for a bit. There is a lot of cool stuff, whether it is flying cars, or the melding of computer science and biotechnology. And I’d spent the last 12 years of my life or so working on self-driving cars, so it felt like maybe it was time to go check out what else was out there. I spent three to four months going through that process. I met an immense number of people. And after a few months, I realized that there was a unique opportunity to build a new company.

I was able to find a few great people to found it with. Sterling had been at Tesla for a while. He’s got a great pedigree. MIT Ph.D., spent a couple of years at McKinsey, and then was at Tesla and actually launched stuff. It’s not quite the same as a self-driving car but he launched a car with the Model X, then launched Autopilot.

Drew, I’ve known since 1999, going on 19 years now. We went to graduate school together. He’s one of these people in machine learning who has been applying it to robotics since before it was cool. We’d never really worked together but we had a lot of common friends. He was at Uber and frankly was ready for a change.

Madrigal: Is there anything to be said about the melding of the different approaches to self-driving cars by Google, Tesla, and Uber?

Urmson: That actually is part of what we see as the secret sauce. We get to bring these different experience bases and try to pick the best from each. Can Tesla ship stuff? It’s real hardware and we’ve been able to build a great team of people who have some of that experience and understand what it takes to qualify something and do that in a modern approach to an automotive system.

And Drew obviously spent some time at Uber, but I really think of him as a machine-learning guru, particularly robotics and machine learning. Because when you’re doing machine learning about the web, there’s an infinite amount of data there.

Madrigal: So why found a new company after you’d spent so long building up Google/Waymo’s head start?

Urmson: It was the right moment in time because of where the technology is and where the automotive ecosystem is. A few years ago, I don’t think the automotive community was ready to do something different. They’ve evolved over 100 years. They have this supply chain that is incredibly capable. And that mostly had worked.

But this is a new technology. It requires a new set of people, a new set of skills. And it seems like the industry is ready, partly because of the immense pressure they are under: the combination of environmental impact and the need to go to electrification; the impact of connectivity—not just OnStar—but also ride-hailing: Uber, Lyft, and Didi. That’s really information connectivity enabling the car to be used in a new way. And then there are automated vehicles and driver-assistance technologies. These three technologies are converging on the space at the same time, really driving the industry to think about, “How do we deal with this?”

It felt like there was an opening for a new company, Aurora, to bring this deep experience we have, this understanding of the problem, start with a clean sheet, and then go out and work with these companies in the spirit of not disrupting them, but working with them. Because it turns out that it’s really, really hard to make a car. We all take it for granted, but they are actually kind of miraculous.

Madrigal: So that would be the key strategic difference from what you are doing and what Google/Waymo are doing. Are there technical differences in the approach you are taking?

Urmson: I think if you look at it from 10,000 feet, then no. We’re using lasers and radars and cameras and we’re doing motion planning, perception, control. We have software infrastructure. So, at that level, no. It doesn’t matter who you are. You are using these sensors and software. But what really matters in the space is being able to get to the last 10 percent, the last 1 percent, the last 0.001 percent.

Madrigal: The Zeno’s paradox of the self-driving car problem.

Urmson: Right. You go get a couple of graduate students together, you get a car, you download ROS, and you can probably get a self-driving car driving around a parking lot within six months.

The challenge, of course, is in the details, and that’s where at Aurora, what we’re thinking about is, if we’re going to engineer this from a clean sheet of paper today, let’s not focus on demoware and having a car driving around a parking lot as quickly as possible. Let’s understand how to get to something that is efficient and safe to be out on the road.

Madrigal: What would be an example of the difference in your approach?

Urmson: An example would be the way we are applying machine learning in the motion planning and the perception system in a combined way.

The classic way you engineer a system like this is that you have a team working on perception. They go out and make it as good as they can and they get to a plateau and hand it off to the motion-planning people. And they write the thing that figures out where to stop or how to change a lane, and it deals with all the noise that’s in the perception system because it’s not seeing the world perfectly. It has errors. Maybe it thinks it’s moving a little faster or slower than it is. Maybe every once in a while it generates a false positive. The motion-planning system has to respond to that.

So the motion-planning people are lagging behind the perception people, but they get it all dialed in and it’s working well enough—as well as it can with that level of perception—and then the perception people say, “Oh, but we’ve got a new push [of code].” Then the motion-planning people are behind the eight ball again, and their system is breaking when it shouldn’t. You end up with this challenging leapfrog problem, blocking one team or the other. The motion-planning people don’t want perception to change because they just got their system working. But you’re not gonna get there if perception doesn’t improve.

The way we’re engineering the system is one where we’re applying machine learning in both places. We’ll be able to take the output of whatever the most recent spin of perception is, automatically retune the motion-planning system to that new perception and be able to move them forward more rapidly together.

It sounds almost obvious, but the art is in what are the interfaces between the two so that you can allow the algorithms to cooperate properly.
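The coupled retuning Urmson describes can be illustrated with a toy example. In this hypothetical Python sketch (the names and the error model are illustrative, not drawn from Aurora’s actual system), the planner’s safety margin is re-fit automatically from the measured error of whatever perception module currently ships, so a new perception release no longer means hand-retuning the planner:

```python
import random

def make_perception(speed_noise):
    """Stand-in perception module: reports an obstacle's speed with Gaussian noise."""
    def perceive(true_speed):
        return true_speed + random.gauss(0.0, speed_noise)
    return perceive

def tune_planner_margin(perceive, samples=2000, true_speed=10.0):
    """Re-fit the planner's safety margin to the current perception module by
    measuring its empirical error on known cases, instead of hand-tuning the
    planner after every perception release."""
    errors = [abs(perceive(true_speed) - true_speed) for _ in range(samples)]
    return 1.5 * max(errors)  # cover the worst observed error, plus headroom

random.seed(0)
noisy_perception = make_perception(speed_noise=2.0)
better_perception = make_perception(speed_noise=0.5)

# When perception improves, the planner's margin tightens automatically,
# instead of one team waiting on the other.
assert tune_planner_margin(better_perception) < tune_planner_margin(noisy_perception)
```

The point of the sketch is the interface: the planner consumes a measured error model rather than hard-coded assumptions about perception, so the two modules can move forward together.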

Madrigal: There has been this massive change in the ubiquity of machine learning since Google’s self-driving car efforts started. So does Waymo really take full advantage of that? I put this question to Dmitri Dolgov, Waymo’s VP of engineering, when I was working on a story about them last year. And his answer was basically—because Google is an undisputed leader in machine learning—“Who do you think would have been on this first in the whole world?”

Urmson: Way to go, Dmitri! That’s not a bad answer.

Madrigal: But it still seems to me, to your point about starting from scratch, that they’d probably do things differently starting now than starting in 2009.

Urmson: You would. And this is in no way a knock against Dmitri. That is an amazing team and they are doing great work. And they are clearly way out ahead of everybody. But you would. When you have a certain set of architectural precepts that everything is structured around, you are smart and you iterate and you change, but there are bones that are there. We get to say: Knowing everything we do about where machine learning is, the availability of cloud computation, and how hard the problem really is, let’s set up to tackle those problems from day one.

It doesn’t mean that we’re going to get there before Waymo does, but what it does mean is that we’ll be able to cover the ground more quickly. And that we’ll be able to help our partners bring something safe and ultimately much more robust to market.

Madrigal: As you’ve worked anew through this set of problems, has there been anything that was really hard back in the day that now you’re just like: Shit, we did that in a month?!

Urmson: Take object tracking, for example, one of the places where we’ve been applying machine learning. Very quickly, we’ve been able to get versions of that up and running. That’s exciting. And that’s a function of the ecosystem and the world we live in. TensorFlow wasn’t a thing when we started at Google.

Madrigal: Another “Waymo-way” precept was deciding not to treat self-driving technology as a form of driver assistance. I personally have heard you make the case against the handoff from car to driver.

Urmson: Still believe it! That’s not to say you can’t have a steering wheel in a vehicle and you can’t have the vehicles drive when they want to. But the distinction that I would make is this: The car should never require the person in the driver’s seat to drive. That handoff back to the driver is the hard part.

If you want to drive and enjoy driving, God bless you, go have fun, do it. But if you don’t want to drive, it’s not okay for the car to say, “I really need you in this moment to do that.”

People talk about what are called level-three systems. This idea that it’ll drive and then it’ll give you notice that you should come back. It turns out that if you don’t respond to the notice, it still has to do the right thing, so at that point, it’s effectively a very limited level-four system. And to do that, the complexity of implementing it is high enough that the sensor suite is gonna get pretty expensive.

Madrigal: How are you benchmarking your progress here?

Urmson: Right now, we’re really about building it right. Our partners would like to see a 2020 or 2021 kind of time frame. So, we’re moving as quickly as we can to support that. At that time frame, we’re talking tens of thousands of vehicles, which is huge compared to the thousandish-maybe vehicles that are around today. But that will just be the beginning of the deployment, when we think about impact in the world. That was part of the thinking with Aurora: it’s gonna take so many years to get the technology to work, and it takes a similar number of years to build the cars that the technology is going to come into. So, if we can find partners that will develop the two in parallel, then we can go out there and have the scale impact that we want more quickly than others will be able to.

Even if Waymo has the technology—imagine that it was just done today—they still need time to get it to scale and they need deep automotive partnerships to make that happen.

Madrigal: The automotive companies have tended to say, “It’s nice that Google can put $100,000 worth of sensors on a car, but we’re talking about delivering automotive technology to the masses.” Is that the key problem for you as the makers of the suite of technology?

Urmson: I don’t buy that argument, for two reasons. In a ride-hailing or a public-transit business model, the cost of the equipment on the car doesn’t matter. If it is $10,000 or $20,000 or $50,000, it’ll work out. The economics will work. And at the same time, there is this false equivalence where the cost of $100,000 of equipment on a car today is being equated to the cost of equipment on a fielded, scale-deployed vehicle. Go look at the prototype builds of any production car. Pick the least expensive car that you might buy. I don’t know what that is in the market today.

Madrigal: Ford Escort.

Urmson: Don’t knock Ford Escorts! That was my first car.

Madrigal: Mine too!

Urmson: A little blue Ford Escort station wagon.

Madrigal: I had the ZX2.

Urmson: You had the sporty one.

Madrigal: That’s actually dorkier, though.

Urmson: You might be right. But when they built the first 50 of those, those were probably between $250,000 and $500,000 apiece. Going through the manufacturing process, the design for manufacturability, the supply-chain management process, it crushes down to $12,000 a car or whatever it is.

The same will happen with the other elements. Take radars, for example. You could probably buy an automotive radar today for $50, if you were an OEM buying a million of them for a model run. When they were making the first of those, I guarantee you they were $20,000 to $50,000 apiece.

When people talk about this, you have to look at what we would call “should-cost” pricing. Take a laser off the shelf and look at the parts that are in it. Sometimes, if it is a fiber laser, getting the ytterbium-doped fiber is expensive. But if you look at a laser-diode LIDAR system, there’s nothing in there that should cost much of anything. Laser diodes are pennies to dimes apiece. The APDs are pennies to dollars apiece.

And those are small-volume prices; each will drop an order of magnitude if you order an order of magnitude more of them. The cost of these things will collapse when there is actually volume behind them.
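Urmson’s volume argument is, in effect, a learning curve. A toy “should-cost” sketch in Python, assuming (purely as an illustration, not a figure from the interview) that unit cost falls about 30 percent with each doubling of cumulative volume, reproduces the kind of collapse he describes for radar:

```python
import math

def should_cost(prototype_cost, prototype_volume, volume, progress_ratio=0.7):
    """Wright's-law sketch: unit cost multiplies by `progress_ratio` every time
    cumulative volume doubles. The 0.7 ratio is an illustrative assumption."""
    doublings = math.log2(volume / prototype_volume)
    return prototype_cost * progress_ratio ** doublings

# First automotive radars at ~$20,000 apiece in a run of 10,
# re-priced at a million-unit model run:
unit_cost = should_cost(20_000, 10, 1_000_000)  # lands in the neighborhood of $50
```

Under that assumed curve, a $20,000 prototype built in tens comes out around $50 per unit at a million-unit scale, which is roughly the collapse Urmson is pointing at.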

Madrigal: What are people not thinking about with self-driving cars that they should be?

Urmson: The hard question is how this technology will ultimately be used and what the deeper impact is. The smartphone has been around for about a decade. If you had looked at its implications 10 years ago, I don’t think we would have called it. I don’t think anybody saw Uber or Lyft.

Madrigal: I have a little spreadsheet with different technological predictions, like one for VR and one for lab-grown meat and I have one for self-driving cars. And I track how they play out over time. The weird thing about self-driving cars is that I went through the predictions all again about six months ago and events have pretty much kept pace with your predictions over the last seven to eight years.

Urmson: Good for us.

Madrigal: Of course, there is a whole raft of new predictions that have come out in the last year, and we’ll have to see how those go, especially because they are coming from a much broader set of players than were making those initial predictions. The earlier players were all coming primarily out of your “coaching tree,” so to speak, from the DARPA-challenge days. All those people had a shared sense of where the technology was, how it was gonna work, and the areas that needed development.

Urmson: I do think that’s one of the interesting things happening. Over the last year and a half, two years, there is a more diverse set of people involved, good and bad. Because you’re right, a lot of the early work grew out of the DARPA challenges. That was the only place you could get people who had experience.

Now, there’s money. So people say: That looks interesting, let me go play with it. That breeds new innovation and I think that’s cool.

Madrigal: So those were things I was wondering. What are you thinking about?

Urmson: I think about how you build a healthy company: how we do that in a world with an incredibly competitive recruiting market, how we do it with a better understanding of social inequality, how we do it as we face questions around increasing automation. How do we not lose the benefits—the million-plus people lost every year on the roads, the time savings, the access—how do we get that part without an incredible social disruption? That’s one of the things I’m worried about. I have not gotten far enough to have a good answer yet, but it’s certainly on my mind.