It turns out that the basics of getting from one place to another, under ideal conditions, are not that difficult. Some hobbyist drones can fly through a set of waypoints on their own. Others can home in on a signal coming from the ground. But these capabilities are more in the realm of autopilot than autonomy: They simply hold a bearing, altitude, and speed. It’s kind of like cruise control in the sky. And it’s a huge leap from cruise control to self-driving cars, just as it is from autopilot to self-flying vehicles.
But what is hard is dealing with the thousands of unexpected scenarios and “edge cases” that would inevitably crop up if these systems were deployed at scale. It’s the sum of how the vehicles handle all those difficult situations that adds up to a reliable technology.
The analogy to Google’s self-driving car efforts is clear here: It’s not that hard to build software that can drive a car on the freeway or even around Mountain View and deal with 99 percent of the things that happen.
But what about that one percent?
The core of these big, long-term development programs is finding all the possible edge cases, learning how to deal with them, and coming up with safety procedures for the moments when the robot doesn’t know what to do.
For its self-driving cars, Google keeps a massive database of every instance in which a human operator had to take control of a car. Engineers can simulate what would have happened if the human had not tagged in, and try out different software approaches to teaching the system how to react in the cases where it would, in fact, have made an error. Any time they change the system’s logic, they test the alterations against that whole database to make sure the new fix hasn’t broken something else.
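In software terms, that is a regression suite built from real-world failures. Here is a minimal sketch of the idea in Python, with toy data standing in for the logged sensor streams; every name and number is illustrative, and none of this is Google’s actual tooling:

```python
# Hypothetical regression harness: replay every logged human takeover
# against a candidate version of the driving logic.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Disengagement:
    event_id: str
    sensor_log: List[float]  # stand-in for the recorded sensor stream

def replay(event: Disengagement, policy: Callable[[List[float]], str]) -> bool:
    """Simulate the event as if no human had tagged in; True means safe.

    'Safe' here is a toy check: the policy must brake whenever an
    obstacle reading in the log exceeds a threshold."""
    action = policy(event.sensor_log)
    obstacle_close = max(event.sensor_log) > 0.8
    return action == "brake" if obstacle_close else True

def regression_suite(policy, database: List[Disengagement]) -> List[str]:
    """Replay the whole database; return IDs of events the policy fails."""
    return [e.event_id for e in database if not replay(e, policy)]

# Toy database of two logged human takeovers.
db = [
    Disengagement("evt-001", [0.2, 0.5, 0.9]),  # an obstacle got close
    Disengagement("evt-002", [0.1, 0.2, 0.3]),  # benign
]

# A candidate logic change is accepted only if it passes every replay.
def cautious(log): return "brake" if max(log) > 0.7 else "continue"
assert regression_suite(cautious, db) == []
```

The point is the final assert: no change to the logic ships unless it still handles every recorded takeover.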
Project Wing will probably adopt the same approach, with both the database and the human operators. But instead of a single driver operating a single car, as has been the case in the autonomous-vehicle program, Teller likes to imagine a relatively small number of operators watching over many drones, helping them make the right decisions in difficult situations.
“If a self-flying vehicle is trying to lower something and it goes down three feet and gets stuck, should it go home? Should it land? There’s not a right answer to that,” Teller told me. “That would be a good moment for it to raise its hand and say back to someone looking at the delivery control software, ‘What should I do?’”
This is a Google-y approach to the problem of ultra-reliability. Many of Google’s famously computation-driven projects, like the creation of Google Maps, employed literally thousands of people to supervise and correct the automatic systems. It is one of Google’s open secrets that it deploys human intelligence as a catalyst. Instead of programming in that last little bit of reliability, the final 1 or 0.1 or 0.01 percent, it can deploy a bit of cheap human brainpower. And over time, the humans work themselves out of jobs by teaching the machines how to act. “When the human says, ‘Here’s the right thing to do,’ that becomes something we can bake into the system and that will happen slightly less often in the future,” Teller said.
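The pattern Teller describes (ask a human, record the answer, automate it next time) is simple enough to sketch. Everything below is invented for illustration, from the situation strings to the operator prompt; it shows only the shape of the loop, not Project Wing’s software:

```python
# Sketch of a human-in-the-loop escalation loop. All names and the
# situation strings are hypothetical.

learned_responses = {}  # situation signature -> action a human taught us

def ask_operator(situation: str) -> str:
    """Stand-in for the delivery-control UI where a human picks an action."""
    return input(f"Drone asks: {situation!r} -- what should I do? ")

def resolve(situation: str) -> str:
    """Handle a situation the drone can't resolve on its own."""
    if situation in learned_responses:
        # Baked in: a human answered this once, so no hand-raising needed.
        return learned_responses[situation]
    action = ask_operator(situation)       # raise a hand to a human
    learned_responses[situation] = action  # teach the machine for next time
    return action

# First occurrence: a human decides. Every later occurrence of the same
# situation is handled automatically, so humans intervene less over time.
# resolve("package stuck three feet below the winch")
```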
One area where humans might be less helpful is the development of detect-and-avoid software that could help the drones deal with birds, other UAVs, helicopters, and the like. Some—some—of these issues could be solved by regulation that creates certain corridors or layers of airspace for drones, as well as by requiring transponders or other signaling mechanisms on all human-made flying things. But that’s not a complete solution because, as Teller put it, the birds aren’t going to wear instruments.
Roy says the project is still in the very early days of developing a mature, reliable detect-and-avoid system; the team is very far from having the right answers.
Think about what the problem really looks like: A camera or radar or laser is pointed at the sky in the direction the vehicle is flying. The background could be either open sky or earthly terrain, with all the variation that implies, so the environment itself is pretty noisy. And the only signal that the drone is on a collision course with a distant object might be a few pixels in the image from, say, a camera. Working from that limited data, the software has to interpret those pixels as a particular type of flying thing and predict what it might do. And it has to do all that consistently under radically different lighting and visibility conditions.
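To make the needle-in-a-haystack nature of that concrete, here is a toy Python sketch: a noisy frame, a second frame in which a distant object occupies just a 2-by-2 patch of pixels, and a naive change detector. The numbers and threshold are arbitrary assumptions; real detect-and-avoid systems are vastly more sophisticated:

```python
# Toy illustration: find the few pixels that changed between two noisy
# frames. All values are made up for illustration.

import numpy as np

rng = np.random.default_rng(0)
H, W = 120, 160

frame_t0 = rng.normal(0.5, 0.05, (H, W))   # noisy sky/terrain background
frame_t1 = frame_t0 + rng.normal(0, 0.05, (H, W))
frame_t1[60:62, 80:82] += 0.5              # a 2x2-pixel "intruder"

diff = np.abs(frame_t1 - frame_t0)
candidates = np.argwhere(diff > 0.3)       # pixels that changed sharply

print(f"{len(candidates)} candidate pixels out of {H * W}")
# The hard part is everything after this: deciding whether those few
# pixels are a bird, a drone, a helicopter, or just lighting noise.
```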
Predicting others’ flight paths requires that one’s algorithm make some tradeoffs. At one end of the spectrum, the software could assume that other flying things can do anything at any time. But that is overly conservative, and it makes flying in normal airspace incredibly difficult. At the other end of the spectrum, the software could assume that every other flying thing holds a fixed, rigid path, moving more or less in a straight line along its current trajectory. But that, too, leads to problems the moment a plane turns, a bird dives, or a quadcopter reverses direction.
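The two extremes translate into code quite directly. Below is a hedged sketch with made-up numbers: a worst-case “keep-out” radius that assumes the other object could fly anywhere at top speed, and a rigid straight-line extrapolation. Neither is what Project Wing actually uses:

```python
# Two extreme models for predicting another flying thing's path.
# All numbers are illustrative.

def worst_case_radius(max_speed: float, t: float) -> float:
    """Conservative model: after t seconds the object could be anywhere
    within this radius of where it was last seen."""
    return max_speed * t

def straight_line_position(pos, vel, t):
    """Rigid model: the object holds its current velocity exactly."""
    return (pos[0] + vel[0] * t, pos[1] + vel[1] * t)

# A bird 100 m ahead, crossing at 10 m/s, capable of perhaps 20 m/s.
pos, vel, top_speed = (100.0, 0.0), (0.0, 10.0), 20.0
t = 5.0

print("worst-case keep-out radius:", worst_case_radius(top_speed, t), "m")
print("straight-line prediction:  ", straight_line_position(pos, vel, t))

# The conservative model forbids flying within 100 m of every bird it
# sees; the rigid one is blindsided the moment the bird dives or turns.
# Practical systems sit somewhere in between, e.g. an envelope of likely
# positions that widens the further ahead they predict.
```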