Google has a new drawing tool that’s extremely cool and also a little bit depressing.
The program, AutoDraw, uses artificial intelligence to figure out what you’re trying to sketch—from your phone or desktop—so an algorithm can suggest a polished, pre-drawn version of that thing instead.
To build it, Google trained a neural network to recognize a ton of different objects, feeding it huge datasets of example drawings so it could pick out the underlying patterns. Basically, the machine lumps together a bunch of similar-looking things and says: Based on what I have seen before, I know that these slightly different representations are actually all just this one thing. Which means the better the machine gets at recognizing people’s clumsy doodles, the more inclined it is toward visual sameness.
Which is what people do, too, of course, from the time we are very small humans cataloguing the different animals and things in our world. A neural network, like a human toddler, might confuse a giraffe and a zebra for a while, but eventually—with enough examples to support an understanding of the differences between the two—each figures it out.
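The “lumping” described above can be sketched, in miniature, as a nearest-centroid classifier: average the training examples for each label into a prototype, then file any new sketch under whichever prototype it most resembles. This is purely illustrative (a toy in plain Python, nothing like the scale or architecture of Google’s actual model), but it captures the basic move of treating slightly different inputs as one thing:

```python
# Toy illustration (not Google's model): a nearest-centroid classifier
# that "lumps together" similar-looking sketches. Sketches here are
# 5x5 bitmaps flattened into lists of 0s and 1s.

def centroid(examples):
    """Average the training bitmaps for one label into a prototype."""
    n = len(examples)
    return [sum(pixels) / n for pixels in zip(*examples)]

def classify(sketch, prototypes):
    """Return the label whose prototype is closest (squared distance)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda label: dist(sketch, prototypes[label]))

# Two made-up "classes" of doodle: vertical strokes vs. horizontal strokes.
verticals = [
    [0, 0, 1, 0, 0] * 5,             # a centered vertical line
    [0, 1, 0, 0, 0] * 5,             # a vertical line shifted left
]
horizontals = [
    [0] * 10 + [1] * 5 + [0] * 10,   # a centered horizontal line
    [1] * 5 + [0] * 20,              # a horizontal line along the top
]

prototypes = {
    "vertical": centroid(verticals),
    "horizontal": centroid(horizontals),
}

# A slightly "clumsy" vertical doodle (one wobbly row) still lands
# in the vertical bucket: the machine lumps it in with its kind.
clumsy = [0, 0, 1, 0, 0,
          0, 0, 1, 0, 0,
          0, 1, 0, 0, 0,
          0, 0, 1, 0, 0,
          0, 0, 1, 0, 0]
print(classify(clumsy, prototypes))  # vertical
```

A real network learns its prototypes (and far richer features) from millions of examples rather than two, but the homogenizing instinct the article describes is already visible here: the wobble in the clumsy doodle is exactly the information the classifier throws away.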
The thing about AutoDraw is that it isn’t just saying, Oh, I see you’ve drawn a zebra, but also suggesting, And here’s what that should look like, actually.
This can be quite helpful if you’re just a person trying to find some quick clip art. But it’s also a way of erasing the lovely and nuanced evidence of how differently people see and interpret the world around them.
AutoDraw is based on an earlier Google experiment, Quick Draw, which turned the training of its neural network into a simple and pleasing sketching game. Quick Draw is a little like playing Pictionary with a computer as your teammate. Each round of the game involves six sketching prompts, with users getting 20 seconds to draw the assigned subject matter—ant! calculator! bread! lobster! hospital! spider!—to get the computer to figure out what they’ve drawn.
I love this game. And my favorite part is that you can see what other people produce from the same prompts. You’ll notice right away how often people rely on visual conventions—understandable, especially given the time limit—and how that sameness defines the way a machine sees the world.
I’m fascinated, for example, by the fact that so many people drew old-fashioned rotary-phone handsets for “telephone.”
And I was surprised that so many people who drew “jail” opted not to depict a person inside. One fun approach to Quick Draw is to push right up to the edge of what you think the machine’s perception will be—to try to draw something different from everybody else’s interpretations, but still identifiable to the machine.
These outliers, in my view, are the best part of the training exercise. The drawing prompt for “saw,” for instance, generated mostly zig-zaggy looking hand saws. No surprise there. But then you’d see that someone drew this pretty impressive chainsaw:
And another person came up with this magnificent question-mark-esque circular saw (at least, I think that’s what it is) that somehow talked its way into the regular-ole-handsaw party:
I mean look at this awesome thing!
It’s amazing to me that the neural net looked at the sketch above and knew to lump it in with the rest of the saws. Not because it isn’t a fine representation of a saw, but because it shows just how attuned to differences in perception a neural net can be trained to be. (It’s possible the machine wouldn’t have recognized this as a saw without being told, but still.) The typical human inclination, when asked to sketch a saw that someone else can recognize, is to reach for the most stereotypical representation—but the machine can eventually learn to recognize far more than that.
That’s what makes AutoDraw, the newer drawing tool, feel like more of a creative shortcut than an expressive outlet. It collapses stylistic differences back into recognizable clichés.
That’s what it’s supposed to do—“fast drawing for everyone,” Google says. It’s a nice program. But it’s also one that gives doodles a mass-produced feel that is decidedly undoodly. “A lot of these [programs] that are coming up, they’re really quite shallow,” said Alexander Rudnicky, a computer-science professor at Carnegie Mellon, when I described AutoDraw to him. “There’s no real intelligence. It’s very sophisticated pattern-matching, and I think it’s really cool, but it’s not the same as what we like to think of as intelligence—the ability to create new structures from scratch.”
Playing around with AutoDraw this morning, I sketched a quick elephant. The algorithm didn’t see it. Perhaps, Google suggested, I was attempting to doodle a koala? Or a frog? Or a hot dog? Look, I’m not saying it’s perfect, but you can tell this is supposed to be an elephant, right?
The thing is: I knew what I had to do to get Google’s algorithm to see “elephant,” so I sketched that instead. A boring side profile:
And it worked. Google saw an elephant, and suggested swapping out my creation for its own. The suggested image from Google had even less personality than my toned-down elephant.
Which, hey, maybe that’s the platonic elephant. And maybe it’s the most useful elephant to the greatest number of people—for their presentations or invitations, or whatever else AutoDraw might be used for. Simplicity’s not a bad thing. And a lot of the pre-selected images are quite charming and detailed. But I prefer a world where doodles of elephants look different—and in some cases very different—from one another. (Google does let people submit original drawings, for what it’s worth.)
Google is a company that built its fortune on giving curious people a new way to search for answers. Designing AutoDraw must have felt like an extension of the company’s original cultural values: exploration, learning, play. The user-facing product doesn’t feel quite so whimsical. In the incongruity of the visual suggestions it sometimes offers—is that an elephant, or a hot dog?—there’s still a hint at the thing it squelches: the joy of different ways of seeing, the pleasure and surprise of imagining how something might be different than what you were expecting.