Janelle Shane had been playing with recurrent neural networks—a type of machine-learning software—for more than a couple of months when the computer told her to put horseradish in a chocolate cake.
The request didn’t come out of the blue. Inspired by Tom Brewe, another AI researcher, Shane had been asking her neural net to come up with recipes. She fed it thousands of cookbooks, then asked it to generate new, similar texts. The magic of neural nets is that, even though the computer does not “understand” what a recipe is the way a human does, it can eventually approximate a recipe well enough to cough out a quasi-realistic one.
That’s what Shane’s did. It told her to combine butter, sugar, eggs, milk, baking powder, cocoa, vanilla extract, and peanut butter—with 1 cup of horseradish. And then it told her to boil it in the oven. (The neural net never quite mastered verbs.)
She laughed it off and tweeted about it. But after another AI researcher told her the recipe was actually delicious, she made it for herself and a small dinner party of friends.
“I opened the oven and my eyes just watered,” she told me. “It was horrible. I had never tasted such a horrible chocolate thing in my whole life.”
Shane, 33, is not a professional artificial-intelligence researcher. During the day, she works with laser beams for a small research company in Boulder, Colorado. But she plays around with artificial intelligence in her free time.
Which is where Stoomy Brown comes in.
On Thursday, Shane posted the results of another experiment that has since gone viral. She fed the same neural-network software about 7,700 Sherwin-Williams paint colors. These are the types of impossibly named hues that you see in Home Depot: Burlington green, Terra cotta, Rustic earth. What would happen if a robot tried to simulate them?
At first, it struggled:
Recurrent neural networks “learn” by repeatedly processing the data given to them. Unlike typical computer programs, which run certain pre-set functions on a large data set, neural networks learn probabilistically what the set “looks” like. As they refine this model, they spit out new approximations of the data set—data that wasn’t included in the original set, but which could have been.
In the case of the type of program that Shane uses, it learns to model character-by-character: It figures out which character is most likely for a certain spot, then it moves on to the next, and the next after that. Hence the above checkpoint, in which the net has learned that “a” and “e” are both common letters that often go together… but it hasn’t learned much else. (On the upside, Caae Brae does sound like a Beowulf character.)
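That character-by-character process can be made concrete with a toy stand-in. The sketch below is not the recurrent-network software Shane uses—a real network tracks much longer-range context—it is only a simple bigram model that counts which character tends to follow which in a few paint names from the article, then samples new names from those counts:

```python
import random
from collections import defaultdict

# A toy stand-in for character-by-character modeling. A real recurrent
# network learns long-range structure; this sketch only counts which
# character tends to follow which (a bigram model) and samples from
# those counts. The training names come from the article's examples.
names = ["Burlington Green", "Terra Cotta", "Rustic Earth"]

# Count character transitions; "^" marks a name's start, "$" its end.
counts = defaultdict(lambda: defaultdict(int))
for name in names:
    prev = "^"
    for ch in name + "$":
        counts[prev][ch] += 1
        prev = ch

def sample_name(max_len=20):
    """Pick each next character in proportion to how often it
    followed the previous character in the training names."""
    prev, out = "^", []
    while len(out) < max_len:
        nxt = counts[prev]
        ch = random.choices(list(nxt), weights=list(nxt.values()))[0]
        if ch == "$":
            break
        out.append(ch)
        prev = ch
    return "".join(out)

print(sample_name())  # a new, vaguely name-shaped string
```

Even this crude version reproduces the early-checkpoint behavior: it gets letter frequencies and common pairings right long before anything resembling a word appears.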
By the third or fourth checkpoint, Shane’s network got better at modeling paint names:
It also started to spit out amazing coinages—“Rose Hork,” “Burf Pink”—and it had even figured out roughly what colors align with what names. “Navel Tan” is really tan. “Horble Gray” is a type of gray. “Hurky White” is white … and it’s even kind of hurky.
It wasn’t successful every time, though. Note that Ice Gray is a putrid yellow.
Ultimately, by the last checkpoint, Shane noted that:
- The neural network really likes brown, beige, and grey.
- The neural network has really really bad ideas for paint names.
It also has “Stanky Bean.” And “Bank Butt.”
“The neural net has no concept of color space, and no way to see human-color perception,” she says. Instead, it processed colors by their RGB values: the combination of red, green, and blue that come together in each hue. “It’s really seeing [colors] not as a number at a time, but as a digit at a time. I think that’s why the neural net had a lot of trouble getting the colors right, why it’s naming pinks when there aren’t any pinks, or gray when it’s not gray.”
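Her point about digits can be made concrete. A character-level model never receives a color as three numbers; it receives raw text, one character at a time. The article doesn’t specify how Shane’s training file paired RGB values with names, so the line layout below is only an assumption:

```python
# A character-level model never sees a color as three numbers; it sees
# the raw text one character at a time. The line format here is a
# guess -- the article doesn't say how Shane's training data paired
# RGB values with names.
def to_training_line(name, r, g, b):
    return f"{r} {g} {b} {name}"

line = to_training_line("Terra Cotta", 226, 114, 91)
print(line)        # 226 114 91 Terra Cotta

# What the network actually consumes: individual characters. It has to
# learn, digit by digit, that "226" is one quantity at all.
chars = list(line)
print(chars[:6])   # ['2', '2', '6', ' ', '1', '1']
```

Seen this way, the mismatch Shane describes makes sense: nothing in the character stream tells the model that “226” is a single red value, much less where that value sits in human color perception.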
For her, this experiment—and its viral popularity—has hinted at the strange, savant quality of neural nets. How do 7,700 paint colors, fed into a program and given little other guidance, result in “Burble Simp”? Shane isn’t sure either. “I play around with [neural nets] for pure entertainment purposes. I’m endlessly delighted by what it comes up with, both good and bad,” she says.
She’s also previously used neural nets to generate new death-metal band names (Inbumblious, Vomberdean, and Chaosrug are highlights) and the names of new Pokémon (Tortabool, Minma, and Strangy). Once, a death-metal forum got ahold of the band names and started arguing about which genre they should be.
Also, her favorite auto-generated paint colors are Hurky White and Caring Tan. And it’s true those are lovely. But personally, I prefer Turdly.