Babies are such little copycats. They learn by watching other humans. This is how small, dependent people turn into larger, independent people.

The thing is, babies aren’t just imitators. By the time they’re around 18 months old, they’re pretty adept at figuring out what a person is trying to do—say, stack some blocks, or toss a ball into a toy bin—even if that person doesn’t succeed. In other words, they can infer intent, and even develop their own alternative strategies for achieving a goal.

“Humans are the most imitative creature on the planet and young kids seamlessly intertwine imitation and innovation,” said Andrew Meltzoff, a psychology professor at the University of Washington and a co-director of the school’s Institute for Learning & Brain Sciences. “They pick up essential skills, mannerisms, customs, and ways of being from watching others, and then combine these building blocks in novel ways to invent new solutions.”

Meltzoff recently worked with a team of roboticists and machine-learning experts to explore a strange and compelling question: What if robots could learn this way, too? A paper detailing their findings was published in the journal PLOS ONE last month.

“The secret sauce of babies is that they are born immature with a great gift to learn flexibly from observation and imitation. They see another person and register that the person is ‘Like Me.’ They devote great attention to the ‘Like Me’ entities in the world,” Meltzoff told me. “Roboticists have a lot to learn from babies.”

The researchers developed algorithms that allow a robot to consider how its actions might lead to different outcomes. The robot relied on a probabilistic model to infer what a human wanted it to accomplish. (The team also programmed the robot to ask for help when it was unsure.)
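The paper doesn’t reproduce the team’s code, but the loop described above—weighing candidate goals probabilistically as actions are observed, and asking for help when no goal stands out—can be sketched roughly like this. The goal names, the likelihood function, and the confidence threshold are all illustrative assumptions, not details from the study:

```python
def infer_goal(observations, goals, likelihood, confidence_threshold=0.7):
    """Return the most probable goal, or None to signal 'ask for help'.

    observations: sequence of observed human actions
    goals: candidate goals, assumed equally likely a priori
    likelihood: hypothetical model giving P(observation | goal)
    """
    # Start from a uniform prior over the candidate goals.
    posterior = {g: 1.0 / len(goals) for g in goals}

    # Bayesian update: fold in the likelihood of each observation,
    # then renormalize so the probabilities sum to one.
    for obs in observations:
        for g in goals:
            posterior[g] *= likelihood(obs, g)
        total = sum(posterior.values())
        posterior = {g: p / total for g, p in posterior.items()}

    best = max(posterior, key=posterior.get)
    # If no goal is clearly favored, the robot should ask for help.
    if posterior[best] < confidence_threshold:
        return None
    return best
```

With a toy likelihood that favors actions matching the goal, two consistent observations pick out a single goal, while conflicting ones leave the posterior flat and trigger the help request.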

To test what they built, the researchers arranged two experiments. In one, a robot would learn to follow a human’s gaze; in the other, the robot would learn to imitate a human moving fake food around a tabletop.

In the first experiment, the robot would learn how its own head moved, and assume that the human’s head was governed by the same rules. It would then observe the movement of the human’s head, including the direction that person was looking and what the person fixated on, and mimic those movements. In the second experiment, the robot experimented with moving food-shaped toys around on a table. Not only did the robot mimic the human—pushing the toys, sometimes sweeping them off the tabletop—it also occasionally used different means to achieve the same end result.
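To give a feel for the “Like Me” idea in the gaze experiment, here is a toy geometric sketch, not the team’s actual model: if the robot knows how a head’s yaw and pitch determine a line of sight—because that is how its own head works—it can project the observed human head pose onto the table to estimate what the person is looking at. The function name and every number are hypothetical:

```python
import math

def gaze_target(head_pos, yaw, pitch, table_height=0.0):
    """Intersect a gaze ray from the head with the plane z = table_height.

    head_pos: (x, y, z) position of the head in meters
    yaw: left/right rotation in radians; pitch: up/down (negative = down)
    Returns the (x, y) point on the table, or None if the gaze
    never reaches it (looking level or upward).
    """
    x, y, z = head_pos
    # Unit direction of the gaze ray from yaw and pitch.
    dx = math.cos(pitch) * math.cos(yaw)
    dy = math.cos(pitch) * math.sin(yaw)
    dz = math.sin(pitch)
    if dz >= 0:
        return None  # gaze ray never meets the table plane
    # Solve z + t*dz = table_height for the ray parameter t.
    t = (table_height - z) / dz
    return (x + t * dx, y + t * dy)
```

A head one meter above the table, tilted 45 degrees downward, yields a gaze point one meter ahead of it—exactly the sort of shared self-model that lets the robot treat the human’s head as working like its own.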

That level of adaptation is a big deal. Today’s robots—the kinds that work on assembly lines, for example—are already pretty good at copying a human task. “They are not so good at inferring the intention behind a human action and achieving the same goal using different means,” said Rajesh Rao, the director of the Center for Sensorimotor Neural Engineering, and one of the lead researchers on the project.

One advantage robots have over humans, though, is that they can access and distill huge troves of data, and share information with other machines instantaneously. “Eventually, robots might be able to learn complicated tasks more quickly than babies if they are provided with more powerful sensors, more versatile actuators, and sufficient computational power to implement human-inspired learning strategies,” Rao told me.

Rao and his colleagues believe that, by emulating human development, robots will be able to learn progressively more sophisticated skills just by watching and imitating humans and other robots.

“We are convinced that bringing together the roboticists and developmental psychologists may allow us to combine the best of human learning and the best of machine learning to the benefit of both,” Meltzoff said.

“I’m trying to teach the roboticists to think like a baby. And I mean that in a good way.”