The Touch-Screen Generation

Young children—even toddlers—are spending more and more time with digital technology. What will it mean for their development?

In 2001, the education and technology writer Marc Prensky popularized the term “digital natives” to describe the first generations of children growing up fluent in the language of computers, video games, and other technologies. (The rest of us are “digital immigrants,” struggling to understand.) This term took on a whole new significance in April 2010, when the iPad was released. iPhones had already been tempting young children, but the screens were a little small for pudgy toddler hands to navigate with ease and accuracy. Plus, parents tended to be more possessive of their phones, hiding them in pockets or purses. The iPad was big and bright, and a case could be made that it belonged to the family. Researchers who study children’s media immediately recognized it as a game changer.

Previously, young children had to be shown by their parents how to use a mouse or a remote, and the connection between what they were doing with their hand and what was happening on the screen took some time to grasp. But with the iPad, the connection is obvious, even to toddlers. Touch technology follows the same logic as shaking a rattle or knocking down a pile of blocks: the child swipes, and something immediately happens. A “rattle on steroids” is what Warren Buckleitner, a longtime reviewer of interactive children’s media, calls it. “All of a sudden a finger could move a bus or smush an insect or turn into a big wet gloopy paintbrush.” To a toddler, this is less magic than intuition. At a very young age, children become capable of what the psychologist Jerome Bruner called “enactive representation”; they classify objects in the world not by using words or symbols but by making gestures—say, holding an imaginary cup to their lips to signify that they want a drink. Their hands are a natural extension of their thoughts.

Norman Rockwell never painted Boy Swiping Finger on Screen, and our own vision of a perfect childhood has never adjusted to fit that now-common tableau.

I have two older children who fit the early idea of a digital native—they learned how to use a mouse or a keyboard with some help from their parents and were well into school before they felt comfortable with a device in their lap. (Now, of course, at ages 9 and 12, they can create a Web site in the time it takes me to slice an onion.) My youngest child is a whole different story. He was not yet 2 when the iPad was released. As soon as he got his hands on it, he located the Talking Baby Hippo app that one of my older children had downloaded. The little purple hippo repeats whatever you say in his own squeaky voice, and responds to other cues. My son said his name (“Giddy!”); Baby Hippo repeated it back. Gideon poked Baby Hippo; Baby Hippo laughed. Over and over, it was funny every time. Pretty soon he discovered other apps. Old MacDonald, by Duck Duck Moose, was a favorite. At first he would get frustrated trying to zoom between screens, or not knowing what to do when a message popped up. But after about two weeks, he figured all that out. I must admit, it was eerie to see a child still in diapers so competent and intent, as if he were forecasting his own adulthood. Technically I was the owner of the iPad, but in some ontological way it felt much more his than mine.

Without seeming to think much about it or resolve how they felt, parents began giving their devices over to their children to mollify, pacify, or otherwise entertain them. By 2010, two-thirds of children ages 4 to 7 had used an iPhone, according to the Joan Ganz Cooney Center, which studies children’s media. The vast majority of those phones had been lent by a family member; the center’s researchers labeled this the “pass-back effect,” a name that captures well the reluctant zone between denying and giving.

The market immediately picked up on the pass-back effect, and the opportunities it presented. In 2008, when Apple opened up its App Store, the games started arriving at the rate of dozens a day, thousands a year. For the first 23 years of his career, Buckleitner had tried to be comprehensive and cover every children’s game in his publication, Children’s Technology Review. Now, by Buckleitner’s loose count, more than 40,000 kids’ games are available on iTunes, plus thousands more on Google Play. In the iTunes “Education” category, the majority of the top-selling apps target preschool or elementary-age children. By age 3, Gideon would go to preschool and tune in to what was cool in toddler world, then come home, locate the iPad, drop it in my lap, and ask for certain games by their approximate description: “Tea? Spill?” (That’s Toca Tea Party.)

As these delights and diversions for young children have proliferated, the pass-back has become more uncomfortable, even unsustainable, for many parents:

He’d gone to this state where you’d call his name and he wouldn’t respond to it, or you could snap your fingers in front of his face …
But, you know, we ended up actually taking the iPad away for—from him largely because, you know, this example, this thing we were talking about, about zoning out. Now, he would do that, and my wife and I would stare at him and think, Oh my God, his brain is going to turn to mush and come oozing out of his ears. And it concerned us a bit.

This is Ben Worthen, a Wall Street Journal reporter, explaining recently to NPR’s Diane Rehm why he took the iPad away from his son, even though it was the only thing that could hold the boy’s attention for long periods, and it seemed to be sparking an interest in numbers and letters. Most parents can sympathize with the disturbing sight of a toddler, who five minutes earlier had been jumping off the couch, now subdued and staring at a screen, seemingly hypnotized. In the somewhat alarmist Endangered Minds: Why Children Don’t Think—and What We Can Do About It, author Jane Healy even gives the phenomenon a name, the “‘zombie’ effect,” and raises the possibility that television might “suppress mental activity by putting viewers in a trance.”

Ever since viewing screens entered the home, many observers have worried that they put our brains into a stupor. An early strain of research claimed that when we watch television, our brains mostly exhibit slow alpha waves—indicating a low level of arousal, similar to when we are daydreaming. These findings have been largely discarded by the scientific community, but the myth persists that watching television is the mental equivalent of, as one Web site put it, “staring at a blank wall.” These common metaphors are misleading, argues Heather Kirkorian, who studies media and attention at the University of Wisconsin at Madison. A more accurate point of comparison for a TV viewer’s physiological state would be that of someone deep in a book, says Kirkorian, because during both activities we are still, undistracted, and mentally active.

Because interactive media are so new, most of the existing research looks at children and television. By now, “there is universal agreement that by at least age 2 and a half, children are very cognitively active when they are watching TV,” says Dan Anderson, a children’s-media expert at the University of Massachusetts at Amherst. In the 1980s, Anderson put the zombie theory to the test by subjecting roughly 100 children to a form of TV hell. He showed a group of children ages 2 to 5 a scrambled version of Sesame Street: he pieced together scenes in random order, and had the characters speak backwards or in Greek. Then he spliced the doctored segments with unedited ones and noted how well the kids paid attention. The children looked away much more frequently during the scrambled parts of the show, and some complained that the TV was broken. Anderson later repeated the experiment with babies ages 6 months to 24 months, using Teletubbies. Once again he had the characters speak backwards and chopped the action sequences into a nonsensical order—showing, say, one of the Teletubbies catching a ball, and only then another one throwing it. The 6- and 12-month-olds seemed unable to tell the difference, but by 18 months the babies started looking away, and by 24 months they were turned off by programming that did not make sense.

Anderson’s series of experiments provided the first clue that even very young children can be discriminating viewers—that they are not in fact brain-dead, but rather work hard to make sense of what they see and turn it into a coherent narrative that reflects what they already know of the world. Now, 30 years later, we understand that children “can make a lot of inferences and process the information,” says Anderson. “And they can learn a lot, both positive and negative.” Researchers never abandoned the idea that parental interaction is critical for the development of very young children. But they started to see TV watching in shades of gray. If a child never interacts with adults and always watches TV, well, that is a problem. But if a child is watching TV instead of, say, playing with toys, then that is a tougher comparison, because TV, in the right circumstances, has something to offer.

Hanna Rosin is a national correspondent for The Atlantic.
