Between 2011 and 2014, a group called EU Kids Online conducted comprehensive studies, looking at children in 22 European countries and across many cultures. A strong majority of children used the Internet to visit social-networking sites like Facebook and to watch video clips on sites like YouTube. About half used the Internet for instant messaging and to do schoolwork. About one-third used it for Internet gaming, slightly fewer to download movies or music, and fewer still to read the news.

A similarly comprehensive study was done in the United States in 2014 by four researchers from the fields of education and psychology. A national sample of 442 children between the ages of 8 and 12, a period known as "middle childhood," was asked how they spent their time online. Younger children (8 to 10 years) spent an average of 46 minutes per day on a computer, compared with older ones (11 to 12 years), who averaged one hour and 46 minutes per day.

When asked what kinds of sites they visited, the children named YouTube most often by a wide margin, followed by Facebook; game and virtual-world play sites designed for this age group (Disney, Club Penguin, Webkinz, Nick, Pogo, Poptropica, PBS Kids); and Google. Children with mobile phones (14 percent of the 8-to-12-year-olds in the study) played a lot of Angry Birds, a game that started as a phone app and is still primarily accessed that way.

Angry Birds, Club Penguin ... that sounds fine, doesn’t it?

But wait a second. What about Facebook? Don’t you have to be 13 years old to activate an account? Yes, but guess what? One-quarter of the children in the U.S. study reported using Facebook even though it is a social network meant for teenagers and adults. These are the hidden users of social networks, the ones who aren’t supposed to be there—but are. I think of them as “the Invisibles.” It wasn’t just 11-to-12-year-olds who were going there: 34 percent of the Facebook users in the study were 8-to-10-year-olds. In the EU study, one-quarter of the 9-to-10-year-olds and one-half of the 11-to-12-year-olds were using the site as well: Four out of 10 gave a false age.

Twenty million minors use Facebook, according to Consumer Reports; 7.5 million of these are under 13. (But this 2011 study is already out-of-date. I wonder what the figures are now.) These underage users access the site by creating a fake profile, often with the awareness and approval of their parents. The technology editor of the Consumer Reports survey was troubled by the fact that “a majority of parents of kids 10 and under seemed largely unconcerned by their children’s use of the site.” Instagram has similar issues. The vast majority of the site’s reported 400 million users are a young demographic, between 18 and 29 years old, but studies report that it is the most-used photography site for 12-to-17-year-olds.

Identity and age verification online are complex issues. One of the popular jokes about this comes from a New Yorker cartoon that ran in 1993. The cartoon shows a dog sitting in front of a computer, and the caption underneath reads: "On the Internet, nobody knows you're a dog." It would appear that nobody knows if you're a puppy either.

Setting the minimum age for Facebook and Instagram at 13 years is a data-protection requirement under U.S. law (the Children's Online Privacy Protection Act), but it doesn't appear to be strictly enforced. Why? Consider the scale: Facebook has 1.65 billion active members (as of May 2016) who make one post a day on average, including the uploading of 300 million images. Could these companies monitor and police illegal use of their sites? When asked, Simon Milner, a senior executive with Facebook, said that it would be "almost impossible." He told The Guardian, "We haven't got a mechanism for eradicating the problem [of underage users]."

Facebook and other social networks have always claimed that it is difficult—or “almost impossible”—to identify a child, and therefore they can’t actively implement and police their own rules. But let’s think about this for a moment. When a kid opens up a Facebook account, the first thing he or she typically does is put up a profile photograph, and then “friend” a bunch of schoolmates who are usually the same age. They go on to post comments about school, classmates, and extracurricular activities. If you can’t figure out that these kids are 9 or 10, you aren’t very smart. They are constantly providing photographic evidence of their age. Another piece of evidence that makes me suspect that these social-networking sites are not particularly interested in monitoring this problem: In 2016, Facebook awarded $10,000 to a 10-year-old boy from Finland, a coding ace who discovered a security flaw in Instagram. Won’t this only encourage more underage use?

The psychologists and educators behind the large U.S. study in 2014 concluded that the results were troubling, particularly in regard to the developmental repercussions of children’s online habits. “Engaging in these online social interactions prior to necessary cognitive and emotional development that occurs throughout middle childhood could lead to negative encounters or poor decision-making. As a result, teachers and parents need to be aware of what children are doing online and to teach media literacy and safe online habits at younger ages than perhaps previously thought.”

* * *

Obviously quite a number of parents are simply looking the other way. Perhaps they are quietly relieved, even proud, to see that their children are making “friends,” usually a sign of social thriving and happiness. I think they need reminding about how ramped up the cruelty can be online. If you think girls of middle school age have always been mean, you’ve not seen what they can do in the escalated environment of the Internet.

The stories of self-harm, even suicide, are growing in number, and the subject of cyberbullying has, of course, become an international conversation. In a poll conducted in 24 countries, 12 percent of parents reported that their child had experienced cyberbullying, defined as repeated critical remarks and teasing, often by a group. A U.S. survey by Consumer Reports found that, over the previous year, 1 million children had been "harassed, threatened, or subjected to other forms of cyberbullying" on Facebook.

What is the explanation for it?

In general, the younger you are, the more friends you have on a social network. Let's look at how the numbers work on Facebook, drawing on a 2014 study of American users. For those over 65 years old, the average number of friends is 102. For those between 45 and 54 years old, the average is 220. For those 25 to 34 years old, the average is 360. For those 18 to 24, the average is 649. What does that mean for the under-13s, the social media Invisibles? The answer is, Who knows? There are no reliable numbers.

Let’s for a second discuss the sheer social madness of that. As the work of Robin Dunbar, a psychologist and anthropologist at the University of Oxford, has argued, primates have large brains because they live in socially complex societies. In fact, the group size of an animal can be predicted by the size of its neocortex, especially the frontal lobe. Human beings, too, have large brains because we tend to live in large groups.

How large? Given the size of the average human brain, the number of social contacts or "casual friends" with whom an average individual can maintain stable social relationships is around 150. (This is known as Dunbar's number.) The figure has been consistent throughout human history: it is the size of modern hunter-gatherer societies, of most military companies, most industrial divisions, most Christmas card lists (in Britain, anyway), and most wedding parties.

Anything much beyond Dunbar’s number is too complicated to handle at optimal processing levels.

Now imagine the child who has a Facebook page and an Instagram account, who participates on Snapchat, WhatsApp, and Twitter. Throw into that mix all the mobile phone, email, and text contacts. A child who is active online, and interested in social media, could potentially have thousands of contacts.

We are not talking about an intimate group of friends. We are talking about an army. And who’s in this army? These aren’t friends in any real-world sense.

I have been working on a mathematical formula to predict the prevalence of antisocial behavior online, in hopes of designing an algorithm to identify incidents of bullying. How?

Locard’s exchange principle is the basic premise of forensic science. It dictates that every contact leaves a trace, and nowhere is this more true than online. Unlike the playground, where the mean words of a bully disappear instantly into the ether—unless there is an eyewitness—online it is just the opposite. Cyberbullying is nothing but evidence: a permanent digital record. So how did we get to the point where it became more problematic than real-world bullying? My answer is taken from The Usual Suspects, one of my favorite movies, in which Kevin Spacey delivers the immortal line “The greatest trick the devil ever pulled was convincing the world he didn’t exist.”

To me, the greatest trick social-media and telecom companies ever pulled is trying to convince us that they can do nothing about cyberbullying.

In terms of digital forensics, it is a cybercrime with big fingerprints. Using an approach that I am calling the math of cyberbullying, we can identify both victims and perpetrators.

Many of the big-data "social analytics" outfits like Brandwatch, SocialBro, or Nielsen Social use algorithms to identify or estimate much more complicated things, like a Twitter user's age, sex, political leanings, and education level. How hard would it be to create an algorithm to identify antisocial behavior, bullying, or harassment online? My equation goes like this: d × c × (i × f) = cyberbullying.

The math would be this simple:

I am bullying you = direction (d)

bitch, hate, die = content (c)

interval (i) and frequency (f) = escalation
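To make the arithmetic concrete, here is a rough sketch in Python of how such a score might be computed. It is an illustration only, not the actual Aiken algorithm: the word list, the markers of direction, the seven-day window, and the weighting are all assumptions invented for demonstration.

```python
# Toy illustration of the d x c x (i x f) idea. All word lists, weights, and
# thresholds below are invented for demonstration purposes only.
from datetime import datetime, timedelta

ABUSIVE_TERMS = {"bitch", "hate", "die"}        # c: sample content indicators
DIRECTED_MARKERS = {"you", "your", "u", "ur"}   # d: message aimed at the recipient

def message_score(text: str) -> float:
    """Direction (d) times content (c) for a single message."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    d = 1.0 if words & DIRECTED_MARKERS else 0.0
    c = float(len(words & ABUSIVE_TERMS))
    return d * c

def escalation_factor(timestamps: list) -> float:
    """Interval (i) times frequency (f): many messages arriving close together."""
    if len(timestamps) < 2:
        return 0.0
    window = timedelta(days=7)
    # Assumes timestamps are in chronological order.
    recent = [t for t in timestamps if timestamps[-1] - t <= window]
    if len(recent) < 2:
        return 0.0
    f = len(recent)                                                   # frequency within the window
    span_hours = max((recent[-1] - recent[0]).total_seconds() / 3600.0, 1.0)
    i = 1.0 / span_hours                                              # shorter intervals -> higher factor
    return i * f

def cyberbullying_score(messages: list) -> float:
    """Combine the pieces: directed abusive content, scaled up by escalation."""
    content_total = sum(message_score(text) for _, text in messages)
    esc = escalation_factor([t for t, _ in messages])
    return content_total * (1.0 + esc)
```

In this toy version, a hostile but undirected post scores zero; the score only climbs when abuse is aimed at someone and arrives in rapid, repeated bursts, which mirrors the direction, content, and escalation components listed above.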

I am actively working with a tech company in Palo Alto to apply the Aiken algorithm to online communication. To develop the c (content) database, I plan on launching a nationwide call for content. Every person who has ever received a hateful bullying message can forward it to our repository. In that way, victims of cyberbullying can become an empowering part of the solution to an ugly but eminently solvable big-data problem. We just need the collective will to address it.

The algorithm can be set to automatically detect escalation in a cyberbullying sequence, and a digital outreach can be sent to the victim: “You need to ask for help. You are being bullied.” And simultaneously an alert can be sent to parents or guardians telling them something is wrong and encouraging them to talk to their child.
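Continuing the same sketch, the alerting step might look something like this. The threshold and the notification functions are, again, placeholders of my own invention, and the scoring function is the toy one defined above.

```python
ALERT_THRESHOLD = 5.0   # invented cutoff for demonstration

def check_and_alert(messages, notify_child, notify_guardian) -> float:
    """If the running score crosses the threshold, nudge the victim and alert a guardian."""
    score = cyberbullying_score(messages)   # from the earlier sketch
    if score >= ALERT_THRESHOLD:
        notify_child("You need to ask for help. You are being bullied.")
        notify_guardian("Possible cyberbullying detected. Please talk to your child.")
    return score
```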

The beauty of the design is twofold: First, only artificial intelligence would be screening the transactions, which will be incredibly efficient for a big-data problem such as cyberbullying, and second, there would be no breach of privacy for the child. Parents wouldn’t need to see the content, only be alerted when there appeared to be a problem. I know there could be an outcry about surveillance, but we are talking about minors, and we are talking about an opt-in solution with parental consent. This is not surveillance; it’s called parenting.

Ultimately the algorithm could reflect jurisdictional law in the area of cyber-harassment against a minor and be designed to quantify and provide evidence of a crime. One day, it could involve sending digital deterrents to the cyberbully, which is a way to counter what cyberpsychologists call “minimization of status and authority online.” We can show young people that there are consequences to their behavior in cyberspace.


This article is excerpted from Mary Aiken’s book, The Cyber Effect.