
This past summer, Josh Hawley, a Republican senator from Missouri, introduced the Social Media Addiction Reduction Technology Act, which—beyond its forced acronym—was remarkable for how aggressively it would regulate the design of certain tech products.

Among other provisions, the law would ban auto-play videos on sites such as YouTube. It would require sites such as Twitter to deploy a mechanism that “automatically limits the amount of time that a user may spend to 30 minutes a day.” It would prohibit sites such as Pinterest from automatically revealing content when the user scrolls to the bottom of the page, instead “requiring the user to specifically request … that additional content be loaded and displayed.” The SMART Act would do all this, according to its preamble, to protect unsuspecting people from “practices that exploit human psychology or brain physiology to substantially impede freedom of choice.”

Americans have heard this kind of infantilizing rhetoric before. In 1936, the film Reefer Madness attempted to frighten teenagers into submission. Lured in by drug pushers, high-school-age characters smoked weed, lost their sanity, and committed unspeakable crimes. Reefer Madness became a cult classic for reasons that its creators never intended. Its salacious portrayal of zombified youths wreaking havoc did not conform to most people’s real-life experience, and viewers saw through the filmmakers’ agenda of stoking fear rather than providing insight.

The irony is that, just as voters in states across the country are rejecting exaggerated claims about marijuana’s harms and legalizing the drug, alarm over allegedly addictive technology is on the rise.

In recent years, CBS’s 60 Minutes featured Anderson Cooper interviewing Tristan Harris, director of the Center for Humane Technology, who claims technology is leading to “human downgrading” and is “destroying our kids’ ability to focus.” The Washington Post ran a headline declaring “Subtle and Insidious, Technology Is Designed to Addict Us.” Even The Atlantic ran a piece that asked “Have Smartphones Destroyed a Generation?”—which, naturally, went viral on everyone’s smartphone.

A slew of books, with titles such as Irresistible: The Rise of Addictive Technology and the Business of Keeping Us Hooked, The Hacking of the American Mind: The Science Behind the Corporate Takeover of Our Bodies and Brains, and Glow Kids: How Screen Addiction Is Hijacking Our Kids—And How to Break the Trance, paint a bleak portrait of the human psyche under the trance of internet-connected devices.

Moral panics are often based on half-truths. Reefer Madness wasn’t all wrong. According to the National Institute on Drug Abuse, fully 9 percent of people who consume marijuana develop a “cannabis use disorder,” even though the drug is widely understood not to be chemically addictive (at least not in the way nicotine, alcohol, or heroin leads to compulsive dependence).

Similarly, personal technologies are potentially addictive to some people but, like cannabis, not to everyone. By promoting the idea that technology is hijacking our brains and getting all of us addicted to our devices, techno-fearmongers promote the exception rather than the rule. They redirect the debate to the product instead of the underlying causes of addiction for the unfortunate few suffering from the pathology. The fact is, the vast majority of people are not and will never become addicted to their devices or their favorite social-media platforms, just as almost no one gets addicted to alcohol from having a glass of wine with dinner, or addicted to pot from toking up from time to time.

Clearly, the extreme use of pretty much anything can be harmful. However, for those who use marijuana or Facebook moderately, the negative effects are negligible. While headlines spread fears about addictive technology, the data show that almost nothing is happening. Earlier this year, Scientific American reported on a study of 350,000 adolescents that found that technology use had “a nearly negligible effect on adolescent psychological well-being.” The article added, “Eating potatoes is associated with nearly the same degree of effect and wearing glasses has a more negative impact on adolescent mental health.”

The terms we use matter, and appropriating the term addiction is a proven way to attract more clicks for an article and more attention from policy makers and donors. Yet we are not all addicted to technology. The correct term for what most people experience when overusing tech is much less scary—it is not an addiction, but a distraction.

Distraction is far less frightening, and managing it requires just a few simple steps. For instance, for decades psychologists have extolled the power of setting an “implementation intention”—a fancy way of describing the practice of planning out what we are going to do and when we’re going to do it. If we don’t plan ahead, it’s easy to fill our day with scrolling, pecking, and checking. However, we can’t call something a distraction unless we know what it is distracting us from. By scheduling our day, we become more likely to spend our time doing what we really want instead of what tech companies want. Another effective technique involves changing notification settings to ensure we use our devices on our schedule instead of the app makers’. Remember, once we turn off the persistent pinging and dinging, Mark Zuckerberg can’t turn it back on.

The pathology of genuine addiction is serious, but headlines about “digital heroin” that turns kids into psychotic junkies only trivialize that pathology. Tech companies should implement policies to identify people who are truly addicted, but the rest of us should be left to do what we wish. Succumbing to a new, 21st-century moral panic—at the very moment Americans are rejecting a 20th-century one—would be the real madness.
