Humans are startlingly bad at detecting fraud. Even when we’re on the lookout for signs of deception, studies show, our accuracy is hardly better than chance.

Technology has opened the door to new and more pervasive forms of fraud: Americans lose an estimated $50 billion a year to con artists around the world, according to the Financial Fraud Research Center at Stanford University. But because computers aren’t subject to the foibles of emotion and what we like to call “intuition,” they can also help protect us. Here’s how leading fraud researchers, neuroscientists, psychiatrists, and computer scientists think technology can be put to work to fight fraud however it occurs—in person, online, or over the phone.

1. Suspicious Story Lines

Spam filters are supposed to block e-mail scams from ever reaching us, but criminals have learned to circumvent them by personalizing their notes with information gleaned from the Internet and by grooming victims over time.

In response, a company called ZapFraud is turning to natural-language analytics: Instead of flagging key words, it looks for narrative patterns symptomatic of fraud. For instance, a message could contain a statement of surprise, the mention of a sum of money, and a call to action. “Those are the hallmark expressions of one particular fraud e-mail,” Markus Jakobsson, the company’s founder, told me. “There’s a tremendous number of [spam] e-mails, but a small number of story lines.”
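The idea of matching story lines rather than key words can be sketched in a few lines of code. The three "narrative elements" and the regular expressions below are purely illustrative assumptions, not ZapFraud's actual rules:

```python
import re

# Illustrative narrative building blocks -- a toy version of the
# "surprise + sum of money + call to action" story line described above.
PATTERNS = {
    "surprise": re.compile(
        r"\b(congratulations|you have been selected|amazed to inform)\b", re.I),
    "money": re.compile(
        r"[$€£]\s?\d[\d,]*|\b\d[\d,]*\s*(dollars|euros|pounds)\b", re.I),
    "call_to_action": re.compile(
        r"\b(wire|send money|reply immediately|act now|click here)\b", re.I),
}

def story_elements(message: str) -> set:
    """Return the set of narrative elements present in a message."""
    return {name for name, rx in PATTERNS.items() if rx.search(message)}

def looks_like_fraud(message: str) -> bool:
    # Flag only when all three hallmark elements co-occur in one message.
    return story_elements(message) == set(PATTERNS)
```

A message like "Congratulations! You have been selected to receive $1,500,000. Reply immediately." trips all three patterns, while an ordinary note mentioning money alone does not; requiring the elements to co-occur is what distinguishes this from naive key-word filtering.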

In the future, this technology could go beyond e-mail filtering to also flag text messages, interactions on social media, messages on dating sites, even years-long “friendships.” Aaron Emigh, ZapFraud’s interim CEO, told me he’d stopped a woman from wiring money to a “fellow widow” she’d met on a Christian site for grieving people. He hopes that as natural-language analytics evolves, such warnings can be wholly automated.

2. Truth Filters

A similar approach could help combat fraud by flagging false statements on social media. (Disinformation creates opportunities for con artists to profit. In 2015, for instance, a scammer posted a fake Bloomberg article with news of a Twitter buyout offer—moving markets and making a little cash in the process.)

Kalina Bontcheva, a computer scientist who researches natural-language processing at the University of Sheffield, in England, is leading a project that examines streams of social data to identify rumors and estimate their veracity: the system analyzes a post's semantics, cross-references its claims against trusted sources (such as PubMed, for medical information), and traces its point of origin and pattern of dissemination. Bontcheva is part of a research collaboration called Pheme, which plans to flag misleading tweets and posts and classify them by severity: speculation, controversy, misinformation, or disinformation.
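At its core, this kind of system combines several evidence streams into one veracity estimate. The sketch below is a caricature of that idea; the signal names, weights, and threshold are assumptions for illustration, not Pheme's actual model:

```python
# Toy veracity estimator: each input is a score in [0, 1], where 1 means
# "consistent with truth." The weights are illustrative assumptions.
def veracity(semantic: float, sourcing: float, dissemination: float) -> float:
    weights = (0.4, 0.4, 0.2)  # semantics, trusted-source agreement, spread
    return (weights[0] * semantic
            + weights[1] * sourcing
            + weights[2] * dissemination)

def flag_for_review(score: float, threshold: float = 0.4) -> bool:
    # Posts scoring below the threshold get routed to a human fact-checker
    # rather than being labeled false outright.
    return score < threshold
```

A post whose claims clash with trusted sources and whose language reads as rumor would score low on the first two signals and be flagged, even if its dissemination pattern looks ordinary.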

3. In-the-Moment Warnings

Picture yourself walking down the street when a man approaches and asks for bus fare; he says he lost his wallet and needs to get home. Right away, your phone buzzes with a notification: Stay away. He’s a fraud. The same voice has been asking for money in different locations all week. Such a possibility sounds far-fetched, but your phone company already gathers information from all the phones in its network, and several tech firms are developing voice-biometrics software that can identify individuals and even catch emotional patterns that may indicate deceit.

“It’s not far off that our smartphone or watch is listening in to all of our conversations and understanding them,” Emigh told me. “It opens up the possibility of employing [fraud-prevention] technology across lots of in-person domains, not just e-mail.” Imagine, he said, that a fraud-prevention company has enough data on your behavior—where you are, what you’re doing (an increasingly likely reality, given the ever-expanding capabilities of cellphones and Americans’ willingness to trade personal privacy for convenience), and so on—to be able to give a heads-up anytime someone tries to take advantage of you. “If you’re an elderly couple who gets a panicked call from a hospital in Mexico that your grandson is in a coma, it’s red-flagged because he’s not in Mexico,” Emigh said. We would have a constant spy watching us—but one that does its best to act as our friend and protector.

4. Spotting Trends

Another approach comes from Big Data—combing through statistics to find patterns that should tip us off to fraud. By analyzing all the companies that sell a certain kind of product, for instance, you could flag anything anomalous—one firm’s sudden spike in canceled contracts, for example—that might indicate sketchy activity. The method is similar to the one employed for credit-card fraud alerts—if you don’t usually travel abroad and suddenly buy groceries in Panama, your transaction is flagged—but on a much bigger scale. A company called Sift Science is attempting something along these lines; it uses proprietary algorithms to analyze data trends and discern patterns of possible fraud.
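The statistical idea underneath these systems—flag an observation that deviates sharply from its own history—can be sketched with a simple z-score test. The threshold and the sample data below are illustrative; Sift Science's actual algorithms are proprietary:

```python
import statistics

def is_anomalous(history: list, latest: float, threshold: float = 3.0) -> bool:
    """Flag `latest` if it lies more than `threshold` standard deviations
    from the mean of the historical observations."""
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # A flat history: any deviation at all is anomalous.
        return latest != mean
    return abs(latest - mean) / stdev > threshold

# e.g. one firm's monthly canceled-contract counts
history = [4, 6, 5, 7, 5, 6, 4, 5]
is_anomalous(history, 48)  # a sudden spike, dozens of deviations out
is_anomalous(history, 6)   # within the firm's normal range
```

Production systems layer many such signals across many dimensions at once, but the principle is the same: a model of "normal" for each entity, and an alarm when behavior strays too far from it.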

Information gleaned from patterns in fraud can also be funneled directly to potential victims. AARP has been reviewing recordings of hundreds of fraudulent phone calls obtained by the FBI in order to analyze the persuasive tactics used by con artists, and then teaching its members about those tactics. For instance, fraudsters use something known as “phantom fixation”—encouraging you to focus on a huge future gain that far outweighs any investment you might need to make in the present. Studies show that telling people about such techniques can help them recognize a hoax.

5. Minority Report for Fraud

Perhaps one day we’ll be able to identify and block not just scams but the scammers themselves—before they even target their first victim.

Each year, the Association of Certified Fraud Examiners conducts a study of known scammers. It looks at demographic information, distinguishing characteristics, and patterns of approach in order to gain insights on the types of people most likely to commit fraud in the future. In 50 years, Bruce Dorris, the organization’s vice president and program director, told me, “I wouldn’t be surprised if you could isolate precisely who those individuals are.”

As our understanding of fraud evolves, we might one day be able to develop predictive algorithms that identify would-be con artists based on patterns of behavior. Or perhaps we’ll use brain scans: some scientists claim they can reveal psychopathic tendencies. What if we could similarly identify characteristics of likely con artists, and then intervene before they cause trouble?

“It’s possible that 50 years out,” Emigh told me, “authorities will be able to figure out the plausibility of fraud and identify potential bad actors. There’s also a possibility, of course, that we decide that’s not the world we want to live in.”

6. Enhanced Lie Detection

No method of fraud prevention will be perfect. “You can put seven locks on your door, fingerprint technology, a retinal display. And you forget to close the window,” Moran Cerf, a professor of business and neuroscience at Northwestern University and a former hacker, told me. “The only way to prevent fraud completely is to eliminate humans from the process. They are the weakest link.”

When scammers do make it through our safeguards, new lie-detection techniques could prove useful after the fact. Over the past few years, methods that involve analyzing fleeting facial expressions or screening for a certain pheromone associated with stress have shown promising results.

The most widely anticipated approach, however, involves watching what goes on inside the brain. At the University of Pennsylvania, an associate professor of psychiatry named Daniel Langleben studies the ways in which neural activity can signify lying. Langleben hypothesizes that suppressing the truth requires additional cognitive operations that can be detected by fMRI. He also looks for so-called concealed information, which indicates that people know something they shouldn’t: Does your brain scan show that you recognize a fraud victim, for instance, after you’ve said you don’t know him? In a forthcoming paper, Langleben and his team report that the fMRI-based method outperformed traditional polygraphy by at least 14 percent.

“There’s one caveat to all of this,” Langleben said. “What’s really important is how you ask the question. A flawed questioning technique can’t be helped by a fancy scanner.”

A Brief Chronicle of Fraud

Circa 300 B.C.: In the earliest fraud attempt on record, a Greek merchant tries to sink his ship and collect insurance.

1496: A 20-year-old Michelangelo forges an ancient sculpture of Cupid and sells it to a cardinal.

1704: A Frenchman claiming to be a native of Formosa (modern-day Taiwan) publishes a book describing made-up customs like drinking viper’s blood for breakfast.

1863: President Lincoln signs the False Claims Act to counter the sale of fraudulent supplies to the Union Army.

1920: Charles Ponzi collects about $15 million in eight months through his fraudulent investment company.

1925: An Austro-Hungarian con man known as “The Count” sells the Eiffel Tower to a scrap-metal dealer.

1989: Nigerian fraudsters send messages via telex to British businessmen, seeking a small investment for a huge future payoff.

1995: British police arrest John Myatt for forging paintings by Monet, van Gogh, Matisse, and other masters.

2065: Neuroscientists learn how to identify characteristics of con artists by their brain scans.