The tweet came in Monday afternoon: “@kavehwaddell ...what are they looking for that isn't backed up to iCloud data—and can produce if subpoenaed.”

To most people, that tweet is the opposite of clickbait: It’s opaque, technical, and kind of boring. But the tweet wasn’t directed at most people—it was directed at me, a digital-privacy reporter who’s written extensively on encryption and the sorts of data that technology companies can and can’t turn over to law enforcement if compelled.

The tweet came with a shortened URL, hiding its destination behind an unrecognizable series of characters. Had I clicked on it out of curiosity, I could’ve ended up anywhere—on a site laden with malware, for example, or a sign-in page designed to trick me into giving up usernames and passwords.

As it turned out, I was expecting this particular tweet: Using code released Monday by ZeroFOX, a cybersecurity company focused on threats from social media, my colleague Andrew McGill and I spent the afternoon sending each other—and some other willing participants at The Atlantic—computer-generated spam tweets.

In 2014, a social-media analytics company estimated that nearly one in 10 tweets was spam. A lot of Twitter spam is artless: misspelled, irrelevant, or transparently fake. Some operations are more sophisticated, like a network of accounts that hawks weight-loss pills, earning a referral bonus every time someone clicks on a link, or a fake bank customer-service account that tries to trick users into giving up their login information, which hackers can use or sell for profit. (This is known as phishing.) Whether they’re trying to convince people to buy something or to steal their identities, spam and phishing messages are designed simply to get people to click.

Generally, automated, mass-produced scams don’t come across as authentic—and therefore are less successful—while more personalized phishing attacks known as “spearphishing” get more clicks but are more time-consuming to put together. But the developers behind the ZeroFOX Twitter bot wanted to see if they could combine the scale of a large spamming enterprise with the custom feel of individualized messages.

The result: SNAP_R (“Social Network Automated Phishing with Reconnaissance,” pronounced “snapper,” like the fish). It uses machine learning to trawl a sea of users for the most valuable targets and quickly develop a profile of each one based on what they’ve tweeted about in the past. Using that data, the bot can craft unique tweets that push the target’s buttons. (For me: nerdy computer stuff.) Then, it picks the best time to send the tweet: a random minute during the hour of the day each target is most likely to be active.
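That scheduling trick is simple enough to sketch. Here’s a minimal version in Python, assuming we’ve already pulled timestamps from the target’s past tweets; the function name and logic are our own illustration of the behavior described above, not SNAP_R’s actual code:

```python
import random
from collections import Counter
from datetime import datetime

def best_send_time(tweet_timestamps):
    """Pick a send time the way the article describes: the target's
    most active hour of the day, plus a random minute within it."""
    # Count how many of the target's past tweets fall in each hour (0-23).
    hour_counts = Counter(ts.hour for ts in tweet_timestamps)
    # The busiest hour is when the target is most likely to be online.
    busiest_hour = hour_counts.most_common(1)[0][0]
    # A random minute keeps the bot's schedule from looking mechanical.
    return busiest_hour, random.randint(0, 59)

# Example: timestamps parsed from a target's timeline.
history = [datetime(2016, 8, 1, 14, 5), datetime(2016, 8, 2, 14, 30),
           datetime(2016, 8, 3, 9, 12)]
print(best_send_time(history))  # e.g. (14, 37)
```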

SNAP_R makes sending large amounts of tailored spam frighteningly easy. Once Andrew and I set up the bot, we just had to point it at a list of users, tell it what link to include in the tweets, and hit go. Within seconds, the tweets began pouring out.
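For a sense of how little the operator has to do, here’s an illustrative Python loop. It is emphatically not SNAP_R’s real interface; every name in it is our own stand-in for the point-aim-go workflow we just described:

```python
# Illustrative only: a stand-in for the "point it at a list of users,
# give it a link, hit go" workflow, not SNAP_R's actual interface.

def generate_personalized_tweet(user: str) -> str:
    # Stand-in for the machine-learning step; the real bot models this
    # text on the target's own tweet history.
    return f"Hey {user}, you tweet about encryption a lot. This seems up your alley:"

targets = ["@alice", "@bob"]        # the list of users we point the bot at
payload = "https://goo.gl/xxxxxxx"  # dummy shortlink every tweet carries

for user in targets:
    print(f"{generate_personalized_tweet(user)} {payload}")
```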

Here’s our bot, R. Waldo Spammerson: He’s named, of course, after one of The Atlantic’s illustrious founders, whose legacy he drags kicking and screaming into the seedy world of digital scamming. Scroll through Spammerson’s timeline and you’ll see a hodgepodge of tweets that make varying degrees of sense, but that often struck a chord with their targets all the same.

“Not bad, bot,” said one of my colleagues, upon receiving her personalized spearphishing tweet. “Relevant to my interests.” Another exclaimed: “How did it formulate that?!”

A few of our bot’s greatest hits:

Rather than sending people to a malicious site, our shortlinks simply sent people to google.com. That’s where John Seymour and Philip Tully, the ZeroFOX security researchers behind SNAP_R, sent their test victims, too. Last month, before Seymour and Tully formally presented their creation at DEF CON, a popular annual hacker convention in Las Vegas, they ran a crucial test: Could their bot compete with a human to trick more people into clicking a dummy link?

The matchup: SNAP_R versus Thomas Fox-Brewster, a privacy and security reporter at Forbes. Both contestants made a handful of fake Twitter accounts and set their sights on users tweeting with three active hashtags: #PokemonGo, #InfoSec (that’s “information security”), and #GOPConvention. They had two hours to rack up as many clicks as they could.

Fox-Brewster, a modern-day John Henry, chronicled his bitter fight in depth over at Forbes, but the short version is that he lost. Big time. He sent 129 tweets over the course of the two hours—just over a tweet a minute—and got 49 hapless victims to click on his spam. SNAP_R sent 819 tweets during that same time, and caught 275 victims.

SNAP_R’s average success rate was about 30 percent. That’s far better than the usual success rate for automated phishing, which falls between 5 and 15 percent, though not quite as good as Fox-Brewster’s 45 percent. Still, because it didn’t have to compose tweets by hand, the bot netted more than five times as many clicks as Fox-Brewster.

Tully says artificial intelligence has been used increasingly for defense—cybersecurity companies are putting resources into developing network protections that evolve and learn on their own—but that it’s less often used offensively. “The paradigm shift we wanted to emphasize with this talk is flipping this on its head: It’s using data science and really complex analytical techniques to weaponize social media,” he said.

Twitter in particular proved a great platform for an AI-powered spamming operation. The ZeroFOX researchers said Twitter’s character limit gives their bot less space to make revealing mistakes like bad grammar or punctuation, and allows it to get away with unorthodox constructions that might’ve been a red flag over email. The platform’s culture, too, makes phishing easier, they said: Oversharing is the norm, which provides SNAP_R with ample fodder to impersonate users. Getting tweets from strangers is much more common than getting an email out of the blue, and shortlinks are everywhere.

“Everybody’s well-aware that you don’t click links in emails, you don’t open attachments,” said Evan Blair, the cofounder of ZeroFOX. “On social media, you don’t have the same awareness. You have this trust factor because you see a profile picture, you see connections.”

The version of SNAP_R that Andrew and I deployed this week isn’t quite the beast in all its glory. The one ZeroFOX developed has some extra tricks up its sleeve: It can scan targets’ Twitter feeds for sneaky details like tweets about an upcoming event (a conference or sports game), tweets with embedded locations, and even the emotions behind the tweets, in order to tailor its attack even more precisely.
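To give a flavor of one of those tricks, here’s a hypothetical Python sketch that flags tweets leaking a location. The `coordinates` and `place` fields are real parts of Twitter’s tweet JSON; everything else here is our own illustration, not ZeroFOX’s code:

```python
def geotagged(tweets):
    """tweets: list of dicts shaped like Twitter's REST API tweet objects.
    Returns the ones that leak a location a bot could fold into a lure."""
    return [t for t in tweets if t.get("coordinates") or t.get("place")]

sample = [
    {"text": "Heading to the conference! #DEFCON", "place": {"name": "Las Vegas"}},
    {"text": "lunch", "place": None, "coordinates": None},
]
print(geotagged(sample))  # only the geotagged first tweet survives
```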

The full version can also use two different models—Markov chains or long short-term memory (LSTM) networks—to generate the language for the tweets. The Markov chain model is the simpler of the two: It chooses each next word in a sentence based on the probabilities of word pairings it sees in the source data (in this case, a user’s previous tweets). By contrast, a long short-term memory model lets the bot “remember” what’s already been written in the sentence, leading to more coherent phrases that are less prone to wander off topic.
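A word-level Markov chain is simple enough to build in a few lines. This toy Python version is ours, not ZeroFOX’s, but it shows the idea: tally which words follow which in the source tweets, then walk those pairings to spit out new text:

```python
import random
from collections import defaultdict

def build_chain(tweets):
    """Map each word to the words observed to follow it; repetition in
    the list stands in for probability (more occurrences, more likely)."""
    chain = defaultdict(list)
    for tweet in tweets:
        words = tweet.split()
        for prev, nxt in zip(words, words[1:]):
            chain[prev].append(nxt)
    return chain

def generate(chain, seed, max_words=12):
    """Walk the chain: at each step, sample the next word from those
    that followed the current word in the source data."""
    out = [seed]
    while len(out) < max_words and chain[out[-1]]:
        out.append(random.choice(chain[out[-1]]))
    return " ".join(out)

corpus = ["encryption keeps your iCloud data safe",
          "what data can apple turn over to law enforcement",
          "law enforcement wants your iCloud data"]
chain = build_chain(corpus)
print(generate(chain, "law"))  # e.g. "law enforcement wants your iCloud data safe"
```

Because the chain can only recombine word pairings that appear in the source tweets, its output naturally mirrors the target’s own vocabulary and language.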

The Markov chain model does have its upsides. It’s much easier to train, for one, so it can run much more quickly. And because its output is based entirely upon patterns the bot can find in the input data—a user’s own tweets—it can easily tweet in whatever language the target is using.

Seymour and Tully hobbled the code they made available to the public because they didn’t feel comfortable releasing the tool in its entirety. Their goal was to publish a proof-of-concept, not give spammers the tools to swindle Twitter users even more easily. “We’re trying to walk a fine line between responsible awareness and disclosure,” Blair said. “We wanted to be careful not to release something that can full-on destroy the engagement that these networks have.”

That said, the extra features yielded diminishing returns, the developers said. Using long short-term memory or basing tweets on the emotional context of users’ tweets didn’t significantly change clickthrough rates. “We were getting absurdly high clickrates with what we just released,” Seymour said.

As spearphishing spambots get frighteningly smart, can Twitter prevent them from taking over the social network? The company already has more than a few tools at its disposal to stop spam attacks: It has a fast-acting spam-reporting system, and it monitors the bots that plug into its network for malicious patterns, banning those that break its rules. (As they were testing SNAP_R, the developers said, they had numerous fake accounts banned. In our own testing, Andrew and I made too many requests for Twitter’s data and were temporarily suspended from accessing it, and even one of Fox-Brewster’s human-curated spam accounts was prevented from tweeting for 20 minutes because he copied and pasted the same message too many times.)
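The data-access suspension we hit, at least, is easy to avoid. Using tweepy, a popular third-party Python client for Twitter’s API (we can’t say whether SNAP_R itself does it this way), a script can simply sleep until the rate-limit window resets:

```python
import tweepy  # third-party Twitter API client

# Placeholder credentials; substitute your own app's keys.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

# wait_on_rate_limit tells the client to pause until Twitter's
# rate-limit window resets instead of hammering the API, which is
# what got our testing temporarily cut off.
api = tweepy.API(auth, wait_on_rate_limit=True)

# Politely page through a user's recent tweets.
for tweet in tweepy.Cursor(api.user_timeline, screen_name="some_target").items(200):
    print(tweet.text)
```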

Companies that provide URL shorteners, too, could do more to vet the destinations of the links they’re asked to create, the researchers said. Twitter’s own t.co shortener warns users if a link they click on leads to an unsafe location—but Google’s doesn’t.
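One small habit helps on the user side: expanding a shortlink before clicking it. Here’s a quick sketch using Python’s requests library, with the caveat that some shorteners don’t answer HEAD requests cleanly:

```python
import requests

def expand(short_url: str) -> str:
    """Follow redirects without downloading the page, and return the
    final destination so you can eyeball it before visiting."""
    resp = requests.head(short_url, allow_redirects=True, timeout=5)
    return resp.url

print(expand("https://t.co/example"))  # dummy shortlink
```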

Malicious tweets and links will always slip through the cracks, and it will ultimately fall to individual users to stay away from bad links. The skepticism people are taught to bring to email would go a long way if applied to social networks as well.

Best learn the lesson now: Automated spam will likely only grow in scale and accuracy as machine learning becomes easier and more intelligent.

“As technology becomes more democratized, you don’t necessarily need to have a Ph.D. to be able to program these things,” said Tully. “And as these tools become more open, free, and readily available, this is going to start to become more popular. And that could be dangerous.”