The tweet came in Monday afternoon: “@kavehwaddell ...what are they looking for that isn't backed up to iCloud data—and can produce if subpoenaed.”
To most people, that tweet is the opposite of clickbait: It’s opaque, technical, and kind of boring. But the tweet wasn’t directed at most people—it was directed at me, a digital-privacy reporter who’s written extensively on encryption and the sorts of data that technology companies can and can’t turn over to law enforcement if compelled.
The tweet came with a shortened URL, hiding its destination behind an unrecognizable series of characters. Had I clicked on it out of curiosity, I could’ve ended up anywhere—on a site laden with malware, for example, or a sign-in page designed to trick me into giving up usernames and passwords.
As it turned out, I was expecting this particular tweet: Using code released Monday by ZeroFOX, a cybersecurity company focused on threats from social media, my colleague Andrew McGill and I spent the afternoon sending each other—and some other willing participants at The Atlantic—computer-generated spam tweets.
In 2014, a social-media analytics company estimated that nearly one in 10 tweets was spam. A lot of Twitter spam is artless: misspelled, irrelevant, or transparently fake. Some operations are more sophisticated, like a network of accounts that hawks weight-loss pills, earning a referral bonus every time someone clicks on a link, or a fake bank customer-service account that tries to trick users into giving up their login information, which hackers can use or sell for profit. (This tactic is known as phishing.) Whether they're trying to persuade people to buy something or trying to steal their identities, spam and phishing messages are designed with one simple goal: to get people to click.