The breakneck speed of emerging technology and the old age of the Constitution make for a number of legal idiosyncrasies. For example, if a tax agency suspected you of filing fraudulent claims, it would be illegal under the Fourth Amendment for an agent to bug your car with a GPS tracker and follow your movements. But it would be entirely legal for the agent to take pictures of your license plate multiple times throughout the day as you traveled, and then compile those images into a searchable database.
The images themselves may or may not be evidence of wrongdoing, but narratives emerge from the database. A change in your morning commute could speak to a change in employment status. Monthly visits to a pharmacy specializing in care for sexually transmitted infections could reveal intimate details of health or sexuality. Trips to Goodwill or a food pantry could speak to income woes. Or the reverse: Stops at Louis Vuitton could speak to a recent windfall, particularly of interest if someone is suspected of hiding money in offshore accounts.
Companies are building tools to exploit these legal idiosyncrasies—and collecting massive amounts of data in the process. A recent investigation from the Electronic Frontier Foundation, a privacy nonprofit, found that officials from the Sacramento, California, Department of Human Assistance signed a $10,000 contract with Vigilant Solutions, a private company that collects and stores license-plate reader data, for use in welfare-fraud investigations.
License-plate readers are smart cameras perched covertly on traffic lights above busy intersections in cities including Oakland and Los Angeles. The cameras take photos of license plates, automatically tagging them with time stamp and location, then upload them to a database of more than 2 billion images dating back several years. Eighty million new photos are quietly uploaded to Vigilant’s database each month.
Documents reveal that Department of Human Assistance investigators accessed the database more than 1,000 times from June 2016 to July 2018. Investigators could configure the database to track one license plate over time or, in what Vigilant calls “stakeout” mode, to list all license plates seen in a specified location. It’s not clear how either configuration aided the investigative process, but applicants through the state’s CalWORKs program have asset ceilings, limiting them to $2,250 in their bank accounts. Investigators hoping to ferret out fraud might be interested in knowing where those on public assistance go.
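To make the two query modes concrete, here is a toy sketch in Python. The record schema, field names, and function names are all hypothetical; Vigilant's actual system is proprietary, and only the two behaviors described above are drawn from the documents.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical record schema; Vigilant's real database layout is not public.
@dataclass
class PlateRead:
    plate: str
    location: str
    seen_at: datetime

def track_plate(reads, plate):
    """Every sighting of one plate, in time order (per-vehicle tracking)."""
    return sorted((r for r in reads if r.plate == plate),
                  key=lambda r: r.seen_at)

def stakeout(reads, location, start, end):
    """All plates seen at one location in a time window ('stakeout' mode)."""
    return {r.plate for r in reads
            if r.location == location and start <= r.seen_at <= end}
```

The point of the sketch is how little is required: once time-stamped, location-tagged reads exist, both surveillance modes are trivial filters over the same table.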
But it’s not just welfare recipients who are being tracked, and it’s not just in physical space. A 2017 report from CBC News found that the Canada Revenue Agency, Canada’s counterpart to the IRS, has begun to scan the Facebook and Instagram accounts of high-income citizens it suspects of committing tax fraud.
People unknowingly incriminate themselves on social media all the time. In 2015, the San Francisco Police Department assigned a dedicated “Instagram officer” to scan social media for suspects posing in photos with stolen goods, violating house arrest, and the like. Savvy companies are already moving to automate social-media screening for fraud prevention. A 2015 patent from Intuit, the maker of TurboTax, describes using social media to detect fraudulent tax filings, potentially even “notifying authorities” of fraud if discrepancies are noted.
Surveillance for the purpose of fraud detection will likely evolve in much the same way as any other form of “smart” surveillance: First, it will be partially automated, then fully automated, then persistent and even predictive. Visa and Mastercard are already investing in start-ups touting continuous behavioral biometrics, fraud-detection technology that measures the way people use their devices: how they scroll, their reading pace, the angle at which they hold their phone, etc. If the technology detects a user abruptly changing these unconscious cues while accessing a mobile banking account, the transactions may be flagged as suspicious.
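The core logic of that flagging is simple to sketch. The following toy example, with invented feature names, compares a session against the user's own history using a z-score test; real behavioral-biometric products use many more signals and proprietary models.

```python
from statistics import mean, stdev

# Toy sketch of behavioral-biometric flagging. Feature names are
# illustrative, not drawn from any vendor's actual system.
def is_suspicious(baseline_sessions, current, threshold=3.0):
    """Flag a session whose scroll speed or phone-hold angle deviates
    sharply from this user's own history (a simple z-score test)."""
    for feature in ("scroll_speed", "hold_angle"):
        history = [s[feature] for s in baseline_sessions]
        mu, sigma = mean(history), stdev(history)
        if sigma and abs(current[feature] - mu) / sigma > threshold:
            return True
    return False
```

Notice that the baseline is per person: the system does not need to know what “normal” scrolling looks like in general, only what is normal for you, which is exactly why it doubles as a profile of you.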
But as with other types of anti-fraud surveillance, this kind of monitoring reveals much more information than intended. The data from unconscious movements—mouse clicks, trips to the grocery store, a shared photo someone thought only friends would see—can be used in many different ways. Researchers in a 2017 study analyzed the mouse movements of Bing users searching for symptoms related to Parkinson’s disease. They were able to use cursor movements to infer tremors—subtle, involuntary shaking motions associated with neurodegeneration. They ultimately found correlations between mouse tremors and the severity of the disease, presumably before any user had a formal diagnosis.
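A crude illustration of the idea: tremor shows up in cursor data as rapid direction reversals. The function below counts how often horizontal movement flips sign; the 2017 study used far more sophisticated signal analysis, so this is only a conceptual sketch, not their method.

```python
def reversal_rate(xs):
    """Crude tremor proxy: the fraction of cursor samples at which
    horizontal movement reverses direction. Smooth, deliberate motion
    scores near 0; high-frequency jitter scores near 1."""
    deltas = [b - a for a, b in zip(xs, xs[1:])]
    reversals = sum(1 for d1, d2 in zip(deltas, deltas[1:]) if d1 * d2 < 0)
    return reversals / max(len(deltas) - 1, 1)
```

Even a proxy this simple hints at why the signal is sensitive: it is computed from data every website already receives as a side effect of normal use.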
It’s not all that hard to imagine companies compiling behavioral data from mobile banking users, correlating those unconscious cues with income ranges or credit scores, and producing a sort of “behavioral” credit score. Amazon, eBay, and dozens of other online retailers introduced us to a similar concept years ago: Customers “like you” bought these products, using past shopping behavior to predict future shopping behavior. Retailers already track, categorize, and make inferences about people based on what they’ve bought and how they shop. The next step is doing so invisibly.