Automated surveillance allows governments (and others) to data mine the physical world, yet little attention has been paid to the ethics of perpetual recording.
Over the past decade, video surveillance has exploded. In many cities, we might as well have drones hovering overhead, given how closely we're being watched, perpetually, by the thousands of cameras perched on buildings. So far, people's inability to watch the millions of hours of video has limited its uses. But video is data, and computers are being set to work mining that information on behalf of governments and anyone else who can afford the software. And this kind of automated surveillance is only going to get more sophisticated as a result of new technologies like iris scanners and gait analysis.
Yet little thought has been given to the ethics of perpetually recording vast swaths of the world. What, exactly, are we getting ourselves into?
The New Aesthetic isn't just a cool art project; machines really are watching us, and they have their own way of seeing; they make mistakes that humans don't. Before automated surveillance reaches a critical mass, we are going to have to think carefully about whether its security benefits are worth the human costs it imposes. The ethical issues go beyond just video; think about data surveillance, about algorithms that can mine your financial history or your internet searches for patterns that could suggest you're an aspiring terrorist. You'd want to be sure that a technology like that was accurate.
Fortunately, our British friends are slightly ahead of the curve when it comes to thinking through the dilemmas posed by ubiquitous electronic surveillance. As a result of an interesting and contingent set of historical circumstances, the British now live under the watchful eye of a massive video surveillance system. British philosophers are starting to gaze back at the CCTV cameras watching them, and they're starting to demand that those cameras justify their existence. In a new paper called "The Unblinking Eye: The Ethics of Automating Surveillance," philosopher Kevin Macnish argues that the political and cultural costs of excessive surveillance could be so great that we ought to be as hesitant about using it as we are about warfare. That is to say, we ought to limit automated surveillance to those circumstances where we know it to be extremely effective. I spoke to Macnish about his theory, and about how technology is changing surveillance, for better and for worse.
I was thinking the other day that it's curious that CCTV should have bloomed in Britain, whose population we think of as being less security-crazed than the population of the United States. Britain is more urban than America, but it can't just be that, can it?
Macnish: One interesting historical point, and I don't think this explains the whole thing but it helps, is that most other western countries have a recent history of some form of dictatorship, the US excepted. Most of Europe was under a dictator, or occupied by one, within living memory, and so I think there is an awareness there about the dangers of government. It's possible that Britain might be a little bit more laissez-faire about surveillance because we haven't had that level of autocratic control since the 17th century. I think in America, while the history is a little bit different, you have a very strong social consciousness about separation of powers within the state, and between the state and the people. I think there is a general suspicion of the state in America, which we often just don't have in the U.K.
Then you have to couple that with some very powerful images. In 1993 there was an infamous case of a 2 year-old named James Bulger who was kidnapped by two other children who were themselves about 10 or 11. They kidnapped him and then killed him in a very horrible way that mimicked a murder from one of the Child's Play films, which led to a massive reaction against horror films and whatever else. At the time there was a CCTV image taken of the two boys picking up this toddler and walking off with him, while holding his hand. Ironically, the CCTV didn't actually help with solving the case. The police had already heard about the case of these two boys and were already investigating them, but the image came across on our TV screens and came into our newspapers and it was really powerful. That helped dispose people here favorably toward CCTV. It hadn't been thoroughly researched at the time and it was sort of suspected at a common sense level that it would help deter crime, and that it would detect and catch criminals, and that it would be able to help to find lost children. So, the government poured hundreds of millions of pounds into CCTV cameras all around the country and then retailers and businesses bought CCTV cameras for their own security---it just took off. As a sociological study, it's fascinating. A lot of my American friends who come here feel really freaked out by the number of cameras we have, and with good reason.
What is automated surveillance? Where and how is it most commonly used? I know the Chinese have been developing a kind of gait analysis, a way to identify people on video based on the length and speed of their stride. In what other ways is this technology gathering steam?
Macnish: There are things like iris recognition, there are areas where people are looking at parts of the face for identification purposes; there are all of these ways that you can now automate the recognition of individuals, or the intentions of individuals. You have a ton of research on these capabilities, in the U.S. and China especially, and as a result these techniques are catching on in a way that they weren't five or ten years ago, when we didn't yet have the technology to implement them. We've had the artificial intelligence capabilities for a while---since the late 1970s we've been able to write programs that could recognize when a bag had been left by a particular person in a public place. But we didn't have the camera technology or processing technology to roll it out.
Now you have digital cameras, and increased storage and processing capacity, and so you're starting to see these really startling things happening in automated surveillance.
What advantages does automated surveillance have over traditional, human-directed surveillance?
Macnish: The problem with human surveillance is the humans. People get bored; they look away. In many operation centers there will be one person monitoring as many as 50 cameras, and that's not a recipe for accuracy. Studies have shown that a person watching a single screen can miss what's happening right on it, and so you can imagine watching a busy scene in a mall, and there are 20 people in it, or a field of 50 different screens---you're not going to be able to see what every single person does. You might very well miss the person who puts their bag down and walks off, and that bag might be the one containing the bomb. If you can automate that process, then, in theory, you're removing the weakest link in the chain and you're saving a human being a lot of effort. The other problem with us humans is that we tend to be subject to prejudices. As a result we may focus our attention on people we find attractive, or on people we think are more likely to be terrorists or more likely to be up to no good, and in the meantime we might miss the target we're supposed to be looking for. And this doesn't just happen with terrorists, it can happen with shoplifters too.
On the other hand, we humans have common sense, which is something that computers lack and will probably always lack. For instance, there are computer surveillance programs designed to recognize a person bending down next to a car for a certain period of time, because this is behavior associated with stealing cars. At the moment the processing capacity is such that a computer can recognize a person bending down by a car and staying bent by a car for five seconds, at which point it will send an alert. Now, if a human is watching a person bending down next to a car, they will look to see if they're bending down to pet their dog, or to tie a shoelace, or because they've dropped their keys. The computer isn't going to know that.
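To make the gap concrete: the automated side of what Macnish describes often amounts to little more than a dwell-time rule. Here is a minimal sketch of that idea (the class, function names, and five-second threshold are all illustrative, not drawn from any real deployment):

```python
from dataclasses import dataclass
from typing import Optional

DWELL_THRESHOLD_S = 5.0  # alert after five seconds bent beside a car


@dataclass
class TrackedPerson:
    person_id: int
    bent_since: Optional[float] = None  # timestamp when the posture was first seen


def update(person: TrackedPerson, is_bent_by_car: bool, now: float) -> bool:
    """Return True if an alert should fire for this frame."""
    if not is_bent_by_car:
        # Posture ended; reset the timer.
        person.bent_since = None
        return False
    if person.bent_since is None:
        person.bent_since = now
    return (now - person.bent_since) >= DWELL_THRESHOLD_S
```

Note what's missing: nothing in this rule can distinguish a car thief from someone tying a shoelace or petting a dog. That judgment still requires a human looking at the alert.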
In your paper, you describe the way that cultural differences often dictate the way that people move through crowds. For instance, in Saudi Arabia, people walk much slower than they do in London. Another example: in some cultures, people require less personal space than in others. Why are those differences problematic for automated surveillance?
Macnish: The particular automated surveillance I was looking at was designed to measure the distance between people to determine whether or not they were walking together. The theory behind it was that if you and I are walking together through a train station and I put my bag down next to you so that I could go off and get a newspaper or something like that, then clearly the bag is not unattended. This is one of those cases where a human being would instantly recognize that we are walking together and that we are friends, and that the bag isn't a danger, but the computer wouldn't recognize that we were friends. Instead the computer would see an unattended bag and it would send out an alert, and so when I come back from getting my coffee, or my newspaper, I might find you swarmed by security guards, guns drawn. The programmers behind this project were trying to write software that could determine whether two people walking in public are associated with each other in some way, and the way that they did this was to use an algorithm called a "social force model," which looks at how closely people are walking together, how far apart they are, how they interact with nearby objects, and how people walking towards them react to them. Those data points, together, can give you a determination of whether or not people are associated in some way. But problems appear when you consider that different cultural groups have different norms and habits, and that the social and spatial parameters of middle class white guys in the west might be totally different from the social and spatial parameters of two Indian women. There are all these subtle aspects and differences in the way that people from different cultures interact, and there is very little data on how people of different cultures, different sexes, and different ages, walk and act in public. Most of our data is drawn from western middle-class scenarios, things like universities or whatever. 
It's not the deliberate prejudice that you might see with a camera operator, who might focus on Somalis or Arabs, or some other particular group, but its effects can be just as bad.
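The cultural-bias problem becomes obvious once you sketch the kind of pairwise scoring a social force model feeds into. In this hypothetical example the weights, the two-metre distance scale, and the 0.6 threshold are invented for illustration, and it is exactly such hard-coded numbers that would encode one culture's walking norms as "normal":

```python
import math

# Illustrative weights; a real social force model is fitted to observed crowd data.
W_DISTANCE = 0.7
W_HEADING = 0.3
ASSOCIATION_THRESHOLD = 0.6


def association_score(pos_a, pos_b, heading_a, heading_b):
    """Crude pairwise score: people who are close together and moving the
    same way are judged more likely to be walking together.
    Positions are (x, y) in metres; headings are unit direction vectors."""
    dist = math.dist(pos_a, pos_b)
    closeness = max(0.0, 1.0 - dist / 2.0)  # within ~2 m counts as "close"
    # Heading agreement: dot product of unit vectors, clipped at zero.
    agreement = max(0.0, heading_a[0] * heading_b[0] + heading_a[1] * heading_b[1])
    return W_DISTANCE * closeness + W_HEADING * agreement


def walking_together(pos_a, pos_b, heading_a, heading_b) -> bool:
    return association_score(pos_a, pos_b, heading_a, heading_b) >= ASSOCIATION_THRESHOLD
```

Two friends who habitually keep a wider personal distance than the western data used to tune that 2-metre scale would score as strangers, and their shared bag would trigger the alert Macnish describes.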
Your paper argues for a theory of efficacy when it comes to surveillance. You seem to say that surveillance can only be ethical if we do it very well.
Macnish: Yes, but it goes deeper than that. My overall project is to argue that the questions that are typically raised in the Just War tradition are the questions that we should be asking about surveillance, in order to see whether or not surveillance is justified. One way of doing that is to question these technologies' chances of success. In Just War theory we have this notion that a war is unethical if you are unlikely to succeed when you enter into it, because it means sending soldiers to die in vain. That was the perspective that I was coming from with the argument about efficacy---if there isn't a considerable chance of success then we shouldn't be pursuing these techniques.
But that rationale, Just War theory, is specific to war and it's specific to war for a very important reason. If we embark on ineffective wars, we run into disastrous consequences with enormous human costs. It's not clear that surveillance ought to have a precautionary principle as strong as the one governing warfare. Why do you think that it should?
Macnish: You have to look at the counterfactual; if we have arbitrary surveillance, which you could argue is what we have in the UK where we have virtually no regulation of CCTV cameras, there is an extent to which you start to wonder why we're being surveilled. Why are we being watched? And the surveillance can have quite an impact on society, it can shape society in ways that we may not want. If you notice all of this surveillance, and you also notice that it's ineffective, you start to wonder if there's an ulterior motive for it. Heavy surveillance, of which CCTV is only one variety, can create a lot of fear in a population: it creates a sense of vulnerability, a fear of being open to blackmail or other forms of manipulation as a result of what's being recorded by surveillance, and these can, together, create what are typically called chilling effects, where people cease to engage in democratic speech or democratic traditions because they're concerned about what might be discovered about them or said about them. For instance, you might think twice about attending a political demonstration or political meeting if you know you're going to be watched. In the UK, there is a special police unit called FIT (Forward Intelligence Team) that watches demonstrations, looking for certain trouble makers within political demonstrations---that might dissuade people from going to demonstrate. There is now a response protest group called FIT Watch that goes out to watch the FIT officers who are watching the demonstrators, to try to ameliorate this problem, which is viewed as potentially damaging democratic engagement.
On balance, what about Britain's CCTV system? How does it score in your efficacy framework?
Macnish: I think it probably fails on most counts. I was thinking about this last night. I've been getting into drones and automated warfare more recently. Boeing is currently working on a drone that can stay in the air for five years without refueling. One that can stay up for four days was successfully tested just a couple of days ago. Think about a drone flying above you for five years. If you're in occupied Afghanistan, that is going to be very, very intimidating, and it would be just as intimidating if it were happening in our own country, with surveillance drones constantly flying above us.
Ultimately, there is very little difference between a drone flying above a city and the sort of CCTV surveillance that we have here all the time. It's just that the drone is more out of the ordinary, because we've grown used to the cameras.
You argue that in some ways automated surveillance is less likely to trigger privacy concerns than manual surveillance. Why is that?
Macnish: Say you are taking a shower and a person walks in while you're in the bathroom. You might feel an invasion of privacy, especially if you don't know that person. If a dog walks in, are you going to feel an invasion of privacy? Probably not. I mean there might be some sense of "hey, I don't want this dog looking at me," but it's only a dog. It might be that being watched by a computer is like being watched by the dog; you aren't entirely comfortable with it, but it's better than a human being, a stranger. Now, if it recorded the images it saw and then allowed a human to see those images, then, yes, that would be an invasion of privacy. And if some automated process took action as a result of what you do in private, instead of a human seeing it, that would likewise be an invasion of privacy. But yes, one benefit of automated surveillance is that it can take the human out of the equation, and that can be a net positive for privacy under certain circumstances.
In your paper you argue for a middle ground between manual surveillance and automated surveillance. What does that ideal middle ground look like in the context of something like the CCTV system in the UK?
Macnish: One reason that I argue for a middle ground goes back to the fact that computers don't have much common sense, which can lead to false positives, as we saw with the unattended bag or the person who drops their keys in a parking garage. A computer could be very helpful for filtering out some obvious false positives, but ideally a human should come in to look at what's left. A computer can provide a good filtering mechanism, for purposes of privacy. For instance, a computer could blur out people's faces, or their entire bodies, so that a human operator sees only the action in question. At that point, if the action still seems suspicious, the operator can specifically request that the image be un-blurred, to see who the person is and decide how to respond.
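That middle ground can be sketched as a pipeline in which the operator sees redacted frames by default, and un-blurring is an explicit, logged act rather than the normal view. This is a hypothetical design, not a description of any real CCTV system; all names are invented:

```python
from dataclasses import dataclass, field
from typing import Callable, List


@dataclass
class Frame:
    raw: object   # the original, unredacted image (opaque stand-in here)
    faces: list   # detected face regions to be blurred


@dataclass
class ReviewQueue:
    """Machine-filtered alerts; operators see blurred frames by default."""
    blur: Callable[[Frame], object]
    audit_log: List[tuple] = field(default_factory=list)

    def present(self, frame: Frame) -> object:
        # Default view withholds identity from the operator.
        return self.blur(frame)

    def unblur(self, frame: Frame, operator: str, reason: str) -> object:
        # Un-blurring is a deliberate, auditable step, not the default.
        self.audit_log.append((operator, reason))
        return frame.raw
```

The design choice worth noticing is the audit log: the privacy benefit of automation survives only if the human step back into the loop leaves a trace.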
In the context of automated surveillance, does privatization worry you?
Macnish: That's a really interesting question. I think the privatization of creating the software and the hardware in and of itself doesn't necessarily bother me; what concerns me more is the privatization of the operation of the surveillance. So, privatizing the people who are watching the cameras, privatizing what is done with the information from the cameras---when private companies hold that sort of information, especially if they're not regulated, there are all sorts of abuses that could flow from that. There's a second thing that might be worth saying about that as well, and it ties back in with the Arab Spring. After Mubarak fell, when we went into his secret police headquarters, we found all sorts of British, French and American spying equipment, which people like Boeing and whoever else sold to the Libyans and Egyptians knowing very well what would happen with it. Of course there are companies right now that are either still doing the same for Syria, or have only recently stopped. I think that's a legitimate concern as well.
Video surveillance like CCTV surveillance is only one kind of automated surveillance; automated data surveillance is another. I'm thinking about intelligence organizations looking for patterns in millions of financial transactions and internet searches. Are there overlaps in the ethical issues presented by data surveillance and camera surveillance?
Macnish: Definitely. The same questions that we're asking about CCTV should be asked about data surveillance. Potentially I think that could be very concerning. And that's not just true of intelligence organizations, but of commercial organizations as well. The New York Times recently ran an article about Target and the lengths it would go to know that a 16 year old girl was pregnant---so much so that it knew before her dad did. Those are the kinds of questions commercial organizations are looking to answer. And you have to ask what they do with that information---are they offering better deals to the sort of customers they would rather have as their clientele? Are they trying to put off people who they would rather not have as their clientele? For instance, frequent fliers get all sorts of deals on their flights because they spend a lot of money with the airline. Are you creating a situation where rich, successful people are the ones offered better deals to fly, while poorer people don't get those same offers? The questions raised by big data are very interesting. It's actually a very rich area for research; we haven't even scratched the surface of it.