Automated surveillance allows governments (and others) to data mine the physical world, yet little attention has been paid to the ethics of perpetual recording.
Over the past decade, video surveillance has exploded. In many cities, we might as well have drones hovering overhead, given how closely we're being watched, perpetually, by the thousands of cameras perched on buildings. So far, people's inability to watch the millions of hours of video has limited its uses. But video is data, and computers are being set to work mining that information on behalf of governments and anyone else who can afford the software. And this kind of automated surveillance is only going to get more sophisticated as a result of new technologies like iris scanners and gait analysis.
Yet little thought has been given to the ethics of perpetually recording vast swaths of the world. What, exactly, are we getting ourselves into?
The New Aesthetic isn't just a cool art project; machines really are watching us, and they have their own way of seeing; they make mistakes that humans don't. Before automated surveillance reaches a critical mass, we are going to have to think carefully about whether its security benefits are worth the human costs it imposes. The ethical issues go beyond video; think about data surveillance, about algorithms that can mine your financial history or your internet searches for patterns that could suggest you're an aspiring terrorist. You'd want to be sure that a technology like that was accurate.
Fortunately, our British friends are slightly ahead of the curve when it comes to thinking through the dilemmas posed by ubiquitous electronic surveillance. As a result of an interesting and contingent set of historical circumstances, the British now live under the watchful eye of a massive video surveillance system. British philosophers are starting to gaze back at the CCTV cameras watching them, and they're starting to demand that those cameras justify their existence. In a new paper called "The Unblinking Eye: The Ethics of Automating Surveillance," philosopher Kevin Macnish argues that the political and cultural costs of excessive surveillance could be so great that we ought to be as hesitant about using it as we are about warfare. That is to say, we ought to limit automated surveillance to those circumstances where we know it to be extremely effective. I spoke to Macnish about his theory, and about how technology is changing surveillance, for better and for worse.
I was thinking the other day that it's curious that CCTV should have bloomed in Britain, whose population we think of as being less security-crazed than that of the United States. Britain is more urban than America, but it can't just be that, can it?
Macnish: One interesting historical point, and I don't think this explains the whole thing but it helps, is that most other western countries have a recent history of some form of dictatorship, the U.S. excepted. Most of Europe was under a dictator or occupied by a dictatorship within living memory, and so I think there is an awareness there about the dangers of government. It's possible that Britain might be a little bit more laissez-faire about surveillance because we haven't had that level of autocratic control since the 17th century. I think in America, while the history is a little bit different, you have a very strong social consciousness about separation of powers within the state, and between the state and the people. I think there is a general suspicion of the state in America, which we often just don't have in the U.K.
Then you have to couple that with some very powerful images. In 1993 there was an infamous case of a 2-year-old named James Bulger who was kidnapped by two other children who were themselves about 10 or 11. They kidnapped him and then killed him in a very horrible way that mimicked a murder from one of the Child's Play films, which led to a massive reaction against horror films and whatever else. At the time there was a CCTV image taken of the two boys picking up this toddler and walking off with him, while holding his hand. Ironically, the CCTV didn't actually help with solving the case. The police had already heard about the case of these two boys and were already investigating them, but the image flashed across our TV screens and appeared in our newspapers, and it was really powerful. That helped to dispose people favorably towards CCTV here. It hadn't been thoroughly researched at the time, and it was sort of suspected at a common-sense level that it would help deter crime, and that it would detect and catch criminals, and that it would be able to help to find lost children. So, the government poured hundreds of millions of pounds into CCTV cameras all around the country, and then retailers and businesses bought CCTV cameras for their own security---it just took off. As a sociological study, it's fascinating. A lot of my American friends who come here feel really freaked out by the number of cameras we have, and with good reason.
What is automated surveillance? Where and how is it most commonly used? I know the Chinese have been developing a kind of gait analysis, a way to identify people on video based on the length and speed of their stride. In what other ways is this technology gathering steam?
Macnish: There are things like iris recognition, there are areas where people are looking at parts of the face for identification purposes; there are all of these ways that you can now automate the recognition of individuals, or the intentions of individuals. You have a ton of research on these capabilities, in the U.S. and China especially, and as a result these techniques are catching on in a way that they weren't five or ten years ago, when we didn't yet have the technology to implement them. We've had the artificial intelligence capabilities for a while---since the late '70s we've been able to write programs that could recognize when a bag had been left by a particular person in a public place. But we didn't have the camera technology or processing technology to roll it out.
Now you have digital cameras, and increased storage and processing capacity, and so you're starting to see these really startling things happening in automated surveillance.
What advantages does automated surveillance have over traditional, human-directed surveillance?
Macnish: The problem with human surveillance is the humans. People get bored; they look away. In many operation centers there will be one person monitoring as many as 50 cameras, and that's not a recipe for accuracy. Studies have demonstrated that it's possible for a person to be watching one screen and miss what's happening on it, and so you can imagine watching a busy scene in a mall, with 20 people in it, or a field of 50 different screens---you're not going to be able to see what every single person does. You might very well miss the person who puts their bag down and walks off, and that bag might be the one containing the bomb. If you can automate that process, then, in theory, you're removing the weakest link in the chain and you're saving a human being a lot of effort. The other problem with us humans is that we tend to be subject to prejudices. As a result we may focus our attention on people we find attractive, or on people we think are more likely to be terrorists or more likely to be up to no good, and in the meantime we might miss the target we're supposed to be looking for. And this doesn't just happen with terrorists; it can happen with shoplifters too.
On the other hand, we humans have common sense, which is something that computers lack and will probably always lack. For instance, there are computer surveillance programs designed to recognize a person bending down next to a car for a certain period of time, because this is behavior associated with stealing cars. At the moment the processing capacity is such that a computer can recognize a person bending down by a car and staying bent by a car for five seconds, at which point it will send an alert. Now, if a human is watching a person bending down next to a car, they will look to see if they're bending down to pet their dog, or to tie a shoelace, or because they've dropped their keys. The computer isn't going to know that.
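The dwell-time rule Macnish describes can be sketched in a few lines. This is an illustration, not the actual software he refers to: the detections dictionary, the `check_loitering` function, and the five-second threshold are all hypothetical, and a real system would get its "person bent near car" signals from an upstream vision model.

```python
# Illustrative sketch of a naive dwell-time alert: flag anyone who stays
# "bent down" next to a car for 5+ seconds. As Macnish notes, the rule
# has no common sense -- a dropped set of keys triggers it just as well.

ALERT_AFTER_SECONDS = 5.0

def check_loitering(detections, now, state):
    """detections: {person_id: is_bent_near_car}; state: {person_id: start_time}.

    Returns the set of person_ids whose pose has persisted past the threshold."""
    alerts = set()
    for person_id, bent in detections.items():
        if bent:
            start = state.setdefault(person_id, now)  # remember when the pose began
            if now - start >= ALERT_AFTER_SECONDS:
                alerts.add(person_id)
        else:
            state.pop(person_id, None)  # pose ended (dog petted, shoelace tied) -- reset
    return alerts
```

The point of the sketch is the gap Macnish identifies: the rule encodes only duration, so every benign explanation for the pose produces a false positive.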
In your paper, you describe the way that cultural differences often dictate the way that people move through crowds. For instance, in Saudi Arabia, people walk much slower than they do in London. Another example: in some cultures, people require less personal space than in others. Why are those differences problematic for automated surveillance?
Macnish: The particular automated surveillance I was looking at was designed to measure the distance between people to determine whether or not they were walking together. The theory behind it was that if you and I are walking together through a train station and I put my bag down next to you so that I could go off and get a newspaper or something like that, then clearly the bag is not unattended. This is one of those cases where a human being would instantly recognize that we are walking together and that we are friends, and that the bag isn't a danger, but the computer wouldn't recognize that we were friends. Instead the computer would see an unattended bag and it would send out an alert, and so when I come back from getting my coffee, or my newspaper, I might find you swarmed by security guards, guns drawn. The programmers behind this project were trying to write software that could determine whether two people walking in public are associated with each other in some way, and the way that they did this was to use an algorithm called a "social force model," which looks at how closely people are walking together, how far apart they are, how they interact with nearby objects, and how people walking towards them react to them. Those data points, together, can give you a determination of whether or not people are associated in some way. But problems appear when you consider that different cultural groups have different norms and habits, and that the social and spatial parameters of middle class white guys in the west might be totally different from the social and spatial parameters of two Indian women. There are all these subtle aspects and differences in the way that people from different cultures interact, and there is very little data on how people of different cultures, different sexes, and different ages, walk and act in public. Most of our data is drawn from western middle-class scenarios, things like universities or whatever. 
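The "are they together?" judgment that the social force model feeds into can be caricatured as a threshold on inter-person distance. The function and its parameters below are hypothetical, but the `comfort_distance` argument is exactly the kind of culturally loaded number Macnish is worried about: a value tuned on western, middle-class pedestrians may misclassify people from cultures with different norms of personal space.

```python
import math

# Illustrative sketch (not the actual social force model): decide whether
# two tracked pedestrians are walking together from how close, and how
# stably close, their trajectories stay.

def walking_together(track_a, track_b, comfort_distance=1.2):
    """track_*: lists of (x, y) positions sampled at the same frames.

    comfort_distance (meters) is a culturally contingent parameter."""
    dists = [math.dist(p, q) for p, q in zip(track_a, track_b)]
    mean_dist = sum(dists) / len(dists)
    spread = max(dists) - min(dists)
    # Close together, and staying at a stable distance, suggests association.
    return mean_dist <= comfort_distance and spread <= 0.5
```

Shrink `comfort_distance` and two friends from a culture with wider personal space register as strangers; their bag becomes "unattended," and the guards converge.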
It's not the deliberate prejudice that you might see with a camera operator, who might focus on Somalis or Arabs, or some other particular group, but its effects can be just as bad.
Your paper argues for a theory of efficacy when it comes to surveillance. You seem to say that surveillance can only be ethical if we do it very well.
Macnish: Yes, but it goes deeper than that. My overall project is to argue that the questions that are typically raised in the Just War tradition are the questions that we should be asking about surveillance, in order to see whether or not surveillance is justified. One way of doing that is to question these technologies' chances of success. In Just War theory we have this notion that a war is unethical if you are unlikely to succeed when you enter into it, because it means sending soldiers to die in vain. That was the perspective I was coming from with the argument about efficacy---if there isn't a considerable chance of success, then we shouldn't be pursuing these techniques.
But that rationale, Just War theory, is specific to war and it's specific to war for a very important reason. If we embark on ineffective wars, we run into disastrous consequences with enormous human costs. It's not clear that surveillance ought to have a precautionary principle as strong as the one governing warfare. Why do you think that it should?
Macnish: You have to look at the counterfactual. If we have arbitrary surveillance, which you could argue is what we have in the UK, where we have virtually no regulation of CCTV cameras, there is an extent to which you start to wonder why we're being surveilled. Why are we being watched? And the surveillance can have quite an impact on society; it can shape society in ways that we may not want. If you notice all of this surveillance, and you also notice that it's ineffective, you start to wonder if there's an ulterior motive for it. Heavy surveillance, of which CCTV is only one variety, can create a lot of fear in a population: it creates a sense of vulnerability, a fear of being open to blackmail or other forms of manipulation as a result of what's being recorded by surveillance, and these can, together, create what are typically called chilling effects, where people cease to engage in democratic speech or democratic traditions because they're concerned about what might be discovered about them or said about them. For instance, you might think twice about attending a political demonstration or political meeting if you know you're going to be watched. In the UK, there is a special police unit called FIT (Forward Intelligence Team) that watches demonstrations, looking for certain troublemakers within political demonstrations---that might dissuade people from going to demonstrate. There is now a response protest group called FIT Watch that goes out to watch the FIT officers who are watching the demonstrators, to try to ameliorate this problem, which is viewed as potentially damaging democratic engagement.
On balance, what about Britain's CCTV System? How does it score in your efficacy framework?
Macnish: I think it probably fails on most counts. I was thinking about this last night. I've been getting into drones and automated warfare more recently. Boeing is currently working on a drone that can stay in the air for five years without refueling; one that can stay up for four days was successfully tested just a couple of days ago. Think about a drone flying above you for five years. If you're in occupied Afghanistan, that is going to be very, very intimidating, and it would be just as intimidating if it were happening in our own country, if there were surveillance drones constantly flying above us.
Ultimately, there is very little difference between a drone flying above a city and the sort of CCTV surveillance that we have here all the time. It's just that one seems more out of the ordinary, because we've grown used to the other.
You argue that in some ways automated surveillance is less likely to trigger privacy concerns than manual surveillance. Why is that?
Macnish: Say you are taking a shower and a person walks in while you're in the bathroom. You might feel an invasion of privacy, especially if you don't know that person. If a dog walks in, are you going to feel an invasion of privacy? Probably not. I mean, there might be some sense of "hey, I don't want this dog looking at me," but it's only a dog. It might be that being watched by a computer is like being watched by the dog; you aren't entirely comfortable with it, but it's better than a human being, a stranger. Now, if it recorded the images it saw and then allowed a human to see those images, then, yes, that would be an invasion of privacy. If it had some automated process that, instead of merely seeing what you do in private, took some action as a result, that would likewise be an invasion of privacy. But yes, one benefit of automated surveillance is that it can take the human out of the equation, and that can be a net positive for privacy under certain circumstances.
In your paper you argue for a middle ground between manual surveillance and automated surveillance. What does that ideal middle ground look like in the context of something like the CCTV system in the UK?
Macnish: One reason that I argue for a middle ground goes back to the fact that computers don't have much common sense, which can lead to false positives, as we saw with the unattended bag or the person who drops their keys in a parking garage. A computer could be very helpful for filtering out some obvious false positives, but ideally a human should come in to look at what's left. A computer can provide a good filtering mechanism, for purposes of privacy. For instance, a computer could blur out people's faces, or their entire bodies, so that a human operator sees only the action in question. At that point, if the action is still deemed suspicious, the operator can specifically request that the image be un-blurred, so that they can see who the person is and decide how to respond.
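The blur-by-default review flow Macnish describes has a simple shape: operators see anonymized frames, and un-blurring is an explicit, recorded step. The sketch below is hypothetical throughout; `review_frame`, the `audit_log`, and the return strings are stand-ins for whatever a real system would do with actual imagery.

```python
# Illustrative sketch of blur-by-default review: identities stay hidden
# unless an operator explicitly requests un-blurring, and every such
# request is written to an audit log.

audit_log = []  # (operator, frame_id) pairs, one per un-blur request

def review_frame(frame_id, operator, unblur=False):
    """Return what the operator is shown for a frame.

    By default faces are blurred; unblur=True reveals identities and
    leaves an audit trail of who asked to see whom."""
    if unblur:
        audit_log.append((operator, frame_id))
        return f"frame {frame_id}: identities visible"
    return f"frame {frame_id}: faces blurred"
```

The design choice worth noting is that the privacy-invading step is opt-in and logged, rather than the default: that is what makes the automated layer a filter in front of the human, instead of a replacement for one.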
In the context of automated surveillance, does privatization worry you?
Macnish: That's a really interesting question. I think the privatization of creating the software and the hardware in and of itself doesn't necessarily bother me; what concerns me more is the privatization of the operation of the surveillance. So, privatizing the people who are watching the cameras, privatizing what is done with the information from the cameras---when private companies hold that sort of information, especially if they're not regulated, there are all sorts of abuses that could flow from that. There's a second thing that might be worth saying about that as well, and it ties back in with the Arab Spring. After Mubarak fell, when we went into his secret police headquarters, we found all sorts of British, French, and American spying equipment, which companies sold to the Libyans and Egyptians knowing very well what would happen with it. Of course there are companies right now that are either still doing the same for Syria, or have only recently stopped. I think that's a legitimate concern as well.
Video surveillance like CCTV surveillance is only one kind of automated surveillance; automated data surveillance is another. I'm thinking about intelligence organizations looking for patterns in millions of financial transactions and internet searches. Are there overlaps in the ethical issues presented by data surveillance and camera surveillance?
Macnish: Definitely. The same questions that we're asking about CCTV should be asked about data surveillance. Potentially I think that could be very concerning. And that's not just true of intelligence organizations, but of commercial organizations as well. The New York Times recently ran an article about Target and the lengths it went to in order to know that a 16-year-old girl was pregnant---so much so that it knew before her dad did. Those are the kinds of questions commercial organizations are looking to answer. And you have to ask what they do with that information---are they offering better deals to the sort of customers they would rather have as their clientele? Are they trying to put off people who they would rather not have as their clientele? For instance, frequent fliers get all sorts of deals on their flights because they spend a lot of money with the airline. Are you creating a situation where rich, successful people are the ones offered better deals to fly, whereas poorer people don't get those same offers? The questions raised by big data are very interesting. It's actually a very rich area for research; we haven't even scratched the surface of it.