Toward a Complex, Realistic, and Moral Tech Criticism

Evgeny Morozov's second book is a brilliant, confounding work of creative destruction.

Evgeny Morozov delivering a lecture sponsored by Stanford's Program on Liberation Technology in January 2013 (Alexis Madrigal)

What critics of literature do not both love and hate the subject of their scholarship? The very strength of a critic's love is what inspires such dogged meaning- and fault-finding in the reality of any work.

This is also true for writer and thinker Evgeny Morozov, though it is not literature but technology that must bear the privilege of his evisceration. His books read like letters from a jilted lover, full of accusations of unmet promises, lost potential, and occasionally, a glimmer of that initial spark of attraction.

And he is a truly great critic. Morozov's work reveals new things about how technology works in our society at this particular moment in time. His analysis may be cutting, but he doesn't hate technology. On the contrary, Morozov's ultimate goal is to destroy the ideology of technology, so that particular technologies can be used in specific situations without the baggage of other people's nonsense.

Morozov's second book, To Save Everything, Click Here: The Folly of Technological Solutionism, is the most wide-ranging and generative critique of digital technology I've ever read. There's so much substance to argue about between its covers. At the center of it all, there's a brilliant, idiosyncratic mind at work.

Describing and destroying two concepts -- "Internet-centrism" and "solutionism" -- form the core of his book, and both are fascinating frames for the discourse surrounding our network technologies.

Internet-centrism is the idea that our society, and particularly its public intellectuals, have become fascinated by the notion that the Internet is a stable and coherent force in our lives. He rails against the idea that this force shapes things autonomously, or that it has any inherent qualities, or that we have to listen to what "the Internet" wants on a topic like openness, for example. Morozov's goal is to force everyone to write the Internet with quotes -- like this: "the Internet." This, he feels, better implies the complexities of the Internet's social creation and casts doubt on its power as an independent force with its own ahistorical rules.

His analysis here is a full-frontal attack on the shorthand thinking that's come to dominate many discussions about the role of digital technologies in the world. It's a valuable contribution in many ways; he demands that we think seriously about the Internet, I mean, "the Internet." I do think that Morozov has succeeded in doing a lot of damage to the idea that "'the Internet' is a useful analytical category." And to perform a deconstruction in public and for a general reader is a feat of magic that borders on necromancy. Who knew people still wanted to read books like this?

Morozov's "solutionism" is something else altogether. In it, he's identified a key strain of modern political and social thought, synthesizing a wide variety of domains, technologies, and types of arguments into something we can ponder and argue about. I find myself coming back to this idea time and again while listening to advocates and opponents of particular technologies. I would not be surprised if describing the contours, origins, and failings of this way of thought are what Morozov is remembered for. I think it will become the concept that generates its own set of literature. He writes:

Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized--if only the right algorithms are in place!--this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address. I call the ideology that legitimizes and sanctions such aspirations "solutionism." I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions--the kind of stuff that wows audiences at TED Conferences--to problems that are extremely complex, fluid, and contentious. These are the kinds of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that "solutionists" have defined them; what's contentious, then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching "for the answer before the questions have been fully asked." How problems are composed matters every bit as much as how problems are resolved.

This analysis, which runs throughout the entire book, is really, really interesting. I'm going to get into the details soon, but my main worry is that solutionism, even accepting Morozov's framing, contains some elements worth preserving. Indeed, there is a reading of this book (an unkind one, for sure) that finds it deeply anti-progressive and almost frighteningly supportive of the status quo in politics and elsewhere.

All of which makes the book a delight: It's a high-wire performance, a feat of intellectual daring. He goes to war with almost everybody else who thinks about the Internet's people, institutions, and technologies in the public eye: Nicholas Carr, Clay Shirky, David Weinberger, Tim Wu, Kevin Kelly, Farhad Manjoo, Steven Johnson, Gary Wolf, among others. Sometimes, he wins easily. Sometimes, he tangles himself into knots trying to defeat every possible enemy and defend against every possible counterargument. In all cases, he is worth reading, even if you vehemently disagree.

The only comparable experience I've ever had was reading Vaclav Smil on energy: frustrating, enlightening, and counterintuitive in the best sense of the word. Of course, if you're interested in the Internet, you should read it. And I think historians in 2030 or 2050 will use this book to highlight the anxieties and debates of our time, pretty much all of which it attempts to tackle at once.

On the other hand, I'm not sure that they'll read it as a fair or grounded representation of the state of technology. Morozov's mode, while learned and theoretically grounded, is not as deeply authoritative as it appears. There is little actual evidence that many of the phenomena he highlights are actually occurring in the way he says they are. Granted, that's not his focus. I recognize fully that Morozov's project is in the realm of ideas, ideology, and the sociology of knowledge.

But there's often not even an attempt to line up reality with his anecdotes and projections. He relies time and again on scenarios, little flights of fancy, that are neither thought experiment nor forecast, but something more opaque. Even within the logic of the book itself, it's difficult to compare his scenarios to one another. There's little consistency among them in terms of plausibility or time-scale. If you look closely, you're left wondering: is this something that is already happening, might happen in a year, could happen in 10 years, or is a logical possibility in a century? Quite-close-to-real scenarios are delivered with the same rhetorical weight as truly wild Morozovian nightmares.

Let me give just a few examples.

Here's one section from Morozov's chapter on predictive policing in which he introduces a real product, ShotSpotter, a microphone sensor system that lets police (in Oakland, say) identify where gunshots are fired. Watch how he slides from there to a much stranger idea without blinking:

These systems are not cheap--ShotSpotter reportedly charges $40,000 to $60,000 a year per square mile--but they are hardly the latest word in crime detection. Why bother with expensive microphones if smartphones can do the job just fine? It all boils down to designing an appealing and nonintrusive app and creating the right incentives--perhaps by appealing to the moral conscience of citizens or by turning crime reports into a game--so that citizens can take on some of the tasks of faulty sensors and easily distracted humans.

From an actual, deployed system to a nightmarish sousveillance scenario in one sentence. Could such a system work? Would this be appealing to institutional players or people? Why even bother, if you're the cops? Is anyone even thinking about doing this in one, five, or even 20 years? What gives him the idea this might happen? I don't know. There's certainly nothing in the quite extensive (and welcome) footnotes to explain this leap.

When police talk about predictive policing, they're talking about putting cops in the areas most likely to experience a crime. That's actually a far cry from "preventing" crime. In practice, the Los Angeles Police Department, which Morozov uses as his example, only has officers to patrol a tiny percentage of the city, even in the zones where a model might say crime is most likely to occur. The institutional reality of the LAPD is that they could never prevent a substantial percentage of all crimes, even if they knew precisely, not probabilistically, where such activity might occur. The same is true for every police department. So how likely is it that we'd prevent all crime, as Morozov impishly suggests throughout the chapter? It's not that I expect him to deal in probabilities, but I do expect him to couch his criticisms within a realistic framework.

Or how likely is it that tweeting about yogurt will bring police to your door? Is it more or less likely than the other scenarios Morozov discusses?

As companies like ECM Universe accumulate extensive archives of tweets and Facebook updates sent by actual criminals, they will also be able to predict the kinds of nonthreatening verbal cues that tend to precede criminal acts. Thus, even tweeting that you don't like your yogurt might bring police to your door, especially if someone who tweeted the same thing three years before ended up shooting someone in the face later in the day.

Or that acts of civil disobedience will become impossible, as in this remarkable bit of short fiction?

Now, imagine that [Rosa] Parks is riding one of the smart buses of the near future. Equipped with sensors that know how many passengers are waiting at the nearest stop, the bus can calculate the exact number of African Americans it can transport without triggering conflict; those passengers who won't be able to board or find a seat are sent polite text messages informing them of future pickups. A smart facial-recognition scheme--powered by video cameras at bus stops--keeps count of how many people of each race are waiting to board and divides the bus into two white and black sections accordingly. The bus driver--if there still is one--can tap into a big-data computer portal that, much like predictive software for police, produces historical estimates of how many black people are likely to be riding that day and calculates the odds of racial tension based on the weather, what's in the news, and the social-networking profiles of specific people at the bus stop. Those passengers most likely to cause tension on board are simply denied entry. Will this new transportation system be convenient? Sure. Will it give us a Rosa Parks? Probably not, because she would never have gotten to the front of the bus to begin with. The odds are that a perfectly efficient seat-distribution system--abetted by ubiquitous technology, sensors, and facial recognition--would have robbed us of one of the proudest moments in American history.

How are we to compare this to the tweeting-about-yogurt-brings-the-cops scenario, or the general-elimination-of-all-crime scenario, or the citizen-phone-surveillance scenario? Are they all equally likely? What evidence do we have to evaluate whether these are real thought experiments, predictions, or rhetorical devices? What are the odds, anyway, of any of these stories, even in concept, actually occurring?

Morozov also gives a weak-tea history of Parks herself:

This courageous act was possible because the bus and the sociotechnological system in which it operated were terribly inefficient. The bus driver asked Parks to move only because he couldn't anticipate how many people would need to be seated in the white-only section at the front; as the bus got full, the driver had to adjust the sections in real time, and Parks happened to be sitting in an area that suddenly became "white-only."

Parks did not just happen to be riding the bus in the spot where she was. Rather, she was a committed civil-rights activist with more than 10 years of activism under her belt and a plan for how to disrupt what was already a system designed to minimize disturbances. Parks' training, agency, and forethought are significant because they complicate the freaky scenario Morozov conjures in which no one could possibly find a way to protest a "smart" but unjust system powered by sensors and big data. The Parks incident was a calculated and principled act of defiance that was designed to strike exactly at a weak spot in the segregation system. It makes you think: Wouldn't other activists find their way around even Morozov's most implausibly nightmarish scenario? Not to mention that her act, while important, was one tiny piece of a movement that involved hundreds of thousands of people. Are we really supposed to believe that smart buses would have stopped the civil rights movement?

Of course, he might (and does) argue that these systems make it harder for dissidents, that they decrease the probability of people seeing civil disobedience, that the possibility of finding a way around the system is no reason to allow the creation of the system. I agree! And I think he is making a good and important warning rooted in deep, serious moral thought. That's precisely why I find myself wishing he had better, more anchored what-ifs.

The point is: These scenarios, and there are dozens and dozens of them, operate on a specific worldview and contain a likely set of actors and outcomes. Each one is an argument, in short, for which Morozov provides only the scantest evidence. These are fascinating speculations, informed by all the intellectual weapons of western civilization. But he's going on intuition.

I like his intuition. I value it. But I don't want to have to take his word for it.

Despite these narrative flaws, much of Morozov's chapter on predictive policing and situational crime prevention is brilliant. As an attack on the ideology of these concepts, it is devastating, especially through a fascinating application of the legal theorist Roger Brownsword's hard-hitting framework on the registers (moral, prudential, practicability) on which regulation can work. You could very well enter this chapter thinking you support predictive policing and come out the other end with a changed mind. It is that persuasive. But the means of raising the emotional stakes, even to this great end, strike me as dangerous. They end up looking like the mirror image of promoters' pamphlets. Like them, Morozov struggles to keep his own imaginings in proper perspective.

Now, I want to turn to one particular example: his drubbing of self-tracking. By focusing on this single case, we can go beyond the general pronouncements about his work to see the brilliant and frustrating individual moves that Morozov uses to make his arguments.

* * *

Morozov likes to build an argument from some anecdotes downward, starting with a seemingly preposterous idea drawn from our current reality, locating its intellectual foundations in a contemporary thinker's work and then drilling down relentlessly from there, looping back to the original target as he goes. In his chapter on self-tracking and the quantified self movement, it is Gary Wolf whom he goes after, and, to a lesser extent, Kevin Kelly. No matter what you think about the critiques themselves, Wolf and Kelly are well-chosen targets as they have been thinking about and promoting the generation of data about oneself for years.

Morozov argues forcefully against first self-quantification, then quantification, then the "numeric imagination," then measurement itself, and, finally, the objective fixity of facts. Do you see what happened there? We went from a debate about whether or not to wear a pedometer to a debate about whether numbers can adequately represent anything in the world. This movement happens with terrifying speed in Morozov's work.

I want to walk through this movement from the base upwards because I think it's his foundational criticisms that tend to be the best, and the arguments get less persuasive the further he gets from the philosophical bases of his objections.

Down at the bottom, Morozov displays a deep, well-founded distrust of the way humans construct models of the world with numbers. Despite considerable controversy, this type of thinking is prevalent and well-supported by a substantial literature in science and technology studies. "Bruno Latour distinguishes between 'matters of facts,' the old unrealistic way of presenting all knowledge claims as stable, natural, and apolitical," Morozov writes, "and 'matters of concern,' a more realistic mode that recognizes that knowledge claims are usually partial and reflect a particular set of problems, interests, and agendas."

This is a direct attack on whatever claims people might make that they have authority based on the neutral collection of data about "reality." He asks of these modes of investigation: "When do they suppress conflicting interpretations of reality? What do they conceal and make invisible, and is this something we can afford to lose sight of? How might they be invoked in the name of seemingly unrelated political projects?" And how might the answers to those questions change how we understand "the facts," such as they are presented.

These are important questions and they relate directly to his next target: measurement. He quotes a historian who has written about measurement to say, "we . . . need to keep reminding ourselves of the human purposes that led us to create [the measurement] in the first place--and where, if at all, it interferes with any of these purposes." Because our tools will always capture the world in imperfect ways.

Again, this is a vital and important chunk of foundational knowledge that is common in science and technology studies, but absolutely absent from most of the popular rhetoric about data, open or otherwise. Any human survey will have the mark of human hands upon it, and laundering that reality through numbers does not change the underlying nature of these knowledge creation projects.

Moreover, Morozov argues, using these numbers limits the powers of moral and social imagination that we might otherwise employ. "We can further contrast 'narrative imagination' with the somewhat oxymoronic 'numeric imagination,' which can be defined as the predisposition to seek out quantitative and linear causal explanations that have little respect for the complexity of the actual human world." We need to tell ourselves stories about the world (Martha Nussbaum's "narrative imagination") because, as Nussbaum writes, "citizens cannot relate well to the complex world around them by factual knowledge."

OK, but what's so wrong with using numbers anyway? So they are imperfect, but they're better than nothing, one might argue, and at best they are a complement to the narrative imagination, providing a valuable check on the biases of storytelling. (In fact, I will argue this shortly.) Morozov counters that not only do numbers not provide an adequate representation of the world, but they displace all other possible representations. "It's this imperialistic streak of quantification--its propensity to displace other meaningful and possibly intangible ways of talking about a phenomenon--that is so troubling," he writes. This, in turn, leads to a "narrowing of vision." Numeric imagination crowds out narrative imagination.

Soon we reach his main attack on quantification, which contains some of the best sections in the entire book. Here, he commands a flurry of arguments and thinkers to take on quantification as a practice. "Nietzsche understood that quantifiable information might be nothing but low-hanging fruit that is easy to pick but often thwarts more ambitious, more sustained efforts at understanding," he begins the assault. It is hard to measure the things that matter, Morozov asserts, and what you can measure is almost always a simplification of the world. The political and moral assumptions and implications that should have traveled with the quantification get stripped out, letting all those things move unchallenged into discourse. He accuses quantification of laundering politics, essentially, and I think he's damn right a lot of the time.

He makes a smaller, but no less powerful, critique of quantification as an enabler of what I call overoptimization. Citing technology critic Steven Talbott, he warns of the danger of positive feedback loops driving forward only those aspects of society that can be easily modeled and computed.

We need an ethics of quantification, Morozov cries, and I cry with him. When is it good? When is it bad? How can it be used to further our ends, as opposed to being celebrated as its own end?

And finally, we get to his objections at the very top of this huge pile of philosophy, history, and political theory. You can imagine, if this was your understanding of the world, as rooted in your scholarship, why it might get your hackles up when Gary Wolf says, "Many of our problems come from simply lacking the instruments to understand who we are. ... We lack both the physical and the mental apparatus to take stock of ourselves. We need help from machines."

Wolf, in this account, takes the hit for the entire enterprise of data collection. But he also endures a withering assault for his conception of self. "Members of the Quantified Self movement may not always state this explicitly, but one hidden hope behind self-tracking is that numbers might eventually reveal some deeper inner truth about who we really are, what we really want, and where we really ought to be," Morozov writes. "The movement's fundamental assumption is that the numbers can reveal a core and stable self--if only we get the technology right."

To Morozov, quantifying the self is a crime against the self. It forecloses possibilities, narrows one's vision. And worse, it does it for others, not just you. Privacy, he argues persuasively, can only be understood in social context: What I choose to disclose impacts your future disclosure options.

"Your choice to quantify yourself (for personal preference or profit) thus has deep implications if it necessitates my 'choice' to quantify myself under the pressure of unraveling," he quotes legal scholar Scott Peppet. "What if I just wasn't the sort of person who wanted to know all of this real-time data about myself, but we evolve an economy that requires such measurement? What if quantification is anathema to my aesthetic or psychological makeup; what if it conflicts with the internal architecture around which I have constructed my identity and way of knowing?"

There you have it: measurement, quantification, facts, the possibility of understanding the self through numbers. All are dispatched in one throbbing mass of interconnected passages.

Then Morozov attempts to think through the ethics of quantification in a short section on education and a larger dive into nutrition, calorie-counting, and fitness apps.

And it's here that I think the flaws in Morozov's approach become clear. Despite the rigorous philosophical underpinnings, the sheer thoroughness of the thoughts in this chapter, there's something missing: people. And I don't mean that in a loosey-goosey way. His clever use of anecdotes makes it appear as if he's discussing the way that human beings interact with self-tracking devices, but they are not a serious account of practice.

Morozov's book is an innovation- and product-centered account of the deployment of technology. It focuses on marketing rhetoric, on the stories Silicon Valley tells about itself. And it refutes these stories with all the withering contempt that a brilliant person can muster over the course of a few years of dedicated reading and writing. But it does not devote any time to the stories the bulk of technology users tell themselves. It relies on wild anecdotes from newspaper accounts as if they were an adequate representation of the user base of these technologies. In fact, the sample is obviously biased by reporters writing about the people who sound the most out there.

"Celebrating quantification in the abstract, away from the context of its use, is a pointless exercise," Morozov writes, and yet he ends up excoriating quantification in the abstract. When he does apply his thinking to the specific case of nutrition aids, it is with some serious handwaving. Calories are not an adequate measure of overall nutritional content, he writes, and thinking narrowly about nutritional content is a boon for food companies, and maybe calories aren't even really the problem. All fine and valid ideas, but knowing how many calories you eat is a good starting point for good health, no? This has been well established by the medical and public-health literature. And, in any case, tracking one's caloric intake is not a search for a "core and stable self." And if your calorie counter doesn't share your data, it could be a private practice. What if you write it in a notebook, as has been done for decades, or in the iPhone's notes, rather than an official app? Is that OK? What about non-tweeting scales? Are those anathema as well? Should the ethical concerns Morozov presents really prevent actual human beings from trying to understand the basics of their food intake?

Or take the use of pedometers, gussied up into packages like the Nike FuelBand, Jawbone UP, or Fitbit. There are hundreds of thousands of pedometers and other activity monitors out there in America, but Morozov does not try to investigate how such devices are used. Are the people buying Fitbits and Nike FuelBands trying to reveal deep inner truths about themselves? Are they sharing every bit and bite with friends? Or are they trying to lose a few pounds in private?

Look at what Amazon can tell you about the market for these devices: people who bought Fitbits recently also bought diet books, scales, and multivitamins. While Morozov locates self-tracking "against the modern narcissistic quest for uniqueness and exceptionalism," it strikes me that I've yet to meet someone wearing a fitness tracker who wasn't engaged in that least unique of American activities: weight management.

There are structural reasons for this. Americans are trying to deal with an "obesogenic" environment. Where and how we live is making us fat, relative to Americans of the past and many other countries. Tens of millions of people have low-activity jobs or don't work, plus access to lots of relatively inexpensive food. We move around in a built environment that militates against actually moving one's body. Of course, there are other non-technological solutions to this problem: reform the Farm Bill, regulate unhealthy foods, change distribution systems in low-income neighborhoods, redesign food consumption experiences under public control, and create denser neighborhoods that encourage walking or biking as transportation. And, yes, activists of many different stripes are working on precisely these sorts of proposals.

Journalists like Michael Pollan have spent years explicating these hard, hard problems, and what policies might alleviate them. But reform remains elusive, and not for the reason that Morozov states. "One potential problem with quantification is that it encourages the government not to bother with painful structural changes and simply to delegate all problem solving to citizens," Morozov argues. "Why bother with regulating highly processed foods or improving access to farmers markets and prohibiting fast-food chains from advertising to youngsters? After all, we can simply empower individual citizens to monitor how many calories they consume and not bother with any of these initiatives, pretending that obesity is just the result of weak-willed individuals ignorant of what they are eating."

But the problem is: This is already the default posture that companies exploit to fight agriculture and food-system reform. Self-tracking did not create this perception of individuals, nor do self-tracking dollars sustain the corporate political fight. The real political action is elsewhere. It is simply not true that wearing a pedometer or other activity monitor is actually hurting activism by giving policymakers a technological, non-collective loophole. Or if it is, that effect sits somewhere below the top 25 reasons that changing our farm and development practices is such a difficult political proposition. You'll find it wedged in between the sugar beet and bat guano lobbies, far below where the actual game is.

This is what I mean when I say that Morozov sometimes loses sight of the relative significance of his critiques. Despite all the important foundational work he's done, Morozov falls prey to his own intellectual creation, technology-centrism.

Without a functioning account of how people actually use self-tracking technologies, it is difficult to know how well their behaviors match up with Morozov's accounts of their supposed ideology. While he argues that the numeric and narrative imaginations cannot co-exist, most people are less dogmatic about how data could be used. People are pretty good, I think, at integrating what data they get from the outside world with their own theories of life and experience. We know the number on an odometer is not the only way to judge the condition of a car, yet we remain susceptible to the stories of a good used-car salesperson.

Morozov only supplies a single anecdote of a normal user of self-tracking technology. This account, drawn from Forbes reporter Kashmir Hill's experience, demonstrates precisely that self-tracking will always be embedded within other types of thought, even though Morozov does not recognize it as such.

[Hill] expresses a sense of befuddlement over what to do with the results of one such self-tracking experiment. Thanks to some clever software, she finds out, "I'm happiest when drinking at bars (duh); least happy on planes and at work (ahem); Sunday is my happiest day of the week followed by Wednesday; I'm just as happy alone as with other people, and I'm happier interacting with my ex than with my current boyfriend." What to do now, though, Hill doesn't know. "I'm at a slight loss for what to do with these results. Does this mean I should spend more time in bars and less time at work to optimize my happiness? And should I rethink my relationship?"

The problem is that, as firm, scientific data, these results have no standing. As moral prompts to action or conclusions drawn from months of self-reflection, they hold no standing either, for clearly Hill did not deliberate much about her drinking or working habits in the process of using the software.

Well, first, there's no real reason to think Hill did not deliberate much about her "drinking or working habits." That's just an assumption. Maybe she obsesses about them. Second, she's in the process of using her narrative imagination to connect the data to her life. Isn't this the very way that Morozov wishes people used self-tracking, to gain self-knowledge?

Relative to the caricatures of people using self-tracking devices in the book, I'm guessing most people are a lot more like Hill or me. I like knowing how many steps I've taken as a decent proxy for physical activity. It helps keep me honest about how much exercise I'm getting because otherwise I'm apt to lie to myself: "Well, I didn't go running yesterday, but I walked a lot." I like having a check against my own unreliable narration. Is this some sort of crime against the concept of a subjective self? Why? Is it super important that I only know if I bullshit myself by introspection and no other means?

Or, if I count my steps but I also do yoga, for which I receive no steps, am I somehow unable to reconcile these two things in my own mind? Why wouldn't I see a graph of steps going up, then down, and say to myself, "Oh, those are the days I did yoga." We don't assume the tools are perfect. Who would? We've all used a cell phone. Humans are not idiots.

As for the social privacy concerns Morozov raised, they are well-taken. But again, the way people use these technologies complicates his picture. From what I've seen in health tracking, our social norms are proving remarkably resilient to oversharing. For every weirdo tweeting his weight, there are 9,999 others keeping theirs to themselves. There is no revolution afoot in the way that people deal with health or fitness related information. Most tracking is done in private and held closely. On the service I use, Jawbone's UP, there is no way to share information to Facebook, and that's by design.

Morozov argues that sharing health data is going to become as widespread as sharing on Facebook (never mind the number of profiles now locked down from prying eyes). But why would this be? He provides no evidence for the value or applicability of the analogy. That's just buying the marketing talk hook, line, and sinker. Morozov is willing to do so because it aids in the argument that self-tracking poses a grave danger to non-trackers; he argues that people who refuse to track will be punished socially and in the health-care marketplace.

But to believe that, we'd have to believe: A) fitness and health tracking will be ubiquitous, or at least widespread; B) the data captured will be shared in a similarly widespread way; C) this sharing will occur with such ubiquity and force that it will constitute a form of social coercion; D) non-tracking deviants will be punished for their refusal; and E) the shared data will prove predictive and valuable to insurance, health care, and other interested companies.

So far, A is the only proposition here that seems to have any basis out there in the world. B, C, D, and E are all hypothetical propositions with very little basis, as far as I can tell. Morozov's intimations that this could change are not evidence that it has actually happened, nor that it is happening, nor that it is likely to happen. There is decent evidence that people are *not* going to become obsessive tracker-sharers. After all, measures we know are correlated with health -- blood pressure, cholesterol, weight, BMI, etc. -- are already widely available with no fancy technology, and you don't see most people sharing these things very willingly outside their doctors' offices. The minority that do share have not reshaped the medical system.

Is Morozov's critique a valuable check on the fantasies of a world transformed by self-tracking devices? Yes, definitely. But given the crushing toll that obesity-related problems are taking in America, and given the intractability of the political problems creating the obesogenic environment, is it possible that individual-scale solutions could be a partial and temporary aid in people's efforts to lose weight? I think so. To be clear: If a given self-tracking device helps you forestall getting diabetes and losing your limbs, who cares if you incidentally provide support for the thesis of Gary Wolf's book?

On the other hand, Morozov's argument about self-tracking through smart energy and water meters works well. In that case, smart meters really do provide rhetorical cover for corporate and government actors to ignore making larger scale changes in the energy system. And worse, numerous studies have shown that individual-scale efficiency interventions are small potatoes on a percentage basis. Only a few percent of people actively manage their energy usage and, of those, only a few bring it down considerably.

Energy requires city, state, national, and global solutions, yet politicians want to believe that smart meter deployments that lead to smarter individual energy use can stave off climate change. At best, they are a good answer not up to the scale of the problem; at worst, they are a genuine distraction from and detriment to climate action. And I say this as someone who has Nest thermostats installed in his home. They're great appliances, but I think my toaster is about as likely to change the planet's fate as they are.

* * *

Morozov has always been a remarkable intellectual hit man. He can bulldoze anybody's ideas about anything. But when the subject has turned to what we should do, rather than what we shouldn't, he is less precise. It's a lot to ask of a critic to both demolish the existing ideology of technology and replace it with something better, but Morozov has never had small ambitions. Yet his advice, distilled from all the theory and scholarship available, consists of rather hoary exhortations:

  • "The trick here is to resist the simplifying temptations of techno-optimism and techno-pessimism and to assess each case of technological intervention on its own merits."
  • "We'd be far better off examining individual technologies on their own terms, liberated from the macroscopic fetishes of Silicon Valley."
  • "We must not fixate on what this new arsenal of digital technologies allows us to do without first inquiring what is worth doing."
  • "But we should not lose sight of the benefits that subjectivity plays in art; much good art is meant to shock and provoke."
  • "This doesn't mean that we should encourage our politicians to lie, just that we should remember that lies can often serve enabling functions, and while in many cases they will be enabling corruption and laziness, in others they will enable compromise and hope."
  • "Once we leave the confines of the grandiose debates about 'Technology' and 'the Internet,' another way of talking and thinking becomes possible, one that is technologically literate, attentive to details, mindful of legal and economic circumstances, and historically informed. It doesn't reject technological solutions per se; it just wants to question their appropriateness in each and every situation and perhaps to design a way for the community to continue debating such appropriateness even once a seemingly tiny and inconsequential technology engenders a giant sociotechnological system to support itself."

And it's worth asking: Who is the "we" in all of this? It's the we of the op-ed idiom, of course. But Morozov devotes such attention to actors and institutions, individual CEOs and thinkers. He requires such specificity from others. Yet in his public policy calls, suddenly, the actors recede and the putative societal we emerges.

He does have a few excellent suggestions in his positive program. One sparkling idea is an audit board for algorithms, allowing companies to maintain secrecy while ensuring that anti-social or discriminatory practices have not been encoded within them. Another is a brief sketch of a "post-Internet" model for thinking about digital technologies. These things are good.

In his final chapter, Morozov attempts to describe a method of gadget making that meets his ethical criteria. But the products he points to are, to put it frankly, broken. He's taken design fictions that are meant to encourage "user-unfriendliness" and put them at the center of what technology should be. Appliances that act erratically when your energy usage rises. A radio set that changes stations when consumption climbs. An extension cord that twists in pain when devices in standby mode are left plugged into it. A lamp that dims unless you keep touching it.

I think he's mistaken the means for the message: broken things make you focus on their brokenness, not whatever the brokenness is supposed to point to. Would a car that randomly runs out of gas make you consider the pipeline infrastructure and ecological destruction that our oil economy requires? Or would you just go get a new car? His advice is not the sort of thing technologists can follow.

Morozov acknowledges that, "without a thorough theoretical scaffolding, all these 'erratic appliances' and 'technological troublemakers' can be easily dismissed as quirks of fancy postmodern designers," but the truth is: No matter what theoretical scaffolding you give them, no one wants a radio that gets fuzzy when it's near electrical fields. Almost no one will use these things.

That's important because it is in using things that users discover and transform what those things are. Examining ideology is important. But so is understanding practice. What will make Morozov's account so generative is precisely how much has been left out about how people use things. People like David Edgerton at Imperial College London have argued that scholars who study "technology" need to break away from thinking about it as an advancing wave of new things and focus on what people are actually using, day-by-day.

I remember sitting with Morozov at Stanford in March of last year, when he told me that his goal for the work was to destroy the concept of "the Internet" in the way that historians of science had destroyed the concept of "science." But try asking a scientist if that's happened. The Berkeley anthropologist of science, Paul Rabinow, put it well. "A major gap has developed today between scientists' self-representation and the representations of scientists by those who study them," he wrote in a 1996 book. "While this discrepancy is of little consequence for practicing scientists (most will have never heard of its existence), it provides much of the subject matter and the authority for the social studies of science."

And while many scientists haven't noticed they've lost some authority in the rest of the academy or among the public at large, others cannot escape this fact. I think the worst consequence of destabilizing scientists' authority in the public sphere has been to give fertilizer and sunshine to climate change skeptics. The skeptics' publications on climate institutions and personalities are like weaponized science and technology studies papers. And we may all end up paying the price of inaction as a result of their incredibly effective lobbying.

If it is worth pointing out that there are costs to any technological solution, as Morozov does, it is also worth noting that ideas can have costs, too. We don't know how Morozov's arguments will be deployed in the future, but I wouldn't doubt it will sometimes be by people who want to support the continuance of unjust political and social arrangements.

Imagine how words like these might be applied by someone other than Morozov:

That so much of our cultural life is inefficient or that our politicians are hypocrites or that bipartisanship slows down the political process or that crime rates are not yet zero -- all of these issues might be problematic in some limited sense, but they do not necessarily add up to a problem worth solving.