Alexis Madrigal is a senior editor at The Atlantic, where he oversees the Technology channel. He's the author of Powering the Dream: The History and Promise of Green Technology.
The New York Observer calls Madrigal "for all intents and purposes, the perfect modern reporter." He co-founded Longshot magazine, a high-speed media experiment that garnered attention from The New York Times, The Wall Street Journal, and the BBC. While at Wired.com, he built Wired Science into one of the most popular blogs in the world. The site was nominated for best magazine blog by the MPA and best science Web site in the 2009 Webby Awards. He also co-founded Haiti ReWired, a groundbreaking community dedicated to the discussion of technology, infrastructure, and the future of Haiti.
He's spoken at Stanford, CalTech, Berkeley, SXSW, E3, and the National Renewable Energy Laboratory, and his writing was anthologized in Best Technology Writing 2010 (Yale University Press).
Madrigal is a visiting scholar at the University of California at Berkeley's Office for the History of Science and Technology. Born in Mexico City, he grew up in the exurbs north of Portland, Oregon, and now lives in Oakland.
In 1967, the National Heart Institute and the Atomic Energy Commission began a ten-year effort to develop an artificial heart powered by plutonium-238. The atomic hearts would have pumped human blood with the energy provided by the radioactive decay of that isotope. The effort failed thanks to technical challenges, intra-governmental infighting, and the souring of the public mood about both medical devices and atomic energy, but it remains a fascinating episode at the confluence of two grand American dreams.
This is the story told by Shelley McKellar, who teaches the history of medicine at the University of Western Ontario, in the most recent issue of the quarterly journal Technology and Culture.
The federally funded programs continued for a decade, sometimes at cross-purposes, and they foreshadowed the rhetoric that came to surround later attempts at creating other types of artificial hearts in the 1980s. There are lessons to be learned, McKellar implies, about how the way people receive a particular technology changes along with the social and regulatory environment. Ideas that make sense one decade can seem totally ridiculous ten years later.
But, you might be asking yourself, "What in the hell was anyone even thinking trying to stick a radioisotope generator into a human being's chest cavity?"
If you take the goal for an artificial heart to be the true replacement of the human heart in perpetuity, then power becomes a primary concern, trumping all other engineering constraints. When contractors like Westinghouse Electric and McDonnell-Douglas offered bids for the government work, they made sure to present the atomic solution as the only possibility.
"Each proposal declared the radioisotope-powered engine as the only possible energy solution for a completely implantable device." McKellar explained. "The ideal implantable device meant no external lines or connections from the patient to outside power sources and a ten-year reliability span. By comparison, conventional batteries required recharging multiple times each day from an external source and would need to be explanted from patients every two years."And, if you're a promoter of the value of radioisotopes in all things, then you might go looking for places where power is a primary concern. As one William Mott, who became the project coordinator the Atomic Energy Commission's atomic heart program put it, "We were always on the alert for new problems to match with our solutions."
Looking back, it's fascinating how confident the scientists of the time were that the engineering challenges of embedding a radioactivity-powered device into a body could be overcome. The NHI and AEC battled over the proper way of conducting the research: the NHI created a non-atomic intermediary device that they implanted into animals, while the AEC promoted an all-at-once design strategy. But both agencies saw the problems as fundamentally soluble.
With the benefit of 50 years of hindsight, we know that, so far at least, there is no "ideal implantable device." Total artificial hearts (as distinguished from heart assist devices) are, at best, a stopgap measure. They're used as a last-ditch bridge measure while patients await transplants of other human hearts. We've learned a lot of other things about cardiology in the last 50 years, but one thing remains: nothing we can make comes close to working as well as your heart except another human heart.
That is to say: The craziest part of the atomic artificial heart program wasn't the atomic part.
The media of the drone war is not like the media of World War II or Vietnam. Largely, it does not exist outside official government releases. We see the aftermath of explosions, sometimes, but almost never the actual movements of unmanned aerial vehicles as they strike in Somalia or Afghanistan. The secretive and globe-spanning nature of the conflict means that journalists are rarely close to the action. And even if they were positioned nearby, it would be next to impossible to catch a drone in an act of war.
And yet, James Bridle notes, this image, nominally of a Reaper drone, exists and it is everywhere.
He calls it "the most widely reproduced image" of a drone and says it's become the "canonical" version of the technology. Because of its ubiquity it has come to symbolize the drone war, at least within some technological domains like Google Images, where it is the first result returned when you search "drone."
And the picture, decontextualized and then recontextualized, even shows up on the streets of Karachi. Here, we see a protester posing in front of a poster-sized version for a Reuters photographer.
But working on a hunch, Bridle did a little snooping and discovered that the image is a fiction, one that has come to represent the very real drone war.
The Canon Drone is indeed entirely unreal. A close inspection, and comparison with other Reaper images, including 09-4066, bears this out almost immediately. The level of detail is too low: missing hatches on the cockpit and tail, the shape of the air intake, the greebling on the fins and body. That 'NY' on the tail: it's not aligned properly, it's a photoshop. Finally, the Canon Drone's serial, partly obscured, appears to be 85-566. The first two numbers of USAF serials refer to the year an aircraft entered service: there were no Reapers back in 1985 (development didn't even begin until 2001).
The Canon Drone does not exist, it never has. It is a computer-generated rendering of a drone, a fiction. It flies over an abstracted landscape -- although perhaps the same one as another canonical image, this Predator in flight, which, while unmarked, at least appears worn enough to be believable.
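Bridle's serial-number check is the kind of thing anyone can reproduce. Here's a minimal sketch of that logic in Python -- the two-digit prefix of a serial like "85-566" or "09-4066" is expanded to a year and compared against when the Reaper program even began. The helper names and the century-pivot rule are mine, for illustration only:

```python
def serial_year(serial: str, pivot: int = 50) -> int:
    """Expand the two-digit year prefix of a USAF-style serial ('85-566' -> 1985)."""
    prefix = int(serial.split("-")[0])
    return 1900 + prefix if prefix >= pivot else 2000 + prefix

def plausible_reaper(serial: str, development_began: int = 2001) -> bool:
    """A serial dated before the aircraft's development began can't be real."""
    return serial_year(serial) >= development_began

print(serial_year("85-566"), plausible_reaper("85-566"))    # 1985 False
print(serial_year("09-4066"), plausible_reaper("09-4066"))  # 2009 True
```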
When I tweeted this story, user @piombo did some quick sleuthing. He dropped the image into Google and added the text search "rendered." It popped up within a forum devoted to 3D modeling in a February 2009 post by Michael Hahn, who created this image. I emailed Hahn to learn more about how the image was created. He sent over a quick narrative and the original rendering from the 3D modeling software package MODO.

"I then pieced together the planes insignia for references images found on wikipedia and google searches," Hahn said. "I choose the 174th attack wing insignia because they are located about 20 miles from where I live." That got the image to this state:
The background came from a now-difficult-to-find Flickr image of the Afghani landscape, and through the magic of Photoshop, Hahn had created this (check out the layers on the right side):
None of which answers why this particular rendering became the top ranking image of a drone, though Hahn has some ideas.
"I am not sure how it become the number one image of drones," Hahn told me. "I think at the time I created it was one of the few images available. The only places I posted the image online were to a couple 3d sites. Here. and here. People must have got the image from either one of those sites."
Why'd people buy this image, which, on even a little closer inspection, is clearly a rendering? Bridle thinks drones "always appear otherworldly." And truly, even in photographs I know are real, they seem more rendering than material object.
And, as importantly, I also think Americans craved (and crave) some way of understanding the war part of the drone war. How do these things actually work? How do they fire? How do they kill?
Hahn hinted at something like this in his own process. "I had never seen an image of a drone actually firing a missile so that is what I decided to create," he said. And suddenly, everyone else, who also had never seen a drone actually firing a missile, had a way of seeing with their own eyes.
When a series of EF5 tornadoes, the most powerful on the scale, hit Alabama and areas of surrounding states, houses were torn apart, their contents scattered by the winds. Almost all the photographs, diplomas, magazines, and objects were lost, but a few were found thanks to a collective effort organized through a Facebook page created by Patty Bullion, a resident of Lester, Alabama, population 111.
"I got on Facebook right after the storm," Bullion told ABC News about the page's creation. "A friend of mine who lives down the road posted that it was raining pictures -- falling out of the sky."
"A friend of mine who lives down the road posted that it was raining pictures."
The page she made, called "Pictures and Documents found after the April 27, 2011 Tornadoes," began with items she found in her own yard, but expanded as more people heard about the page and contributed belongings they'd found. Within a year, more than 100,000 people had "liked" the page and 1,700 items were returned to their owners through the simple matchmaking of the project.
This attracted the attention of John Knox, a weather and climate scientist at the University of Georgia. He'd studied meteorology at the University of Wisconsin-Madison, where Charles Anderson had done a pathbreaking study on the debris fallout from the Barneveld, Wisconsin, tornado, and was familiar with the work of John Snow at the University of Oklahoma, who extended the study of debris through aggregating historical newspaper accounts. Both efforts suffered from the same defect: it was hard to build a large enough dataset to offset the low precision of many reports. In the past, it was logistically and practically difficult to find a lot of people who had both lost and found items.
That is, until Bullion created her Facebook page, and through word-of-mouth, people across the region made it into the hub for returning items to their owners. Knox knew a novel dataset when he saw one, and he contacted Bullion, who allowed his students to access her Facebook account. They painstakingly took the postings and turned them into structured data that they could study. Out of respect for tornado victims, Knox decided against contacting people who'd lost items, sacrificing some data and precision. He called his decision-making process "data mining with a heart."
With that limitation in place, they set about figuring out which objects had defined beginning and endpoints. They were aided by the fact that many of the towns in which people lost and found items were geographically small, so they could circumscribe both poles of the trajectory easily. Still, they had to throw out 800 objects for which they could not ascertain decent geo-data.
What remained was the most impressive database of tornado debris takeoff and landing points ever assembled. The largest previous dataset (Snow's) had 163 objects drawn from decades of historical accounts. This was 934 objects from a single tornado outbreak.
In tornado studies, new work with dual-pol radar has been showing that debris gets very, very high in these storms. Riding 100 mile-per-hour updrafts within a tornado and slower but still strong updrafts within their parent thunderstorms means that light objects and paper are ending up miles in the air.
"There is a real sense [debris] is going up at least six kilometers into the storm," Knox said. "What I'm hearing from meteorologists who are using the dual-pol radar technology is that they are seeing debris at 20,000 feet and sometimes more."
Which would explain how, in Knox's study, some pieces of paper debris ended up more than 200 miles away. Their hypothesis, as noted, is that the debris shoots up through the tornadoes, where much of it is carried for up to about 100 miles and tends to fall slightly to the left of the tornado track, as the storms are pushed north by winds from the south. But some debris seems to ride the updrafts up and right out of the top of the thunderstorms. Up there, it would meet the jet stream, which would push the debris a long way and land it farther eastward than the tornado track or other debris.
"Trajectories based on the takeoff and landing points of lost-and-found objects revealed that most debris was deposited 10 degrees to the left of the average tornado track vector," Knox and his co-authors wrote. "However, objects that traveled the longest distance were found approximately 5 degrees to the right of the average tornado track vector."
That would explain the results we see below, where some debris has shifted over the paths of other objects in an eastward direction. "That had not been seen in any previous study, but it makes a lot of sense. Once you see it, you say, 'Oh, that's what happened,'" Knox told me.
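The geometry behind those left-and-right numbers is simple enough to sketch. Assuming each object's takeoff and landing points are known as latitude/longitude pairs, you can compute the trajectory's bearing and its signed deviation from the tornado's track. This is my own illustration of the calculation, not Knox's code, and the coordinates and track direction below are made up:

```python
import math

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from point 1 to point 2, in degrees clockwise from north."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    x = math.sin(dlon) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(x, y)) % 360

def deviation_from_track(takeoff, landing, track_bearing):
    """Signed angle between a debris trajectory and the tornado track.
    Negative means left of the track, positive means right."""
    bearing = initial_bearing(*takeoff, *landing)
    return (bearing - track_bearing + 180) % 360 - 180

# Hypothetical object on a northeast-moving (45-degree) tornado track:
print(deviation_from_track((34.20, -87.00), (34.90, -86.60), 45.0))  # negative: left of track
```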
As for the artifacts that became Knox's data, a new Facebook page, created by professional photo restorers, has sprung up. They're going through the photographs from the storm and trying to put the pieces back together again. Anyone can find it at "Pictures and Documents found after the April 27, 2011 Tornadoes RESTORED."
Here, we see children riding the casing for a nuclear weapon.
The casing is like the one that contained Fat Man, one of the two nuclear bombs that the United States dropped on Japan at the end of World War II. Americans deployed the original Fat Man on Nagasaki, where it killed an estimated 74,000 people. The casing you see above is located at White Sands Missile Range, near the Trinity Site, where the first Bomb was tested.
Via Alex Wellerstein, nuclear historian at the American Institute of Physics
Tucked inside Carl Zimmer's wonderful and thorough feature on de-extinction, a topic that got a TEDx coming out party last week, we find a tantalizing, heartbreaking anecdote about the time scientists briefly, briefly brought an extinct species back to life.
The story begins in 1999, when scientists determined that there was a single remaining bucardo, a wild goat native to the Pyrenees, left in the world. They named her Celia and wildlife veterinarian Alberto Fernández-Arias put a radio collar around her neck. She died nine months later in January 2000, crushed by a tree. Her cells, however, were preserved.
Working with the crude life-sciences tools of the time, José Folch led a Franco-Spanish team that attempted to bring the bucardo, as a species, back from the dead.
It was not pretty. They injected the nuclei from Celia's cells into goat eggs that had been emptied of their DNA, then implanted 57 of them into different goat surrogate mothers. Only seven goats got pregnant, and of those, six had miscarriages. Which meant that after all that work, only a single goat carried a Celia clone to term. On July 30, 2003, the scientists performed a cesarean section.
Here, let's turn the narrative over to Zimmer's story:
As Fernández-Arias held the newborn bucardo in his arms, he could see that she was struggling to take in air, her tongue jutting grotesquely out of her mouth. Despite the efforts to help her breathe, after a mere ten minutes Celia's clone died. A necropsy later revealed that one of her lungs had grown a gigantic extra lobe as solid as a piece of liver. There was nothing anyone could have done.
A species had been brought back. And ten minutes later it was gone again. Zimmer continues:
The notion of bringing vanished species back to life--some call it de-extinction--has hovered at the boundary between reality and science fiction for more than two decades, ever since novelist Michael Crichton unleashed the dinosaurs of Jurassic Park on the world. For most of that time the science of de-extinction has lagged far behind the fantasy. Celia's clone is the closest that anyone has gotten to true de-extinction. Since witnessing those fleeting minutes of the clone's life, Fernández-Arias, now the head of the government of Aragon's Hunting, Fishing and Wetlands department, has been waiting for the moment when science would finally catch up, and humans might gain the ability to bring back an animal they had driven extinct.
"We are at that moment," he told me.
That may be. And the tools available to biologists are certainly superior. But there's no developed ethics of de-extinction, as Zimmer elucidates throughout his story. It may be possible to bring animals that humans have killed off back from extinction, but is it wise, Zimmer asks?
"The history of putting species back after they've gone extinct in the wild is fraught with difficulty," says conservation biologist Stuart Pimm of Duke University. A huge effort went into restoring the Arabian oryx to the wild, for example. But after the animals were returned to a refuge in central Oman in 1982, almost all were wiped out by poachers. "We had the animals, and we put them back, and the world wasn't ready," says Pimm. "Having the species solves only a tiny, tiny part of the problem."
Maybe another way to think about it, as Jacquelyn Gill argues in Scientific American, is that animals like mammoths have to perform (as the postmodern language would have it) their own mammothness within the complex social context of a herd.
When we think of cloning woolly mammoths, it's easy to picture a rolling tundra landscape, the charismatic hulking beasts grazing lazily amongst arctic wildflowers. But what does cloning a woolly mammoth actually mean? What is a woolly mammoth, really? Is one lonely calf, raised in captivity and without the context of its herd and environment, really a mammoth?
Does it matter that there are no mammoth matriarchs to nurse that calf, to inoculate it with necessary gut bacteria, to teach it how to care for itself, how to speak to other mammoths, where the ancestral migration paths are, and how to avoid sinkholes and find water? Does it matter that the permafrost is melting, and that the mammoth steppe is gone?...
Ultimately, cloning woolly mammoths doesn't end in the lab. If the goal really is de-extinction and not merely the scientific equivalent of achievement unlocked!, then bringing back the mammoth means sustained effort, intensive management, and a massive commitment of conservation resources. Our track record on this is not reassuring.
In other words, science may be able to produce the organisms, but society would have to produce the conditions in which they could flourish.
Stare down at a zipper, and it makes little sense that this object's two sets of teeth would line the primary means of egress for one's penis during everyday bathroom use.
The original American zipper brand was Talon, for crying out loud.
Are there no alternatives? Of course there are. Zippers were not even in common usage until the 1920s, we find in Robert Friedel's study, "Zipper: An Exploration in Novelty." In 1937, a zipper company memo held, "Retailers were made to worry that they could be held legally liable if a man injured himself with the newfangled machine on his trouser fly." Nowadays, button-fly pants abound, selling alongside their more dangerous brethren. Also, velcro exists. We don't need zippers.
Perhaps zippered pants remain in circulation because harm to one's genitals only exists in jokes or urban legend. As University of Utah folklorist Jan Brunvand would have it, "[F]olkloric zipper stories, especially stories involving troublesome zipper flies on men's trousers, became part of the cultural history of the product."
Brunvand continues, "The possibility of a man zipping part of himself into a pants zipper fly must occur to many men." But really, who would believe that this happens?
Not even in a real hurry, or when the hole formed by the fly is uncomfortably narrow, or in a dimly lit bathroom could such a grave mistake ever be made. No one actually gets his penis stuck in the zipper of his pants, right?
Wrong. A new paper in urology journal BJU International puts data to the folklore: "Zip-related genital injury."
Between 2002 and 2010, 17,616 people went to the emergency room with zip-related genital injuries. And as the University of California, San Francisco team put it, "The penis was almost always the only genital organ involved." (Which is good news for testicles everywhere.) Those roughly 2,000 injuries per year represent about one-fifth of annual penile injuries and "amongst adults, zips were the most frequent cause of penile injuries."
The authors conclude that the problem affects both adults and children and that "practitioners should be familiar with various zip-detachment strategies for these populations."
For our age of lowered expectations, a new benediction: May you never have to become familiar with any zip-detachment strategy.
Via Brian Frank
A novel fear enters the nightmares of modern life: being snatched from above by a robot with an eagle-like talon.
Most days, American military drones engaged in combat across the world are scary enough. But some days, swarms of little drones are scarier. Other days it's drones with really, really high-resolution cameras. Or drones deployed by Homeland Security.
Today, The Verge brings word of a novel kind of drone behavior, as freaky as the last. This unmanned aerial vehicle has a claw dangling beneath it designed -- like an eagle talon -- for snatching stuff at high speed. We're talking "pickup velocities" of two to three meters per second, or about 7 miles per hour. Which is a little less terrifying.
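For the metrically disinclined, the conversion is easy to check; a meter per second is about 2.24 miles per hour:

```python
MPH_PER_MPS = 3600 / 1609.344  # one meter per second, expressed in miles per hour

for mps in (2.0, 3.0):
    print(f"{mps} m/s = {mps * MPH_PER_MPS:.1f} mph")
# 2.0 m/s = 4.5 mph
# 3.0 m/s = 6.7 mph
```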
Take a look at the video. I don't think my anti-drone hoodie -- or scarf -- would save me.
Evgeny Morozov's second book is a brilliant, confounding work of creative destruction.
What critic of literature does not both love and hate the subject of their scholarship? The very strength of a critic's love is what inspires such dogged meaning- and fault-finding in the reality of any work.
This is also true for writer and thinker Evgeny Morozov, though it is not literature but technology that must bear the privilege of his evisceration. His books read like letters from a jilted lover, full of accusations of unmet promises, lost potential, and occasionally, a glimmer of that initial spark of attraction.
And he is a truly great critic. Morozov's work reveals new things about how technology works in our society at this particular moment in time. His analysis may be cutting, but he doesn't hate technology. On the contrary, Morozov's ultimate goal is to destroy the ideology of technology, so that particular technologies can be used in specific situations without the baggage of other people's nonsense.
Morozov's second book, To Save Everything, Click Here: The Folly of Technological Solutionism, is the most wide-ranging and generative critique of digital technology I've ever read. There's so much substance to argue about between its covers. At the center of it all, there's a brilliant, idiosyncratic mind at work.
Describing and destroying two concepts -- "Internet-centrism" and "solutionism" -- forms the core of his book, and both are fascinating frames for the discourse surrounding our network technologies.
Internet-centrism is Morozov's name for the fascination that our society, and particularly its public intellectuals, has developed with the notion that the Internet is a stable and coherent force in our lives. He rails against the idea that this force shapes things autonomously, or that it has any inherent qualities, or that we have to listen to what "the Internet" wants on a topic like openness, for example. Morozov's goal is to force everyone to write the Internet with quotes -- like this: "the Internet." This, he feels, better implies the complexities of the Internet's social creation and casts doubt on its power as an independent force with its own ahistorical rules.
His analysis here is a full-frontal attack on the shorthand thinking that's come to dominate many discussions about the role of digital technologies in the world. It's a valuable contribution in many ways; he demands that we think seriously about the Internet, I mean, "the Internet." I do think that Morozov has succeeded in doing a lot of damage to the idea that "'the Internet' is a useful analytical category." And to perform a deconstruction in public and for a general reader is a feat of magic that borders on necromancy. Who knew people still wanted to read books like this?
Morozov's "solutionism" is something else altogether. In it, he's identified a key strain of modern political and social thought, synthesizing a wide variety of domains, technologies, and types of arguments into something we can ponder and argue about. I find myself coming back to this idea time and again while listening to advocates and opponents of particular technologies. I would not be surprised if describing the contours, origins, and failings of this way of thought are what Morozov is remembered for. I think it will become the concept that generates its own set of literature. He writes:
Recasting all complex social situations either as neat problems with definite, computable solutions or as transparent and self-evident processes that can be easily optimized--if only the right algorithms are in place!--this quest is likely to have unexpected consequences that could eventually cause more damage than the problems they seek to address. I call the ideology that legitimizes and sanctions such aspirations "solutionism." I borrow this unabashedly pejorative term from the world of architecture and urban planning, where it has come to refer to an unhealthy preoccupation with sexy, monumental, and narrow-minded solutions--the kind of stuff that wows audiences at TED Conferences--to problems that are extremely complex, fluid, and contentious. These are the kinds of problems that, on careful examination, do not have to be defined in the singular and all-encompassing ways that "solutionists" have defined them; what's contentious, then, is not their proposed solution but their very definition of the problem itself. Design theorist Michael Dobbins has it right: solutionism presumes rather than investigates the problems that it is trying to solve, reaching "for the answer before the questions have been fully asked." How problems are composed matters every bit as much as how problems are resolved.
This analysis, which runs throughout the entire book, is really, really interesting. I'm going to get into the details soon, but my main worry is that solutionism, even accepting Morozov's framing, contains some elements worth preserving. Indeed, there is a reading of this book (an unkind one, for sure) that finds it deeply anti-progressive and almost frighteningly supportive of the status quo in politics and elsewhere.
All of which makes the book a delight: It's a high-wire performance, a feat of intellectual daring. He goes to war with almost everybody else who thinks about the Internet's people, institutions, and technologies in the public eye: Nicholas Carr, Clay Shirky, David Weinberger, Tim Wu, Kevin Kelly, Farhad Manjoo, Steven Johnson, Gary Wolf, among others. Sometimes, he wins easily. Sometimes, he tangles himself into knots trying to defeat every possible enemy and defend against every possible counterargument. In all cases, he is worth reading, even if you vehemently disagree.
The only comparable experience I've ever had was reading Vaclav Smil on energy: Frustrating, enlightening, and counterintuitive in the best meaning of the word. Of course, if you're interested in the Internet, you should read it. And I think historians in 2030 or 2050 will use this book to highlight the anxieties and debates of our time, pretty much all of which it attempts to tackle at once.
On the other hand, I'm not sure that they'll read it as a fair or grounded representation of the state of technology. Morozov's mode, while learned and theoretically grounded, is not as deeply authoritative as it appears. There is little actual evidence that many of the phenomena he highlights are actually occurring in the way he says they are. Granted, that's not his focus. I recognize fully that Morozov's project is in the realm of ideas, ideology, and the sociology of knowledge.
But there's often not even an attempt to line up reality with his anecdotes and projections. He relies time and again on scenarios, little flights of fancy, that are neither thought experiment nor forecast, but something more opaque. Even within the logic of the book itself, it's difficult to compare his scenarios to one another. There's little consistency among them in terms of plausibility or time-scale. If you look closely, you're left wondering: is this something that is already happening, might happen in a year, could happen in 10 years, or is a logical possibility in a century? Quite-close-to-real scenarios are delivered with the same rhetorical weight as truly wild Morozovian nightmares.
Let me give just a few examples.
Here's one section from Morozov's chapter on predictive policing in which he introduces a real product, ShotSpotter, a microphone sensor system that lets police (in Oakland, say) identify where gunshots are fired. Watch how he slides from there to a much stranger idea without blinking:
These systems are not cheap--ShotSpotter reportedly charges $40,000 to $60,000 a year per square mile--but they are hardly the latest word in crime detection. Why bother with expensive microphones if smartphones can do the job just fine? It all boils down to designing an appealing and nonintrusive app and creating the right incentives--perhaps by appealing to the moral conscience of citizens or by turning crime reports into a game--so that citizens can take on some of the tasks of faulty sensors and easily distracted humans.
From an actual, deployed system to a nightmarish sousveillance scenario in one sentence. Could such a system work? Would this be appealing to institutional players or people? Why even bother, if you're the cops? Is anyone even thinking about doing this in one, five, or even 20 years? What gives him the idea this might happen? I don't know. There's certainly nothing in the quite extensive (and welcome) footnotes to explain this leap.
When police talk about predictive policing, they're talking about putting cops in the areas most likely to experience a crime. That's actually a far cry from "preventing" crime. In practice, the Los Angeles Police Department, which Morozov uses as his example, only has officers to patrol a tiny percentage of the city, even in the zones where a model might say crime is most likely to occur. The institutional reality of the LAPD is that they could never prevent a substantial percentage of all crimes, even if they knew precisely, not probabilistically, where such activity might occur. The same is true for every police department. So how likely is it that we'd prevent all crime, as Morozov impishly suggests throughout the chapter? It's not that I expect him to deal in the probabilities, but to couch his criticisms within a realistic framework.
Or how likely is it that tweeting about yogurt will bring police to your door? Is it more or less likely than the other scenarios Morozov discusses?
As companies like ECM Universe accumulate extensive archives of tweets and Facebook updates sent by actual criminals, they will also be able to predict the kinds of nonthreatening verbal cues that tend to precede criminal acts. Thus, even tweeting that you don't like your yogurt might bring police to your door, especially if someone who tweeted the same thing three years before ended up shooting someone in the face later in the day.
Or that acts of civil disobedience will become impossible, as in this remarkable bit of short fiction?
Now, imagine that [Rosa] Parks is riding one of the smart buses of the near future. Equipped with sensors that know how many passengers are waiting at the nearest stop, the bus can calculate the exact number of African Americans it can transport without triggering conflict; those passengers who won't be able to board or find a seat are sent polite text messages informing them of future pickups. A smart facial-recognition scheme--powered by video cameras at bus stops--keeps count of how many people of each race are waiting to board and divides the bus into two white and black sections accordingly. The bus driver--if there still is one--can tap into a big-data computer portal that, much like predictive software for police, produces historical estimates of how many black people are likely to be riding that day and calculates the odds of racial tension based on the weather, what's in the news, and the social-networking profiles of specific people at the bus stop. Those passengers most likely to cause tension on board are simply denied entry. Will this new transportation system be convenient? Sure. Will it give us a Rosa Parks? Probably not, because she would never have gotten to the front of the bus to begin with. The odds are that a perfectly efficient seat-distribution system--abetted by ubiquitous technology, sensors, and facial recognition--would have robbed us of one of the proudest moments in American history.
How are we to compare this to the tweeting-about-yogurt-brings-the-cops scenario, or the general-elimination-of-all-crime scenario, or the citizen-phone-surveillance scenario? Are they all equally likely? What evidence do we have to evaluate whether these are real thought experiments, predictions, or rhetorical devices? What are the odds, anyway, of any of these stories, even in concept, actually occurring?
Morozov also gives a weak-tea history of Parks herself:
This courageous act was possible because the bus and the sociotechnological system in which it operated were terribly inefficient. The bus driver asked Parks to move only because he couldn't anticipate how many people would need to be seated in the white-only section at the front; as the bus got full, the driver had to adjust the sections in real time, and Parks happened to be sitting in an area that suddenly became "white-only."
Parks did not just happen to be riding the bus in the spot where she was. Rather, she was a committed civil-rights activist with more than 10 years of activism under her belt and a plan for how to disrupt what was already a system designed to minimize disturbances. Parks' training, agency, and forethought are significant because they complicate the freaky scenario Morozov conjures in which no one could possibly find a way to protest a "smart" but unjust system powered by sensors and big data. The Parks incident was a calculated and principled act of defiance that was designed to strike exactly at a weak spot in the segregation system. It makes you think: Wouldn't other activists find their way around even Morozov's most implausibly nightmarish scenario? Not to mention that her act, while important, was one tiny piece of a movement that involved hundreds of thousands of people. Are we really supposed to believe that smart buses would have stopped the civil rights movement?
Of course, he might (and does) argue that these systems make it harder for dissidents, that they decrease the probability of people seeing civil disobedience, that the possibility of finding a way around the system is no reason to allow the creation of the system. I agree! And I think he is making a good and important warning rooted in deep, serious moral thought. That's precisely why I find myself wishing he had better, more anchored what-ifs.
The point is: These scenarios, and there are dozens and dozens of them, operate on a specific worldview and contain a likely set of actors and outcomes. Each one is an argument, in short, for which Morozov provides only the scantest evidence. These are fascinating speculations, informed by all the intellectual weapons of western civilization. But he's going on intuition.
I like his intuition. I value it. But I don't want to have to take his word for it.
Despite these narrative flaws, much of Morozov's chapter on predictive policing and situational crime prevention is brilliant. As an attack on the ideology of these concepts, it is devastating, especially through a fascinating application of the legal theorist Roger Brownsword's hard-hitting framework on the registers (moral, prudential, practicability) on which regulation can work. You could very well enter this chapter thinking you support predictive policing and come out the other end with a changed mind. It is that persuasive. But the means of raising the emotional stakes, even to this great end, strike me as dangerous. They end up looking like the mirror image of promoters' pamphlets. Like them, Morozov struggles to keep his own imaginings in proper perspective.
Now, I want to turn to one particular example: his drubbing of self-tracking. By focusing on this single case, we can go beyond the general pronouncements about his work to see the brilliant and frustrating individual moves that Morozov uses to make his arguments.
Morozov likes to build an argument from some anecdotes downward, starting with a seemingly preposterous idea drawn from our current reality, locating its intellectual foundations in a contemporary thinker's work and then drilling down relentlessly from there, looping back to the original target as he goes. In his chapter on self-tracking and the quantified self movement, it is Gary Wolf whom he goes after, and, to a lesser extent, Kevin Kelly. No matter what you think about the critiques themselves, Wolf and Kelly are well-chosen targets as they have been thinking about and promoting the generation of data about oneself for years.
Morozov argues forcefully against first self-quantification, then quantification, then the "numeric imagination," then measurement itself, and, finally, the objective fixity of facts. Do you see what happened there? We went from a debate about whether or not to wear a pedometer to a debate about whether numbers can adequately represent anything in the world. This movement happens with terrifying speed in Morozov's work.
I want to walk through this movement from the base upwards because I think it's his foundational criticisms that tend to be the best, and the arguments get less persuasive the further he gets from the philosophical bases of his objections.
Down at the bottom, Morozov displays a deep, well-founded distrust of the way humans construct models of the world with numbers. Despite considerable controversy, this type of thinking is prevalent and well-supported by a substantial literature in science and technology studies. "Bruno Latour distinguishes between 'matters of fact,' the old unrealistic way of presenting all knowledge claims as stable, natural, and apolitical," Morozov writes, "and 'matters of concern,' a more realistic mode that recognizes that knowledge claims are usually partial and reflect a particular set of problems, interests, and agendas."
This is a direct attack on whatever claims people might make that they have authority based on the neutral collection of data about "reality." He asks of these modes of investigation: "When do they suppress conflicting interpretations of reality? What do they conceal and make invisible, and is this something we can afford to lose sight of? How might they be invoked in the name of seemingly unrelated political projects?" And how might the answers to those questions change how we understand "the facts," such as they are presented?
These are important questions and they relate directly to his next target: measurement. He quotes a historian who has written about measurement to say, "we . . . need to keep reminding ourselves of the human purposes that led us to create [the measurement] in the first place--and where, if at all, it interferes with any of these purposes." Because our tools will always capture the world in imperfect ways.
Again, this is a vital and important chunk of foundational knowledge that is common in science and technology studies, but absolutely absent from most of the popular rhetoric about data, open or otherwise. Any human survey will have the mark of human hands upon it, and laundering that reality through numbers does not change the underlying nature of these knowledge creation projects.
Moreover, Morozov argues, using these numbers limits the powers of moral and social imagination that we might otherwise employ. "We can further contrast 'narrative imagination' with the somewhat oxymoronic 'numeric imagination,' which can be defined as the predisposition to seek out quantitative and linear causal explanations that have little respect for the complexity of the actual human world." We need to tell ourselves stories about the world (Martha Nussbaum's "narrative imagination") because, as Nussbaum writes, "citizens cannot relate well to the complex world around them by factual knowledge."
OK, but what's so wrong with using numbers anyway? So they are imperfect, but they're better than nothing, one might argue, and at best they are a complement to the narrative imagination, providing a valuable check on the biases of storytelling. (In fact, I will argue this shortly.) Morozov counters that not only do numbers not provide an adequate representation of the world, but they displace all other possible representations. "It's this imperialistic streak of quantification--its propensity to displace other meaningful and possibly intangible ways of talking about a phenomenon--that is so troubling," he writes. This, in turn, leads to a "narrowing of vision." Numeric imagination crowds out narrative imagination.
Soon we reach his main attack on quantification, which contains some of the best sections in the entire book. Here, he commands a flurry of arguments and thinkers to take on quantification as a practice. "Nietzsche understood that quantifiable information might be nothing but low-hanging fruit that is easy to pick but often thwarts more ambitious, more sustained efforts at understanding," he begins the assault. It is hard to measure the things that matter, Morozov asserts, and what you can measure is almost always a simplification of the world. The political and moral assumptions and implications that should have traveled with the quantification get stripped out, letting all those things move unchallenged into discourse. He accuses quantification of laundering politics, essentially, and I think he's damn right a lot of the time.
He makes a smaller, but no less powerful, critique of quantification as an enabler of what I call overoptimization. Drawing on technology critic Steven Talbott, he cites the danger of positive feedback loops driving forward only those aspects of society that can be easily modeled and computed.
We need an ethics of quantification, Morozov cries, and I cry with him. When is it good? When is it bad? How can it be used to further our ends, as opposed to being celebrated as its own end?
And finally, we get to his objections at the very top of this huge pile of philosophy, history, and political theory. You can imagine, if this was your understanding of the world, as rooted in your scholarship, why it might get your hackles up when Gary Wolf says, "Many of our problems come from simply lacking the instruments to understand who we are. ... We lack both the physical and the mental apparatus to take stock of ourselves. We need help from machines."
Wolf, in this account, takes the hit for the entire enterprise of data collection. But he also endures a withering assault for his conception of self. "Members of the Quantified Self movement may not always state this explicitly, but one hidden hope behind self-tracking is that numbers might eventually reveal some deeper inner truth about who we really are, what we really want, and where we really ought to be," Morozov writes. "The movement's fundamental assumption is that the numbers can reveal a core and stable self--if only we get the technology right."
To Morozov, quantifying the self is a crime against the self. It forecloses possibilities, narrows one's vision. And worse, it does it for others, not just you. Privacy, he argues persuasively, can only be understood in social context: What I choose to disclose impacts your future disclosure options.
"Your choice to quantify yourself (for personal preference or profit) thus has deep implications if it necessitates my 'choice' to quantify myself under the pressure of unraveling," he quotes legal scholar Scott Peppet. "What if I just wasn't the sort of person who wanted to know all of this real-time data about myself, but we evolve an economy that requires such measurement? What if quantification is anathema to my aesthetic or psychological makeup; what if it conflicts with the internal architecture around which I have constructed my identity and way of knowing?"
There you have it: measurement, quantification, facts, the possibility of understanding the self through numbers. All are dispatched in one throbbing mass of interconnected passages.
Then Morozov attempts to think through the ethics of quantification in a short section on education and a larger dive into nutrition, calorie-counting, and fitness apps.
And it's here where I think the flaws in Morozov's approach become clear. Despite the rigorous philosophical underpinnings, the sheer thoroughness of the thoughts in this chapter, there's something missing: people. And I don't mean that in a loosey goosey way. His clever use of anecdotes makes it appear as if he's discussing the way that human beings interact with self-tracking devices, but they are not a serious account of practice.
Morozov's book is an innovation- and product-centered account of the deployment of technology. It focuses on marketing rhetoric, on the stories Silicon Valley tells about itself. And it refutes these stories with all the withering contempt that a brilliant person can muster over the course of a few years of dedicated reading and writing. But it does not devote any time to the stories the bulk of technology users tell themselves. It relies on wild anecdotes from newspaper accounts as if they were an adequate representation of the user base of these technologies. In fact, the sample is obviously biased by reporters writing about the people who sound the most out there.
"Celebrating quantification in the abstract, away from the context of its use, is a pointless exercise," Morozov writes, and yet he ends up doing excoriating quantification in the abstract. When he does apply his thinking to the specific case of nutrition aids, it is with some serious handwaving. Calories are not an adequate measure of overall nutrition content, he writes, and thinking narrowly about nutritional content is a boon for food companies, and maybe calories aren't even really the problem. All fine and valid ideas, but knowing how many calories you eat is a good starting point for good health, no? This has been well-established by the medical and public-health literature. And, in any case, tracking one's caloric intake is not a search for a "core and stable self." And if your calorie counter doesn't share your data, it could be a private practice. What if you write it in a book as has been done for decades, or in the iPhone's notes, rather than an official app? Is that OK? What about non-tweeting scales, are those anathema as well? Should the ethical concerns Morozov presents really prevent actual human beings from trying to understand the basics of their food intake?
Or take the use of pedometers, gussied up into packages like the Nike FuelBand, Jawbone UP, or Fitbit. There are literally hundreds of thousands of pedometers and other activity monitors out there in America, but Morozov does not try to investigate how such devices are used. Are the people buying Fitbits and Nike FuelBands trying to reveal deep inner truths about themselves? Are they sharing every bit and bite with friends? Or are they trying to lose a few pounds in private?
Look at what Amazon can tell you about the market for these devices: people who bought Fitbits recently also bought diet books, scales, and multivitamins. While Morozov locates self-tracking "against the modern narcissistic quest for uniqueness and exceptionalism," it strikes me that I've yet to meet someone wearing a fitness tracker who wasn't engaged in that least unique American activity: weight management.
There are structural reasons for this. Americans are trying to deal with an "obesogenic" environment. Where and how we live is making us fat, relative to Americans of the past and many other countries. Tens of millions of people have low-activity jobs or don't work, and have access to lots of relatively inexpensive food. We move around in a built environment that militates against actually moving one's body. Of course, there are other non-technological solutions to this problem: reform the Farm Bill, regulate unhealthy foods, change distribution systems in low-income neighborhoods, redesign food consumption experiences under public control, and create denser, more walkable neighborhoods that encourage walking or biking as transportation. And, yes, activists of many different stripes are working on precisely these sorts of proposals.
Journalists like Michael Pollan have spent years explicating these hard, hard problems, and what policies might alleviate them. But reform remains elusive, and not for the reason that Morozov states. "One potential problem with quantification is that it encourages the government not to bother with painful structural changes and simply to delegate all problem solving to citizens," Morozov argues. "Why bother with regulating highly processed foods or improving access to farmers markets and prohibiting fast-food chains from advertising to youngsters? After all, we can simply empower individual citizens to monitor how many calories they consume and not bother with any of these initiatives, pretending that obesity is just the result of weak-willed individuals ignorant of what they are eating."
But the problem is: This is already the default posture that companies exploit to fight agriculture and food-system reform. It is not self-tracking that has created this perception of individuals, nor is it self-tracking corporate dollars that sustain their political fight. The real political action is elsewhere. It is simply not true that wearing a pedometer or other activity monitor is actually hurting activism by giving policymakers a technological, non-collective loophole. Or if it is, those effects are somewhere down there below the top 25 reasons that changing our farms and development practices are difficult political propositions. You'll find it wedged in between the sugar beet and bat guano lobbies, far below where the actual game is.
This is what I mean when I say that Morozov sometimes loses sight of the relative significance of his critiques. Despite all the important foundational work he's done, Morozov falls prey to his own intellectual creation, technology-centrism.
Without a functioning account of how people actually use self-tracking technologies, it is difficult to know how well their behaviors match up with Morozov's accounts of their supposed ideology. While he argues that the numeric and narrative imaginations cannot co-exist, most people are less dogmatic about how data could be used. People are pretty good, I think, at integrating what data they get from the outside world with their own theories of life and experience. We know the number on an odometer is not the only way to judge the condition of a car, and remain susceptible to the stories of a good used car salesperson.
Morozov only supplies a single anecdote of a normal user of self-tracking technology. This account, drawn from Forbes reporter Kashmir Hill's experience, demonstrates precisely that self-tracking will always be embedded within other types of thought, even though Morozov does not recognize it as such.
[Hill] expresses a sense of befuddlement over what to do with the results of one such self-tracking experiment. Thanks to some clever software, she finds out, "I'm happiest when drinking at bars (duh); least happy on planes and at work (ahem); Sunday is my happiest day of the week followed by Wednesday; I'm just as happy alone as with other people, and I'm happier interacting with my ex than with my current boyfriend." What to do now, though, Hill doesn't know. "I'm at a slight loss for what to do with these results. Does this mean I should spend more time in bars and less time at work to optimize my happiness? And should I rethink my relationship?"
The problem is that, as firm, scientific data, these results have no standing. As moral prompts to action or conclusions drawn from months of self-reflection, they hold no standing either, for clearly Hill did not deliberate much about her drinking or working habits in the process of using the software.
Well, first, there's no real reason to think Hill did not deliberate much about her "drinking or working habits." That's just an assumption. Maybe she obsesses about them. Second, she's in the process of using her narrative imagination to connect the data to her life. Isn't this the very way that Morozov wishes people used self-tracking, to gain self-knowledge?
Relative to the caricatures of people using self-tracking devices in the book, I'm guessing most people are a lot more like Hill or me. I like knowing how many steps I've taken as a decent proxy for physical activity. It helps keep me honest about how much exercise I'm getting because otherwise I'm apt to lie to myself: "Well, I didn't go running yesterday, but I walked a lot." I like having a check against my own unreliable narration. Is this some sort of crime against the concept of a subjective self? Why? Is it super important that I only know if I bullshit myself by introspection and no other means?
Or, if I count my steps but I also do yoga, for which I receive no steps, am I somehow unable to reconcile these two things in my own mind? Why wouldn't I see a graph of steps going up, then down, and say to myself, "Oh, those are the days I did yoga." We don't assume the tools are perfect. Who would? We've all used a cell phone. Humans are not idiots.
As for the social privacy concerns Morozov raised, they are well-taken. But again, the way people use these technologies complicates his picture. From what I've seen in health tracking, our social norms are proving remarkably resilient to oversharing. For every weirdo tweeting his weight, there are the other 9,999 people keeping it to themselves. There is no revolution afoot in the way that people deal with health- or fitness-related information. Most tracking is done in private and held closely. On the service I use, Jawbone's UP, there is no way to share information to Facebook, and that's by design.
Morozov argues that sharing health data is going to become as widespread as sharing on Facebook (never mind the number of profiles now locked down from prying eyes). But why would this be? He provides no evidence for the value or applicability of the analogy. That's just buying the marketing talk hook, line, and sinker. Morozov is willing to do so because it aids in the argument that self-tracking poses a grave danger to non-trackers; he argues that people who refuse to track will be punished socially and in the health-care marketplace.
But to believe that, we'd have to believe: A) fitness and health tracking will be ubiquitous or at least widespread; B) the data captured will be shared in a similarly widespread way; C) this sharing will occur with such ubiquity and force that it will constitute a form of social coercion; D) non-tracking deviants will be punished for their refusals; E) the shared data will prove predictive and valuable to insurance, health care, and other interested companies.
So far, A is the only proposition here that seems to have any basis out there in the world. B, C, D, and E are all hypothetical propositions that have very little basis, as far as I can tell or have seen. Morozov's intimations that this could change are not evidence that it has actually happened, nor that it is happening, nor that it is likely to happen. There is decent evidence that people are *not* going to become obsessive tracker-sharers. After all, measures we know are correlated with health -- blood pressure, cholesterol, weight, BMI, etc. -- are already widely available with no fancy technology, and you don't see most people sharing these things very willingly outside their doctors' offices. The minority that do share have not reshaped the medical system.
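To see how much work that chain of propositions is doing, it helps to multiply things out. The numbers below are purely illustrative -- placeholders, not estimates of anything -- but they show how a stack of individually plausible maybes compounds into a long shot:

```python
from math import prod

# Illustrative (made-up) odds for propositions A through E above.
odds = {"A": 0.9, "B": 0.4, "C": 0.3, "D": 0.3, "E": 0.4}

joint = prod(odds.values())  # assumes the propositions are independent
print(f"joint probability: {joint:.3f}")  # ~0.013
```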
Is Morozov's critique a valuable check on the fantasies of a world transformed by self-tracking devices? Yes, definitely. But given the crushing toll that obesity-related problems are having in America and given the intractability of the political problems creating the obesogenic environment, is it possible that individual-scale solutions could be a partial and temporary aid in people's efforts to lose weight? I think so. To be clear: If a given self-tracking device helps you forestall getting diabetes and losing your limbs, who cares if you incidentally provide support for the thesis of Gary Wolf's book?
On the other hand, Morozov's argument about self-tracking through smart energy and water meters works well. In that case, smart meters really do provide rhetorical cover for corporate and government actors to ignore making larger scale changes in the energy system. And worse, numerous studies have shown that individual-scale efficiency interventions are small potatoes on a percentage basis. Only a few percent of people actively manage their energy usage and, of those, only a few bring it down considerably.
You need city, state, national, and global solutions to energy, yet politicians want to believe that smart-meter deployments that lead to smarter individual energy use can stave off climate change. At best, they are a good answer not up to the scale of the problem; at worst, they are a real distraction from, and detriment to, climate action. And I say this as someone who has Nest thermostats installed in his home. They're great appliances, but I think my toaster is about as likely to change the planet's fate as they are.
Morozov has always been a remarkable intellectual hit man. He can bulldoze anybody's ideas about anything. But when the subject has turned to what we should do, rather than what we shouldn't, he is less precise. It's a lot to ask of a critic to both demolish the existing ideology of technology and replace it with something better, but Morozov has never had small ambitions. Yet his advice, distilled from all the theory and scholarship available, consists of rather hoary exhortations:
And it's worth asking: Who is the "we" in all of this? It's the we of the op-ed idiom, of course. But Morozov devotes such attention to actors and institutions, individual CEOs and thinkers. He requires such specificity from others. Yet in his public policy calls, suddenly, the actors recede and the putative societal we emerges.
He does have a few excellent suggestions in his positive program. One sparkling idea is an audit board for algorithms, allowing companies to maintain secrecy while ensuring that anti-social or discriminatory practices have not been encoded within them. Another is a brief sketch of a "post-Internet" model for thinking about digital technologies. These things are good.
In his final chapter, Morozov attempts to describe a method of gadget making that meets his ethical criteria. The products he points to are, to put it frankly, broken. He's taken design fictions that are meant to encourage "user-unfriendliness" and put them at the center of what technology should be. Appliances that act erratically when your energy usage rises. A radio set that changes stations as energy consumption climbs. An extension cord that twists in pain when devices in standby mode are left plugged into it. A lamp that dims unless you keep touching it.
I think he's mistaken these broken objects for the lessons they're supposed to teach: broken things make you focus on their brokenness, not on whatever the brokenness is supposed to point to. Would a car that randomly runs out of gas make you consider the pipeline infrastructure and ecological destruction that our oil economy requires? Or would you just go get a new car? His advice is not the sort of thing technologists can follow.
Morozov acknowledges that, "without a thorough theoretical scaffolding, all these 'erratic appliances' and 'technological troublemakers' can be easily dismissed as quirks of fancy postmodern designers," but the truth is: No matter what theoretical scaffolding you give them, no one wants a radio that gets fuzzy when it's near electrical fields. Almost no one will use these things.
That's important because it is in using things that users discover and transform what those things are. Examining ideology is important. But so is understanding practice. What makes Morozov's account less generative than it could be is precisely how much has been left out about how people use things. People like David Edgerton at Imperial College London have argued that scholars who study "technology" need to break away from thinking about it as an advancing wave of new things and focus on what people are actually using, day by day.
I remember sitting with Morozov at Stanford in March of last year, when he told me that his goal for the work was to destroy the concept of "the Internet" in the way that historians of science had destroyed the concept of "science." But try asking a scientist if that's happened. The Berkeley anthropologist of science, Paul Rabinow, put it well. "A major gap has developed today between scientists' self-representation and the representations of scientists by those who study them," he wrote in a 1996 book. "While this discrepancy is of little consequence for practicing scientists (most will have never heard of its existence), it provides much of the subject matter and the authority for the social studies of science."
And while many scientists haven't noticed they've lost some authority in the rest of the academy or among the public at large, others cannot escape this fact. I think the worst consequence of destabilizing scientists' authority in the public sphere has been to give fertilizer and sunshine to climate change skeptics. The skeptics' publications on climate institutions and personalities are like weaponized science and technology studies papers. And we may all end up paying the price of inaction as a result of their incredibly effective lobbying.
If it is worth pointing out that there are costs to any technological solution, as Morozov does, it is also worth noting that ideas can have costs, too. We don't know how Morozov's arguments will be deployed in the future, but I wouldn't be surprised if they are sometimes deployed by people who want to support the continuance of unjust political and social arrangements.
Imagine how words like these might be applied by someone other than Morozov:
That so much of our cultural life is inefficient or that our politicians are hypocrites or that bipartisanship slows down the political process or that crime rates are not yet zero--all of these issues might be problematic in some limited sense, but they do not necessarily add up to a problem worth solving.
If you're swimming in the Black Sea, beware dolphins with weapons strapped to their heads.
Update! Sad news, friends. It turns out that one piece of the Ukrainian dolphin story is, in fact, a hoax. No dolphins from the Ukrainian army's complement have actually escaped, according to this newspaper report. The hoax began with this faked report from the museum director, which led to a story by RIA Novosti. The strangest thing about this is how plausible the whole thing actually is. Gregg studies dolphins for a living and did not seem skeptical. That's because the US and Ukrainian military do indeed have dolphins, which they've been, according to previous reports, training for combat. A reader wrote in to tell me that when he was a young sailor in Turkey, this beluga whale was rumored to have escaped from a military installation in Crimea. That is to say, the oddest part of this story -- that dolphins have regularly been used in the military -- is unchanged. But the specifics turn out to be a hoax. Our apologies for the mistake. In recompense, allow me to give you this video about the history of militarized dolphins.
Dolphin scientist Justin Gregg brings us this slightly disturbing, if hilarious, bit of Delphic news. The Ukrainian military has apparently lost three of its trained dolphins in the Black Sea. Which might not be so bad, except.... Well, Gregg sets it up perfectly:
Uh oh - it seems the Ukrainian Navy has a small problem on their hands. After rebooting the Soviet Union's marine mammal program just last year with the goal of teaching dolphins to find underwater mines and kill enemy divers, three of the Ukrainian military's new recruits have gone AWOL. Apparently they swam away from their trainers this morning ostensibly in search of a "mate" out in open waters. It might not be such a big deal except that these dolphins have been trained to "attack enemy combat swimmers using special knives or pistols fixed to their heads." So if you are planning a family holiday to the Black Sea this week, I think it's best you avoid any "friendly" dolphins that might approach - especially if they have KNIVES or PISTOLS strapped to their heads.
Who would not want to watch the film adaptation of this story? It'd sort of be like abstract expressionist painting plus Free Willy plus Rambo. And it'd be told from the perspective of the dolphins with subtitles for their clicks. And filmed in 3D and at 48 frames a second. It would be directed by Werner Herzog. The first hour and twenty-eight minutes would be dolphins eating fish, the last two minutes would be them saving the world from terrorist combat swimmers.
The Ukrainian navy's dolphin program has a long pedigree. The Bulletin of Atomic Scientists noted that trainers there inherited the Soviet military's 70 trained dolphins after the Soviet Union collapsed. Some of them were retrained to help with child therapy and other civilian tasks. The others? Well...
There is an analogy to be made to one of Google's other impressive projects: Google Translate. What looks like machine intelligence is actually only a recombination of human intelligence. Translate relies on massive bodies of text that have been translated into different languages by humans; it then is able to extract words and phrases that match up. The algorithms are not actually that complex, but they work because of the massive amounts of data (i.e. human intelligence) that go into the task on the front end.
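To make that concrete, here is a toy sketch of the idea -- emphatically not Google's code, and the tiny parallel corpus is invented -- showing how translation candidates can fall out of nothing more than human-made sentence pairs:

```python
# Toy illustration of the idea behind statistical translation -- not Google's
# actual system. The parallel corpus and the scoring are invented stand-ins.
from collections import Counter
from itertools import product

parallel_corpus = [
    ("the house is small", "la casa es pequena"),
    ("the house is big", "la casa es grande"),
    ("the book is small", "el libro es pequeno"),
]

cooccurrence = Counter()
for english, spanish in parallel_corpus:
    # Treat every English/Spanish word pair in an aligned sentence as a weak
    # vote that the two translate each other; real systems weight this probabilistically.
    for e_word, s_word in product(english.split(), spanish.split()):
        cooccurrence[(e_word, s_word)] += 1

def translation_candidates(english_word):
    """Score Spanish words by how often they co-occur with the English word."""
    return {s: n for (e, s), n in cooccurrence.items() if e == english_word}

print(translation_candidates("house"))
# {'la': 2, 'casa': 2, 'es': 2, 'pequena': 1, 'grande': 1} -- with far more
# human-translated text and proper weighting, 'casa' pulls ahead of the filler words.
```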
Google Maps has executed a similar operation. Humans are coding every bit of the logic of the road onto a representation of the world so that computers can simply duplicate (infinitely, instantly) the judgments that a person already made.
The Times story is well worth reading for its catalog of similar operations at other companies like Twitter, Apple, IBM, and some startups. The point is not that machines are not powerful or that humans are irreplaceable in some fixed sense. The point is that the best services are cyborg: they arise from the combination of machine and human intelligences.
As Manfred Clynes and Nathan Kline, the co-coiners of the term "cyborg," wrote in 1960, "The purpose of the Cyborg, as well as his own homeostatic systems, is to provide an organizational system in which such robot-like problems are taken care of automatically and unconsciously, leaving man free to explore, to create, to think, and to feel."
Fifty-three years later, I think the jury is still out on whether or not their initial hope was correct.
Just casually coding some feminism into Nintendo's heroic myths.
Reflecting the mores and market dynamics of the videogame industry, Mario has always been the hero, the guy saving damsels in distress. But Mike Mika, a father with a young daughter, decided that it would be fun to mod Donkey Kong so that the damsel was the hero and Mario was the one captured by Kong.
It's obviously not the only game with a female lead -- Tomb Raider springs to mind -- but this is rewriting one of the foundational Nintendo games and myths. Megan Farokhmanesh at Polygon found the video this weekend. Here's Mike on why he did it:
My three year old daughter and I play a lot of old games together. Her favorite is Donkey Kong. Two days ago, she asked me if she could play as the girl and save Mario. She's played as Princess Toadstool in Super Mario Bros. 2 and naturally just assumed she could do the same in Donkey Kong. I told her we couldn't in that particular Mario game, she seemed really bummed out by that. So what else am I supposed to do? Now I'm up at midnight hacking the ROM, replacing Mario with Pauline.
Hacker dads. They can be awesome.
Electronics vending machines show that Facebook trusts its employees to do the right thing.
I'd heard tell that Facebook's IT department had scattered vending machines filled with headphones, power cords, and sundry other electronics across the campus. But it was not until I was walking through the company's headquarters last week and saw one in the wild that I came to understand what I really like about the concept.
IT gets put in an uncomfortable place in most companies. They hold the keys to a line-item on the budget that pretty much all managers would like to keep small. If they give employees too much stuff, they are blamed. If they don't give employees what they want, they are blamed. It's not a fun place to be, I'm sure, to ask each abashed employee, "Why do you need a new power cord again?"
But the Facebook system is different. No person controls the supplies of the small items. For example, they have nice Sennheiser headphones inside this vending machine. Any Facebook employee can simply walk up, swipe his or her ID card, and grab a new pair. There's a nominal price listed, but employees don't see that number debited from their paychecks or anywhere, really, outside of the IT vending machine. For them, it's simply swipe and go. The system trusts them to use their own judgment about what they need.
Of course, trust but verify. And yes, the system also verifies. The swipe means that everyone's requests are tracked and I'm sure some algorithm somewhere is constantly sorting the data to see if anyone has pulled 10 sets of headphones out of the system.
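If you want to picture what that verification might look like, here is a minimal, purely hypothetical sketch; the badge log, the one-week window, and the threshold of ten items are all my inventions, not anything Facebook has described:

```python
# Purely hypothetical sketch of the verification step: flag badges that pull
# an unusual number of items from the machines. Log format, window, and
# threshold are invented for illustration.
from collections import Counter
from datetime import date, timedelta

swipe_log = [
    ("badge_1042", "headphones", date(2013, 3, 1)),
    ("badge_1042", "power cord", date(2013, 3, 2)),
    ("badge_2208", "headphones", date(2013, 3, 2)),
    # ...imagine thousands more rows streaming in from the machines
]

def flag_heavy_users(log, window_days=7, threshold=10):
    """Return badges that grabbed more than `threshold` items in the trailing window."""
    latest = max(day for _, _, day in log)
    cutoff = latest - timedelta(days=window_days)
    counts = Counter(badge for badge, _, day in log if day >= cutoff)
    return {badge: n for badge, n in counts.items() if n > threshold}

print(flag_heavy_users(swipe_log))  # {} -- nobody here is hoarding headphones yet
```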
Still, I like the assumption that employees will do the right thing. The default is that you can have whatever you want. And that lets IT relax a bit, I bet. They no longer have to act as guardians of the electronics hoard.
Sure beats little green men.
Before considering this theory further, we must bear in mind a few of the proved facts about Mars. It has atmosphere, seasons, land, water, storms, clouds and mountains. It also rains and snows on Mars, as it does with us. Great white patches appear periodically upon its surface. These may be accumulations of snow and they have also been called "eyes."
The biz ain't what it used to be, but then again, for most people, it never really was.
Man, I feel everyone on how scary it is to be in journalism. When I made the transition from a would-be fiction career paired with writing research reports into full-time journalism, I nearly drowned in a sea of debt and self-doubt. I was writing posts on my own blog, which almost nobody read, but it did, with an assist from my now-wife, get me a couple gigs writing for some known websites. I got paid $12 a post by one. The other was generous, and I got $50. I was grateful as hell to have this toehold in the world. I remember walking down Bartlett Street in the Mission and saying to myself, out loud, "I'm a writer. I'm a writer! I'M A WRITER!" It was all I'd wanted to be since I was 16 years old. And I was making it.
Except I was not making it. Every day that went by, I was draining the little bit of money I had. I started selling anything I'd acquired to that point in my life that had any value. After the last Craigslist purchaser walked away with my stuff, I stood there in the living room of our apartment staring at the books and crying.
I had so little money and so much debt that any time I had to go to an ATM, I was seized with horrible anxiety. I practically could only do it drunk. You know those ATMs that display your balance EVEN WHEN YOU TELL THEM NOT TO? Well, I hate those ones. I would take my money and as it displayed my balance on the screen, I would carefully unfocus my eyes so I couldn't really tell how little I had. The credit crunch was happening and I didn't have any credit left. My loving, wonderful, brilliant parents were going through a rough patch, too, and they couldn't help, either. I was tortured by the idea that I'd taken on this new career when my family needed me. I asked myself whether I should have stayed at the hedge fund job that I took right out of college and hated so much I quit before the summer ended.
I sometimes hoped that the whole world would collapse -- it certainly seemed possible back then -- because my debt would be swept away along with the rest of civilization. My dad had once said, right during the credit crisis, "Don't worry, we'll all be potato farmers soon anyway." And I would think about that and it would *make me happy*. At least then I wouldn't worry that I was going to be torn apart at the seams by the demands of a work life that couldn't even keep me afloat in an expensive city. I really, really resented people who could count on financial support from places unknown. They didn't seem to get how hard it was to keep it together when you might drown under your own debt at any minute.
Like an idiot, I figured I could write a book and use the advance to pay off my debt. That kind of worked, though the process of doing the book melted my brain. I was so tired and my mind was so filled with words that I would forget where I was, almost coming to in supermarket aisles wondering why I was staring at mangoes. I hate mangoes. But at least the money gave me some breathing room. I could approach an ATM without feeling weak in the knees.
So, all this to say: I know the pressure these debts can put on you. I know how angry it makes you, at yourself, at other people, at the world. Why didn't I save more? Why did I buy that thing? Why did I have to pick up that tab when I didn't have any goddamn money? How could I support a family like this? Why won't the world recognize my talent is worth more!?
And so when Nate Thayer published his emails with our newest editor (second week on the job), I can see how that might happen. How you might finish writing your last email, "No offense taken," and then, staring at your blog's CMS that night, decide: you know what? I'm tired of writing for peanuts, because fuck that. And if a young journalist in her first week on the job was part of the collateral damage, hey, the world just isn't fair, kid. Pay it forward.
I get it, but it was still a nasty thing to do.
I'm glad Thayer's post has garnered him lots of attention. He is a great journalist and I genuinely hope the spotlight gets him more work. Don't get me wrong. I'm still incensed by what he did, but I want journalists to prosper because I believe, like he does, that what we do is vital.
Let me show my colors here. I am an Atlantic person. I love this place. I feel it in my bones. If I open up one of our musty tomes at the office, I can get sucked in for an hour just looking at the ads, or marveling at the eloquence of W.E.B. Du Bois. When I look back at old Ta-Nehisi posts or see Fallows in the halls, I can get emotional. I was watching Ken Burns' National Parks documentary, and he notes, offhandedly, how stories that ran in our magazine helped preserve Yosemite for future generations. He talks about how we published this wild holy man, John Muir, thereby promoting the idea of National Parks, which, as Burns rightly argues, is one of the best and most populist ideas to ever become law in this country. These are my people. These are my colors. This is my institution, my connection to a legacy and a lineage. And if you come after one of us, if you come after it, I am not going to take it lying down.
And so ... Twitter was a contact sport yesterday. I practically put in my old mouthguard from football practice. Seemed like every reload brought another attacker and it was instinct, really, to keep them away from my QB, Olga. I know how to block. I know how to hit. You can just see me at my computer, sweating, steam (or is it smoke?) coming out of my ears. Bring it. And I hate that mode. I hate it. It makes me feel bad and say fuck a lot and I TYPE IN ALL CAPS. I want to do those pushups where you clap in-between because I just get so much something, emotion, intensity, adrenaline, running in my veins. (Much love to Becca Rosen, my brilliant, grounded lieutenant, for telling me to put down the twitter and pet the kitty and go for a run. That was a good call, as always, BR.)
But that's not what this should all be about, if by "all," I mean the maelstrom kicked up by Thayer. Because the truth is, I don't have a great answer for Nate Thayer, or other freelancers who are trying to make it out there. It was never an easy life, but there were places who would pay your expenses to go report important stories and compensate you in dollars per word, not pennies. You could research and craft. And there were outlets -- not a ton, but some -- that could send you a paycheck that would keep you afloat.
Then the digital transition came. The ad market, on which we all depend, started going haywire. Advertisers didn't have to buy The Atlantic. They could buy ads on networks that had dropped a cookie on people visiting The Atlantic. They could snatch our audience right out from underneath us. And besides, who knew how well online ads worked anyway? I mean, who knows how well any ads work at all? But convention had established that print ads were a thing people paid X amount for, and digital ads became something people paid 0.10X for.
And while advertisers paid less, there was always more stuff for people to read. All kinds of writing poured onto the web. The median post was much worse than a random story plucked from the top tier of magazines, but the best stuff was and is as good as anything. Drawing on that huge base, there is always a lot of "best stuff" to read now.
The main way to sell ads is to go "cross-platform," pairing digital with print and whatever else (events or video, say). This is what "the marketplace" is asking for. So you need ad inventory online. In some cases, like ours or Wired's, you need a lot of ad inventory online. It is a little more complicated than this, but that means you need page views, and if you want page views, you need people coming to your site. You need unique visitors.
If you can show me a way that this can be reversed for a large general-interest magazine, I would love to hear about it. So far, there isn't a single model for our kind of magazine that appears to work.
Seriously, though, what's a magazine like The Atlantic (or The New Yorker or The New Republic or Harper's or The New York Times Magazine) to do then? Could the print model -- smallish editorial staff, large writer pool paid by the word -- work online?
Let me give you this hypothetical. You are a digital editor at a fine publication. You are in charge of writing some stuff, commissioning some stuff, editing some stuff. Maybe you have an official traffic goal, or (more likely), you want to be awesome, qualitatively and quantitatively. A lot of people in this business are driven from the inside out, and you almost have to be given the daily demands. You have to want to be jacked into the Internet all day long, every day. This is not the life most journalists imagined when they were looking at 1970s magazines. In any case, you want to crush, as I would call it.
And your total budget for the year is $12,000, a thousand bucks a month. (We could play this same game with $36,000, too. The lessons will remain the same.) What do you do?
Here are some options:
1. Write a lot of original pieces yourself. (Pro: Awesome. Con: Hard, slow.)
2. Take partner content. (Pro: Content! Con: It's someone else's content.)
3. Find people who are willing to write for a small amount of money. (Pro: Maybe good. Con: Often bad.)
4. Find people who are willing to write for no money. (Pro: Free. Con: Crapshoot.)
5. Aggregate like a mug. (Pro: Can put smartest stuff on blog. Con: No one will link to it.)
6. Rewrite press releases so they look like original content. (Pro: Content. Con: You suck.)
Don't laugh. These are actual content strategies out there in the wilds of the Internet. I am sure you have encountered them.
Myself, I'm very partial to one and five. I hate two and six. For my own purposes here, let's say you do, too, and throw them out.
That leaves three and four, which I want to discuss here.
Let's stipulate two things: 1) I want people who want to make a living writing to be able to do so. 2) I do not think it is very easy to make a living writing freelance for digital-only publications for the reasons described below.
Most sites -- save the NYT, Drudge, and a handful of others -- can't send massive amounts of readers to stories. Traffic causality runs the other way: Individual stories live or die out there in the social world and that brings readers to theatlantic.com. A post has to succeed on its own, although a bigger brand, with more social tools and bigger homepage treatment can give it what I call "activation energy," the necessary but not sufficient first push into the web.
This is actually a great argument for long form and other quality pieces of analysis or reportage. People share them because they are definitive or delightful or interesting. And that brings good to the site.
But here's the weird thing: While the top six or seven viral hits might make up 15-20 percent of a given month's traffic, the falloff after that is steep. And once you're out of the top 20 or 30 stories, a really, really successful story is only driving 0.5 percent or less of a place like The Atlantic's monthly traffic. But that's the best-case scenario. In most cases, even great reported stories will fizzle, not spark. They will bring in 1,000 or 3,000 or 5,000 or 10,000 visitors. You'd need thousands of these to make a big site go.
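To put rough numbers on that "thousands," here's a quick back-of-the-envelope calculation; the 10-million-uniques monthly target is an assumption for illustration, not The Atlantic's actual figure:

```python
# Back-of-the-envelope version of "you'd need thousands of these." The
# 10-million-uniques monthly target is an assumption, not a real figure.
monthly_unique_target = 10_000_000

for visitors_per_story in (1_000, 5_000, 10_000):
    stories_needed = monthly_unique_target / visitors_per_story
    print(f"at {visitors_per_story:,} visitors per story: "
          f"{stories_needed:,.0f} stories a month")
```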
I can already see some old-school journalists tearing up. This poor kid, he looks at the numbers and ergo, that's all he cares about. "Traffic," they spit. And I get it. The word has been used to bludgeon you into dumb shit. To put great stories on the shelf to build slideshows. To give up on quality and focus on quantity. I do get all that. But that's precisely why we (journalists) must understand the numbers! The business side of any publication knows them inside and out. If we don't understand how to tell good stories with our own data, who do you think wins any argument that involves data, which they all do? You can know money is important without succumbing to the idea that cash rules everything around you.
Let me try to convince you of this: We can have binocular vision. We can understand these numbers. And we can know that the mission of a place like The Atlantic is to bring moral purpose, interesting ideas, great arguments, and excellent reporting to the world and to drive these stories as far as they will go into the public consciousness.
Furthermore, looking at the numbers teaches you about the social reality of the Internet. In a very real sense, unless you look at the numbers, you do not know what (the dynamic sociotechnical space that is) the Internet looks like. Your view lets you see its boulevards and parks, but it is like a photograph from the 1850s when the exposure times were too long to capture moving people. Your Paris is empty.
OK, sorry, I will wipe the spittle off my screen now.
What do the numbers mean for an editor's strategy?
Here are the basics:
First, you gotta take a lot of shots. Hypothetically, let's say you devote an entire month to one single story, betting the house on it. In the very best circumstance, a viral hit heard round the world with a big traditional media push, you'd do maybe 800,000 uniques. And then you'd have to do the same thing the next month. In practice, no one can do this. Because you can't predict that viral hit. While the best stuff tends to do far, far better than average, it is not always the best stuff that hits virally. You can't control all the variables of the world's attention and some dudes at Reddit who really like stories about legalizing pot seeing *your particular story* about legalizing pot. In practical terms in the social world, there ain't no levers to pull! We write, we hope, we pray, we tweet. And that's it. So, you need to post frequently to make luck more likely to strike you.
Second, you want to become a node. And to become a node, you need to do things that inculcate trust from your readers, and you need to keep doing that over and over. In the digital world, we build the distribution networks day by day, and if you don't feed them, they shrink. So again, you need some basic level of posts.
Third, you need to do great stuff. But hell, you're posting all the time! How do you do great stuff? You find ways to optimize between speed and quality. Everyone has their own coping strategies. And it's always gonna be a tradeoff. In my view, you want to do the fast things as fast as possible so you can slow cook the other stuff. You trust your readers to know which is which (because they get it).
And where do freelancers fit in all this? Think about all these numbers. You are going to need dozens of successful posts, and because you can't control precisely what succeeds, that means even a small blog, with one person at the helm, is going to need, say, 100-150 posts a month.
If you've got $1000, that means you can count on paying 10 people $100. That gets you about 10 percent of the way. And now you've got to edit and handhold 10 people and (probably) take a lot of shit from people who think they are (and in fact, are) worth more than that. Run this same scenario with $3,000 a month. Or $4,000. (Perhaps you would decide, as we have, to hire another staffer instead of devoting $48,000 in freelance money to get 40 percent of the way to what you want.)
Or you could pay one person $1000, or $1/word for a great reported story about something awesome that you are almost sure will be a hit. OK, now you're to, say, 5 percent of your traffic goal and you're out of money. BUT THAT ONE PERSON IS PSYCHED. Run this same analysis with more money again. You can never get there paying a dollar a word, no matter how you scale up the money. And, your frequency is declining rapidly. You are becoming a less important node.
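If you want to see the arithmetic laid out, here's a small sketch of the budget math; the 125-post monthly target is just the midpoint of the range above, and none of these figures are anyone's real budget:

```python
# Illustrative budget math only; the 125-post target is the midpoint of the
# "100-150 posts a month" figure above, and the budgets are hypothetical.
MONTHLY_POST_TARGET = 125

def coverage(monthly_budget, pay_per_post):
    """How many posts a budget buys, and what share of the posting target that covers."""
    posts = monthly_budget // pay_per_post
    return posts, posts / MONTHLY_POST_TARGET

for budget in (1_000, 3_000, 4_000):
    for rate in (100, 1_000):  # $100 quick posts vs. a $1,000 reported feature
        posts, share = coverage(budget, rate)
        print(f"${budget:,}/month at ${rate:,}/post -> {posts} posts ({share:.0%} of target)")
```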
Perhaps you try to cut a deal with two people to blog for you several times a week for $500/month. That's 24 posts. And that almost seems workable as you scale up the money. In fact, we do this at The Atlantic and so do many other publications. But my perception is that no one feels satisfied with this arrangement. It's all the pressure of a full-time gig without the rewards. And on the editor side, the production tends to be uneven. The worst part is: It's hard to make someone part of your editorial mission when they're in this kind of position. You can't tell them about Ralph Waldo Emerson and Truman Capote and have them feel that they are part of this tradition.
No matter how you slice it with a small freelance budget, paying people is going to get you a very small amount of the way to your own internal or external goals. And if you think ditching the ad-supported model is the answer, look at how Andrew Sullivan's Daily Dish is doing: they are going to support a staff of five with the money they collect.
And so we return to the main topic at hand: what about people who write for free?
Let me state two things here. One, this can never be the backbone of an editorial strategy. It just won't work unless you screw everybody, including your readers. Two, I have cut all kinds of deals myself on this topic. I don't like to ask people for work that we can't pay for. But I'm not willing to take a hardline and prevent someone who I think is great from publishing with us without pay. My main point and (to be normative about it) the main point in these negotiations is this: What do you, the writer, get out of this?
But the fact is, a lot of people *do* get stuff out of it. They're changing careers into journalism, say. Or they're a scholar who wants to reach a broader audience. Or they've got a book coming out. Or they're a kid who begs you (begs you!) to take a flier on them, and you have to spend way too much time with her, but it's worth it because you believe she's talented, even if you know the story isn't going to garner a big audience.
All this to say: As a rule of thumb, it sucks to take free work from people who are freelancing for a living. Agreed. But this is not a law of the universe, and I would hate to see it imposed on me by anybody out of obligation to a theoretical journalism, in a way that ends up hurting everybody. Can't we take it case by case?
Some people reading this might say: This new world of digital journalism sucks. Hey, I agree sometimes! Some days, I'd much rather be out reporting on the latest world-shaking event that I discovered. I'd love to take six months (or hell, six weeks) writing one story while pulling in six figures. SIGN ME THE EFF UP FOR THIS JOB.
But the economics of these jobs were always bizarre. Many magazines have been funded by wealthy people who were willing to take moderate losses. (Thank you to all of you.) Or Conde Nast could suck money out of its newspapers to feed into its glorious magazine operations. Never mind that back at the newspapers they kept people working for nothing at podunk papers that also happened to make crazy bank with their classified ads. Any time I imagine the glamorous world of writing for The Atlantic or The New Yorker or Harper's in 1968 or 1978, I remember that most journalists were going to homecoming football games and writing about the king and queen. Most journalists were humping around the local garden show and talking about trends in petunia horticulture. Most journalists were doing things that no one really wanted to do, but they did it anyway for money and for a shot at the show which almost never came. I respect the hell out of those journalists working at those local papers. They were doing the stuff that, at least within certain empires, let the magazine editors have lunch at Balthazar's (or insert actual appropriate New York lunch spot).
And as for the magazines themselves, they had relatively small staffs of people who stuck around for a long, long time. Who wouldn't? You could pay good money for great work from awesome writers, and your friends, and your friends who were awesome writers. They loved you for it. But who really got those jobs anyway? Looking at the staff rosters, I'm pretty sure it wouldn't have been me, back then.
So, yeah, the economics of our business are terrible in some ways. And like everything else, the worst of it falls on the workers, the people making the widgets, doing the journalism, making the beds. The money gets sucked upwards and the work gets pushed down.
But you know, even when you have a generous owner who is not trying to make a gazillion dollars and skim the cream, this game is still really, really hard. You still have limited funds. You still can't pay freelancers a living wage. The only strategy that makes sense is to hire some people. Then, you learn from each other (thanks, Megan Garber!). You work hard. You write good stuff. You comfort each other when people are huge a-holes in the comments. You catch typos for each other. You come up with jokes on Gchat. You figure out who has the golden touch with headlines (Derek Thompson is a certifiable genius at this). You make friends on the print side (Kate Julian! I know I owe you another Q&A candidate) and try to learn their game. You stare at Chartbeat and ask yourself, "Why am I doing this? It is two in the morning and I should be asleep and even my cat is giving me the stinkeye."
And then, you hope hope hope that this amounts to something sustainable. Because I owe it to this institution to help ensure its survival. I'll be damned if The Atlantic dies with my generation, if all that is left of it when I leave is some moldering books and cold servers. Quite possibly, I would get to the gates of heaven and Ida Tarbell would be sitting there like, "Wait, wait, *you're* one of those guys who let The Atlantic die?" And poof: trapdoor in the clouds, burning in hell for all eternity. Actually, strike that, I'd probably get stuck in purgatory rewriting press releases about corporate sustainability, forced to eat tuna sandwiches every day for lunch.
Anyway, the biz ain't what it used to be, but then again, for most people, it never really was. And, to you Mr. Thayer, all I can say is I wish I had a better answer.
Is the way we talk about the human mind messing with our ability to think about it clearly?
The philosopher Colin McGinn is a tough book reviewer. He looks like a cop in an old movie about cops and robbers, and writes like one. And when he decided to take down Ray Kurzweil's new book, How to Create a Mind: The Secret of Human Thought Revealed, he pulled no punches. First, he runs through Kurzweil's "pattern recognition theory of the mind":
One cannot help noting immediately that the theory echoes Kurzweil's professional achievements as an inventor of word recognition machines: the "secret of human thought" is pattern recognition, as it is implemented in the hardware of the brain. To create a mind therefore we need to create a machine that recognizes patterns, such as letters and words. ...The process of recognition, which involves the firing of neurons in response to stimuli from the world, will typically include weightings of various features, as well as a lowering of response thresholds for probable constituents of the pattern. Thus some features will be more important than others to the recognizer, while the probability of recognizing a presented shape as an "E" will be higher if it occurs after "APPL."
These recognizers will therefore be "intelligent," able to anticipate and correct for poverty and distortion in the stimulus. This process mirrors our human ability to recognize a face, say, when in shadow or partially occluded or drawn in caricature.
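To see why the theory can sound plausible, it helps to sketch what such a recognizer might look like in code. The toy below is my illustration of the idea McGinn is summarizing, not Kurzweil's system; the features, weights, and thresholds are invented:

```python
# Toy version of the "pattern recognizer" idea: weighted features vote for a
# letter, and preceding context lowers the acceptance threshold for an
# expected letter. Features, weights, and thresholds are invented.
LETTER_FEATURES = {
    "E": {"vertical_bar": 0.3, "top_bar": 0.2, "middle_bar": 0.2, "bottom_bar": 0.3},
    "F": {"vertical_bar": 0.4, "top_bar": 0.3, "middle_bar": 0.3},
}

def recognize(observed_features, context="", base_threshold=0.8):
    """Return letters whose weighted feature score clears a context-adjusted threshold."""
    matches = {}
    for letter, weights in LETTER_FEATURES.items():
        score = sum(w for feature, w in weights.items() if feature in observed_features)
        threshold = base_threshold
        if context.endswith("APPL") and letter == "E":
            threshold -= 0.2  # expectation makes a degraded "E" easier to accept
        if score >= threshold:
            matches[letter] = round(score, 2)
    return matches

smudged_glyph = {"vertical_bar", "top_bar", "middle_bar"}  # an "E" missing its bottom bar
print(recognize(smudged_glyph))                  # reads only as an "F" on its own
print(recognize(smudged_glyph, context="APPL"))  # after "APPL", an "E" is accepted too
```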
Then the assault begins. First, McGinn states that Kurzweil's whole theory is wrong, just on its face: "that claim seems obviously false."
But the fascinating part of the critique is why Kurzweil's theory can seem plausible. And that comes down to the language Kurzweil (and other people) employ in writing about neuroscience. McGinn calls it "homunculism," the erroneous attribution of human-like qualities to pieces of a human. And it generates the illusion that we understand how synapses firing leads to an appreciation for rock 'n roll.
[H]omunculus talk can give rise to the illusion that one is nearer to accounting for the mind, properly so-called, than one really is. If neural clumps can be characterized in psychological terms, then it looks as if we are in the right conceptual ballpark when trying to explain genuine mental phenomena--such as the recognition of words and faces by perceiving conscious subjects. But if we strip our theoretical language of psychological content, restricting ourselves to the physics and chemistry of cells, we are far from accounting for the mental phenomena we wish to explain. An army of homunculi all recognizing patterns, talking to each other, and having expectations might provide a foundation for whole-person pattern recognition; but electrochemical interactions across cell membranes are a far cry from actually consciously seeing something as the letter "A." How do we get from pure chemistry to full-blown psychology?
McGinn goes on:
Why do we say that telephone lines convey information? Not because they are intrinsically informational, but because conscious subjects are at either end of them, exchanging information in the ordinary sense. Without the conscious subjects and their informational states, wires and neurons would not warrant being described in informational terms.
The mistake is to suppose that wires and neurons are homunculi that somehow mimic human subjects in their information-processing powers; instead they are simply the causal background to genuinely informational transactions.
I find McGinn persuasive on this point. But I'm much more interested in homunculus language from the reader perspective. We can use the presentation of this language to find areas where we might be fooling ourselves into thinking we know more than we do.
Here's my biotech reading list. I'd love your help fleshing it out.
I've spent the last few weeks creating a syllabus for myself on the world -- people, techniques, theory, history -- of biotechnology. I've talked with some scholars, accepted some Amazon recommendations, and done some rummaging around in bibliographies, but I'm only getting started. I thought I'd list my recent acquisitions here in hopes that you'll help me flesh my little self-taught course out. You know how to get a hold of me: comments here, @alexismadrigal, or amadrigal[at]theatlantic.com.
(Oh, and I'm also looking for journals and blogs that I should be keeping an eye on.)
Right now, I'm pretty heavy on the theoretical and anthropological investigations of biotechnology. I'd like more basic texts on the techniques and some more scientist/technologist accounts of their own work and how it's shaped their thinking. If you can't tell from the readings, what I'm most interested in is the nature of life from the perspective of the people who manipulate it.
Here's the list, sorted alphabetically by author last name:
Life as Surplus: Biotechnology and Capitalism in the Neoliberal Era by Melinda Cooper
Dolly Mixtures: The Remaking of Genealogy by Sarah Franklin
Invisible Frontiers: The Race to Synthesize a Human Gene by Stephen Hall
How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics by N. Katherine Hayles
Alien Ocean: Anthropological Voyages in Microbial Seas by Stefan Helmreich
Genentech: The Beginnings of Biotech by Sally Smith Hughes
Refiguring Life by Evelyn Fox Keller
Secrets of Life, Secrets of Death: Essays on Language, Gender, and Science by Evelyn Fox Keller
Making Sense of Life: Explaining Biological Development with Models, Metaphors, and Machines by Evelyn Fox Keller
Culturing Life: How Cells Became Technologies by Hannah Landecker
The Mansion of Happiness: A History of Life and Death by Jill Lepore
What Is Life? by Lynn Margulis and Dorion Sagan
Making PCR: A Story of Biotechnology by Paul Rabinow
Biocapital: The Constitution of Post-Genomic Life by Kaushik Sunder Rajan
The Scientific Life: A Moral History of a Late Modern Vocation by Steven Shapin
Liminal Lives: Imagining the Human at the Frontiers of Bioscience by Susan Merrill Squier
Making Parents: The Ontological Choreography of Reproductive Technologies by Charis Thompson
Over the next six months or so, we're going to see an explosion of new ways of interacting with computers, televisions, and mobile devices.
The interfaces are coming! Over the next six months or so, we're going to see an explosion of new ways of interacting with computers, televisions, and mobile devices. Many of them are radical departures from the way things have been done, which is exciting. I'll run several down in this post that are slated to come out this year.
Almost all of them will fail quickly and be forgotten forever. But there's a chance that one of these new technologies will hit a consumer sweetspot and become enshrined in our lives like the remote control or the keyboard.
For decades after the creation of the graphical-user interface and the widespread adoption of the mouse, the computing interaction paradigm was largely static. You had a keyboard and a pointer on the screen that you controlled in some way, usually a mouse, but sometimes a touchpad or pointing stick (aka "red nubby thing on old Thinkpads").
Try as designers might to change it -- and Microsoft's Bill Buxton has archived the evidence that they tried -- people liked the basic computing setup. It was fast and accurate, familiar and decently intuitive.
But the iPhone -- and the brilliant iOS software and declining multi-touch display prices -- cracked that computing paradigm wide open. And for the last half a decade, touchscreens have more or less taken over for mobile computing. At the same time, gesture interfaces from Nintendo and Microsoft in the gaming space exploded, marking a serious move away from the traditional controller for non-hardcore gamers.
That's given a lot of new hardware interface designers hope, not to mention a plausible story to tell venture capitalists. Add a dash of Kickstarter funding and Sergey Brin's interest and you have an explosion of new possibilities. Here are five that I've noticed. What's fascinating is that all are slated to be out this year:
J.C. Penney employees are reported to have watched five million YouTube videos from the office during the month of January.
The number of YouTube videos employees watch is not exactly the kind of number tracked by corporate analysts or released by companies. Suffice to say, on the evidence of being a human being in the white-collar workforce, I have long been sure that the number of YouTube videos watched on the clock is astronomical, belonging to the category of numbers so large that you should write them like this: 10⁷.
But it's hard to calculate. There are too many confounding variables. YouTube says it streams more than 4 billion videos per day, with about 40 percent coming from the US, so 1.6 billion American streams each day. Let's assume there are 300 million Americans who all watch exactly the same number of videos each day. That'd be about five per day per person in the United States. But how many come outside of work? How many come from the country's 55 million white-collar workers during the hours between 8 a.m. and 6 p.m.? We just can't know.
But, a factlet in a Wall Street Journal article on retailer J.C. Penney's struggles confirms that, under the right circumstances, desk jockeys can be extreme consumers of online video:
During January 2012, the 4,800 employees in Plano had watched five million YouTube videos during work hours, said Michael Kramer, a former Apple executive brought in by Mr. Johnson as chief operating officer.
As New York Times Magazine editor Hugo Lindgren noted on Twitter, that's 50 videos per person per day. J.C. Penney's Chief Operating Officer called the company's culture at the time "pathetic." But I wouldn't be surprised if the white-collar worker average was 10 videos a day or even more. Nine hours a day is a long time to stare at a screen and Aunt Laura keeps sending such funny clips!
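For what it's worth, both figures are easy to sanity-check; in the sketch below, the 21 working days in January is my assumption for turning a monthly total into a daily rate:

```python
# Quick sanity check on the two figures; 21 working days in January is my
# assumption for converting a monthly total into a daily rate.
youtube_daily_streams = 4_000_000_000
us_share = 0.40
us_population = 300_000_000
per_american_per_day = youtube_daily_streams * us_share / us_population
print(f"{per_american_per_day:.1f} streams per American per day")  # ~5.3

jcp_videos, jcp_employees, working_days = 5_000_000, 4_800, 21
per_employee_per_workday = jcp_videos / jcp_employees / working_days
print(f"{per_employee_per_workday:.0f} videos per employee per workday")  # ~50
```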