Alexis Madrigal is a senior editor at The Atlantic, where he oversees the Technology channel. He's the author of Powering the Dream: The History and Promise of Green Technology.
The New York Observer calls Madrigal "for all intents and purposes, the perfect modern reporter." He co-founded Longshot magazine, a high-speed media experiment that garnered attention from The New York Times, The Wall Street Journal, and the BBC. While at Wired.com, he built Wired Science into one of the most popular blogs in the world. The site was nominated for best magazine blog by the MPA and best science Web site in the 2009 Webby Awards. He also co-founded Haiti ReWired, a groundbreaking community dedicated to the discussion of technology, infrastructure, and the future of Haiti.
He's spoken at Stanford, CalTech, Berkeley, SXSW, E3, and the National Renewable Energy Laboratory, and his writing was anthologized in Best Technology Writing 2010 (Yale University Press).
Madrigal is a visiting scholar at the University of California at Berkeley's Office for the History of Science and Technology. Born in Mexico City, he grew up in the exurbs north of Portland, Oregon, and now lives in Oakland.
Zilog was founded by Intel veterans Federico Faggin and Ralph Ungermann in 1974. Their first microprocessor, the Z80, was a hit. Intel's products, the company's Dave House admitted, "kind of got stomped on by Zilog with their Z80." But Zilog's success brought trouble in an unlikely form: Exxon.
First, Exxon made a large investment in exchange for 51 percent of the company. Then it bought Zilog outright, even though the company's next-generation 16-bit microprocessor, the Z8000, was not finding much success. It was downhill from there. By 1985, having invested a billion dollars, Exxon sold the company back to some of its employees and the investment firm Warburg Pincus.
I ran across this story while reporting on the history of Intel. In those key days of the early 1980s, right before IBM decided to use Intel chips, Zilog was providing legitimate competition for the now-giant company.
What I soon discovered, though, was that Exxon was not alone in trying to make money off the computing boom. As Forbes recalled in 1997, many companies went chasing tech growth and came up empty-handed or worse.
There was something about the way these conglomerates managed their acquisitions that seemed destined to run highly innovative chip companies into the ground.
It seemed like a good idea at the time. Schlumberger was flush with cash from its oil well logging business. Fairchild Camera & Instrument was a pioneer in the semiconductor industry and in need of capital. Semiconductor chips didn't seem too far afield from Schlumberger's expertise. Didn't oil well measuring tools use electronics heavily? Schlumberger wrote out a check for $425 million to purchase Fairchild.
This was in 1979, just before the great boom in personal computers got underway. Schlumberger should have made billions of dollars from its acquisition. But it didn't. In 1987 it sold Fairchild at a $220 million loss to National Semiconductor.
You could make a long list of merger fiascos in computers and electronics: Xerox paying $900 million for mainframe manufacturer Scientific Data Systems in 1969. Exxon buying Zilog, a microprocessor company, and then some word processor companies, into which it sank $1 billion before selling and writing off the businesses. AT&T losing $4 billion on NCR during a bull market.
Faggin, for his part, still sounds a little bitter about it all. He left Intel disgruntled and wanted to take the company on. "And we almost succeeded," he recalled in an oral history. "The Z80 was our first product and it became very successful. It took the business away from the [Intel] 8080. Zilog was winning in the market, but then IBM's choice to adopt the Intel 8086 reversed the direction. That was the turning point. By the way, the key reason IBM chose Intel was that our sole investor, Exxon Enterprises, had declared war on IBM."
But it's not just sour grapes. G. Dan Hutcheson, president of VLSI Research, told Forbes, "Zilog might have been what Intel is today, if Exxon hadn't tied them down."
What exactly did they do wrong (aside from tiffing with IBM)? Bernard Peuto, who was at Zilog in the early years and later went on to Sun Microsystems, had a simple answer for what tended to go wrong: The big companies gave Silicon Valley upstarts too much money.
"Quite frankly I blame Exxon," Peuto said in a panel about Zilog at the Computer History Museum. "Exxon essentially choked us with money. They basically gave us too much money and too many directions, which we then kind of went into and in some sense there are times where you have to refuse and that's very hard to do when somebody gives you dollars. But the reality was the reason we were doing too many things is because we could afford to do it because Exxon was kind of giving us the check. That's my personal view. The elephant [had grown too] complex."
And maybe that's the history lesson we can apply today: Too much money too fast breeds too little focus and too much complexity.
For all the talk of artificial intelligence and all the games of SimCity that have been played, no one in the world can actually simulate living things. Biology is so complex that nowhere on Earth is there a comprehensive model of even a single simple bacterial cell.
And yet, these are exciting times for "executable biology," an emerging field dedicated to creating models of organisms that run on a computer. Last year, Markus Covert's Stanford lab created the best ever molecular model of a very simple cell. To do so, they had to compile information from 900 scientific publications. An editorial that accompanied the study in the journal Cell was titled, "The Dawn of Virtual Cell Biology."
In January of this year, the one-billion euro Human Brain Project received a decade's worth of backing from the European Union to simulate a human brain in a supercomputer. It joins Blue Brain, an eight-year-old collaboration between IBM and the Swiss Federal Institute of Technology in Lausanne, in this quest. In an optimistic moment in 2009, Blue Brain's director claimed such a model was possible by 2019. And last month, President Obama unveiled a $100 million BRAIN Initiative to give "scientists the tools they need to get a dynamic picture of the brain in action." An entire field, connectomics, has emerged to create wiring diagrams of the connections between neurons ("connectomes"), which is a necessary first step in building a realistic simulation of a nervous system. In short, brains are hot, especially efforts to model them in silico.
But in between the cell-on-silicon and the brain-on-silicon simulators lies a fascinating and strange new project to create a life-like simulation of Caenorhabditis elegans, a roundworm. OpenWorm isn't like these other initiatives; it's a scrappy, open-source project that began with a tweet and that's coordinated on Google Hangouts by scientists spread from San Diego to Russia. If it succeeds, it will have created a first in executable biology: a simulated animal using the principles of life to exist on a computer.
"If you're going to understand a nervous system or, more humbly, how a neural circuit works, you can look at it and stick electrodes in it and find out what kind of receptor or transmitter it has," said John White, who built the first map of C. elegans's neural anatomy, and recently started contributing to the project. "But until you can quantify and put the whole thing into a computer and simulate it and show your computer model can behave in the same way as the real one, I don't think you can say you understand it."
For example, when researchers touch a worm on the head and it responds by turning and moving backwards, what exactly is happening there? What molecular mechanisms coordinate the firing of neural networks that initiate and complete this complex behavior? This month, a paper came out in PLOS Biology describing that exact sequence as recorded in live C. elegans. But it's one of very few studies like that.
More broadly, OpenWorm raises fascinating questions about what we mean when we say something is alive. If and when this project succeeds in modeling the worm successfully, we'll be faced with a new and fascinating concept to think with: a virtual organism. Imagine downloading the worm and running it in a virtual petri dish on your computer. What, exactly, will you be looking at? Will you consider it to be alive? What would convince you?
Perhaps creations like the digital C. elegans will start to break down our binary conception of the matter in the world as either living or not living. We'll discover that we can create systems that exist in-between these two spheres, or that certain aspects of life as we know it are not required to meet our definition of being alive.
"I suspect that we'll recognize that living systems are far-from-equilibrium molecular systems that are carrying out very specific sophisticated physical patterns and have some ability to sustain themselves over time," OpenWorm's organizer Stephen Larson wrote to me. "Thinking about it that way makes me go beyond a black and white notion of 'alive' to a more functional perspective -- living systems are those which self sustain. Our goal is to aggregate more of the biological processes we know that help the worm to self-sustain than have ever been aggregated before, and to measure how close our predictions of behavior match real living behavior, more than it is to shoot for some pre-conceived notion of how much 'aliveness' we need."
* * *
It's a complex, ambitious project, to say the least. White called it "bold." Yet it all began with a tweet.
In early 2010, software engineer Giovanni Idili sent a tweet to the Twitter account for The Whole Brain Catalog, a project to bring mouse brain data together into more usable formats. He said, as if on a lark, "@braincatalog new year's resolution: simulate the whole C. Elegans brain (302 neurons)!" One of the Brain Catalog's founders, Stephen Larson, was scanning the @-replies and offered his assistance, "So, do you want any help with that? How are you going to do it?"
Beginning with a 1997 proposal at the University of Oregon, there have been several attempts to simulate worms. Some focused on the body alone. Others tried to simulate the worm's behavior through machine learning, with no attempt at a biologically realistic nervous system. Idili and Larson wanted to go beyond these early efforts. When Larson was at MIT, he was influenced by Rodney Brooks, the director of the Computer Science and Artificial Intelligence Laboratory at the university (and the creator of the Roomba!). Brooks proposed the idea that if you want artificial intelligence, it should be situated within an environment. In his 1990 paper, "Elephants Don't Play Chess," he argued that "to build a system that is intelligent it is necessary to have its representations grounded in the physical world."
The great thing about C. elegans, though, is that its physical world in the laboratory is completely standardized and well known. The worms live in petri dishes with agar. If any environment can be modeled by a computer, it is a petri dish with agar. The nascent OpenWorm team could build a realistic virtual environment for a digital C. elegans.
Which meant that their little worm brain -- the target of Idili's initial suggestion -- needed a body. For that, they reached out to Christian Grove at CalTech, who donated a 3D atlas of the worm to get them started.
They had a map of the brain, a model of the body, and a pretty good idea of how to build the environment. Their artificial intelligence might not be embodied, but it would be "situated." The brain would direct the body and the body would interact with the environment, and all three pieces would be connected by the intricate feedback loops that permeate biology.
Their goal became clear: they should build, as they put it on the website, "a fully digital lifeform -- a virtual nematode -- in a completely open source manner."
Three years and 31 Google Hangouts later, OpenWorm is a going concern with Larson at the helm and a team spread across the continents. Alexander Dibert, Sergey Khayrulin, and Andrey Palyanov contribute software development from Russia, along with Matteo Cantarelli in the UK and Timothy Busbice in California. Neuroscientists Mike Vella and Padraig Gleeson are stationed at Cambridge and University College London, respectively. And of course, Idili in Ireland and Larson in San Diego. There is no central lab, nor could there be.
The OpenWorm team has broken down this immense task into five component systems. First, at the base of the project, they have a list of the 959 cells in the C. elegans body. The list includes a rough idea of what each of the cells does, thanks to decades of research on the worm. Then, they've got a life simulation engine they call Geppetto (shout out to Pinocchio!), which is the platform on which all the other software runs. Third, there is the simulated physical body. They are creating an algorithm for worm mechanics that can generate realistic muscle movements. Fourth, they have an electrical model for the muscles. What are the signals that they send and receive to move the animal? Last but not least, they must animate the connectome, the wiring diagram for the worm's nervous system.
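To make that division of labor concrete, here is a minimal sketch of how the five pieces might plug together in a simulation loop. This is purely illustrative Python; the class and method names are my own invention, not OpenWorm's or Geppetto's actual API.

```python
# A purely illustrative sketch of OpenWorm's five-part decomposition.
# All class and method names here are hypothetical, invented for this
# article -- this is not OpenWorm's or Geppetto's actual code.

class WormSimulation:
    def __init__(self, cells, engine, body, muscles, connectome):
        self.cells = cells            # 1. inventory of the worm's 959 cells
        self.engine = engine          # 2. the simulation platform (OpenWorm calls theirs Geppetto)
        self.body = body              # 3. physical model generating realistic worm mechanics
        self.muscles = muscles        # 4. electrical model of muscle signaling
        self.connectome = connectome  # 5. the animated wiring diagram of the nervous system

    def step(self, environment, dt):
        # Sensory input from the virtual petri dish feeds the nervous system...
        stimuli = environment.sense(self.body)
        neural_output = self.connectome.propagate(stimuli, dt)
        # ...the nervous system drives the muscles, which deform the body...
        forces = self.muscles.activate(neural_output, dt)
        self.body.apply_forces(forces, dt)
        # ...and the moving body changes what the environment presents next,
        # closing the feedback loop that permeates biology.
        environment.update(self.body, dt)
```

The essential point is the loop in step(): environment stimulates brain, brain drives muscles, muscles move body, and the body's movement changes what the environment presents next.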
Their team has been making steady progress, but being at the leading edge means that they're also at the leading edge of encountering the problems that any effort to simulate a brain is going to have.
For an outsider and non-biologist, simulating the C. elegans brain seems like it should be relatively easy. You've got the map of the neurons. You know where all the cells go in the body of the worm. You know how it behaves under all these experimental conditions. What's so hard about simulating its behavior?
We don't know how to simulate every single protein and nucleic acid in a cell. And even if we could, it would be computationally staggering to try to model each and every cell in the worm down to that atomic level, figuring out each and every molecular interaction inside these densely packed cells. No experiments can output that data.
You could eschew biological realism entirely. It would be relatively trivial to create a CGI worm that *looked* realistic. Perhaps one could make it behave realistically by running machine learning on worm behavioral data in particular situations. But that wouldn't be a very interesting simulation of the processes of life. It certainly wouldn't be a model that would help biologists much.
So, between realistically simulating every atom and realistically simulating nothing, OpenWorm has had to make some tradeoffs. Larson thinks about it like this. Imagine a graph. Along the X-axis, you've got the level of biological realism baked into the simulation. Do its cells do what real cells do? Which parts of the cells do what their biological counterparts do? Do the neurons work like biological neurons? And along the Y-axis, you've got the behavioral realism. Does this thing wiggle like a real worm? Does it respond to chemicals like a real worm? Does it attempt to and succeed in reproducing?
The problem is, as Larson explains, "we don't know how far you have to go to the right on the X-axis to go [a certain amount] up on the y-axis." They don't know what level of biological realism will get them to what level of behavioral realism.
And, buried in that question is a deeper one: When can we say, or scream, raising our twisted fingers to the sky as lightning flashes above, "It's alive!"?
For example, they are using a model of how neurons work called the Hodgkin-Huxley model, which garnered its creators a Nobel Prize. If they were to add more detailed simulations of the neurons, would that meaningfully add to the behavioral realism of the organism as a whole? Or can the principles of neuronal firing and propagation be abstracted from their biological embodiment without losing any behavioral fidelity?
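For reference, the standard textbook form of the Hodgkin-Huxley equations (the general model; not necessarily the exact variant OpenWorm runs) describes the membrane voltage $V$ in terms of sodium, potassium, and leak currents, each controlled by gating variables with simple voltage-dependent kinetics:

$$C_m \frac{dV}{dt} = I_{\text{ext}} - \bar{g}_{\text{Na}}\, m^3 h \,(V - E_{\text{Na}}) - \bar{g}_{\text{K}}\, n^4 \,(V - E_{\text{K}}) - \bar{g}_{L}\,(V - E_{L})$$

$$\frac{dx}{dt} = \alpha_x(V)\,(1 - x) - \beta_x(V)\,x, \qquad x \in \{m, h, n\}$$

Here $m$ and $h$ gate the sodium channel, $n$ gates the potassium channel, and the $\alpha_x$ and $\beta_x$ rate functions were fit to experimental data. "More detailed simulations of the neurons" would mean descending below this level of abstraction, toward individual channel proteins and their molecular states.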
Making decisions about these tradeoffs forms the core of the project. All biological simulation projects to date have faced similar challenges. Take the now defunct Canadian project called (cue techno!) Project CyberCell.
Led by Michael Ellison of the University of Alberta, the team wanted to create a simple E. coli simulation. The molecules inside cells form fantastically complex structures that are constantly moving around and changing shape. Modeling all that takes enormous computational horsepower, and that's assuming you know exactly how each protein is going to fold. It was too much to attempt. So, instead, CyberCell represented each molecule as a sphere -- "Every ribosome, every lipid molecule, every metabolite," Ellison said -- of approximately the right size. Then, they simply assigned each sphere certain probabilities of reacting with other spheres. "If the right enzyme connects with the right small molecule, there was a certain probability that a chemical reaction may take place," he explained.
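To see how simple that scheme really is, here is a toy Python version of the idea: every molecule is a featureless sphere, and a collision between the right pair reacts with an assigned probability. The species names and the probability below are invented for illustration; CyberCell's real parameters and spatial model were far richer.

```python
import random

# Toy version of the CyberCell scheme: every molecule is a featureless
# sphere, and when two spheres meet, a reaction fires with an assigned
# probability. Species names and the probability are invented here.
REACTION_PROB = {("enzyme", "substrate"): 0.3}

def collide(a, b):
    """Return reaction products, or None if the collision is inert."""
    p = REACTION_PROB.get((a, b)) or REACTION_PROB.get((b, a))
    if p is not None and random.random() < p:
        return ["enzyme", "product"]  # enzyme recycled, substrate converted
    return None

# Random pairwise encounters in a well-mixed virtual cell
molecules = ["enzyme"] * 10 + ["substrate"] * 100
for _ in range(10_000):
    i, j = random.sample(range(len(molecules)), 2)
    products = collide(molecules[i], molecules[j])
    if products:
        molecules[i], molecules[j] = products

print(molecules.count("product"), "substrate molecules converted")
```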
Is that realistic? Not really. But it made it possible to start experimenting. "We still don't know enough about the living organisms," Ellison told me. "50 percent of E. coli is still a black box."
That figure might be even larger for C. elegans, but it's still the best-characterized animal that researchers have got. It remains the only organism for which a complete connectome actually exists. Working in Nobel laureate Sydney Brenner's Laboratory of Molecular Biology in Cambridge during the 1970s, White and his team spent 13 years creating the wiring diagram. Electron microscopist Nichol Thomson cut the one-millimeter worms into 20,000 very thin slices, which -- because the worms are transparent -- he could then image with his microscope. "The thing that gave [Thomson] the biggest pleasure of all was to cut a long series of quality images," White told me.
Then, with White's direction, a technician named Eileen Southgate painstakingly labeled each nerve cell and connection in the micrographs. Through their work, they discovered C. elegans has 302 neurons that form approximately 10,000 connections. And Southgate traced each and every one. "I found out several years into her collaboration that as a hobby, she put huge jigsaw puzzles together," White recalled. "She has a wonderful visual memory." She began work at the lab when she was 16 years old and stayed until she retired.
The brain map was only one of several scientific feats accomplished with C. elegans. The worm was also the first multicellular organism to have its genome sequenced. And scientists precisely tracked its development from embryo to adulthood. There's even a database (WormBase) that contains more complete data about the organism's functioning at the molecular level than one could find for any other animal. Dozens of labs work with this little species. Brenner handpicked the organism precisely for its amenability to study, calling the worm "nature's gift to science." University of Kansas worm biologist Brian Ackley likes to joke that Brenner created C. elegans in a lab "because he was tired of working on things that didn't have perfect biological criteria." They're tiny, transparent, reproduce quickly, have a small number of neurons, and each body is composed of exactly 959 total cells.
"Brenner planned to use the worm to discover how genes made bodies and then behavior," wrote Andrew Brown in a book on C. elegans. "And this was in 1965, before anyone had found and analysed a single gene for anything." It is only today, in 2013, that his disciples' disciples' are beginning to fulfill that original vision.
In a 1974 paper quoted in the talk he gave accepting the Nobel Prize for Medicine, Brenner put it like this: "Behavior is the result of a complex ill-understood set of computations performed by nervous systems and it seems essential to decompose the question into two," he wrote, "one concerned with the question of the genetic specification of nervous systems and the other with the way nervous systems work to produce behaviour." In other words, how do genes build brains and how do brains direct bodies?
Now, finally, OpenWorm may be able to integrate the strains of research that began with Brenner into one simulation that, as it wiggles along in its digital petri dish, might be the first realistic virtual animal, a boon to research, and a Kurzweilian foreshadowing of the challenges humans face when we begin running life on silicon chips.
I asked several researchers whether simulating the worm was possible. "It's really a difficult thing to say whether it's possible," said Steven Cook, a graduate student at Yale who has worked on C. elegans connectomics. But, he admitted, "I'm optimistic that if we're starting with 302 neurons and 10,000 synapses we'll be able to understand its behavior from a modeling perspective." And, in any case, "If we can't model a worm, I don't know how we can model a human, monkey, or cat brain."
Ellison echoed that thought. "They stand a much better chance of success than the people working on mammalian brains," he said. White, who led the creation of the worm connectome, said OpenWorm "seemed appropriate really" as a way of integrating all the data that biologists were producing. And the Kansas worm scientist Ackley figured that even if OpenWorm didn't work, something like it would. "C. elegans is probably going to be the first or very close to the first [multicellular organism] to be simulated," he said.
David Dalrymple, an MIT graduate student who has contributed to OpenWorm and is working on a worm brain modeling project of his own, pointed out what he sees as a limitation to the effort. OpenWorm has incorporated a lot of anatomical data -- the structures of the worm's nervous system and musculature -- described by scientists like White. But these studies were carried out with dead worms. They can't tell scientists about the relative importance of connections between neurons within the worm's neural system, only that a connection exists. Very little data from living animals' cells exists in the published literature, and such data may be required to develop a good simulation.
"I believe that an accurate model requires a great deal of functional data that has not yet been collected, because it requires a kind of experiment that has only become feasible in the last year or two," Dalrymple told me in an email. His own research is to build an automated experimental apparatus that can gather up that functional data, which can then be fed into these models. "We're coming at the problem from different directions," he said. "Hopefully, at some point in the future, we'll meet in the middle and save each other a couple years of extra work to complete the story."
Forty-five years after Intel was founded by Silicon Valley legends Gordon Moore and Bob Noyce, it is the world's leading semiconductor company. While almost every similar company -- and there used to be many -- has disappeared or withered away, Intel has thrived through the rise of Microsoft, the Internet boom and the Internet bust, the resurgence of Apple, the laptop explosion that eroded the desktop market, and the wholesale restructuring of the semiconductor industry.
For 40 of those years, a timespan that saw computing go from curiosity to ubiquity, Paul Otellini has been at Intel. He's been CEO of the company for the last eight years, but close to the levers of power since he became then-CEO Andy Grove's de facto chief of staff in 1989. Today is Otellini's last day at Intel. As soon as he steps down at a company shareholder meeting, Brian Krzanich, who has been with the company since 1982, will move up from COO to become Intel's sixth CEO.
It's almost certain that the chorus of goodbyes for Otellini will underestimate his accomplishments as the head of the world's foremost chipmaker. He's a company man who is not much of a rhetorician, and the last few quarters of declining revenue and income have brought out detractors. They'll say Otellini did not get Intel's chips into smartphones and tablets, leaving the company locked out of computing's fastest growing market. They'll say Intel's risky, capital-intensive, vertically integrated business model doesn't belong in the new semiconductor industry, and that the loose coalition built around ARM's phone-friendly chip architecture has bypassed the once-invincible Intel along with its old WinTel friends, Microsoft, Dell, and HP.
And yet, consider the case for Otellini. Intel generated more revenue during his eight-year tenure as CEO than it did during the rest of the company's 45-year history. If it weren't for the Internet bubble-inflated earnings of the year 2000, Otellini would have presided over the generation of greater profits than all his predecessors combined as well. As it is, the company machinery under him spun off $66 billion in profit (i.e. net income), as compared with the $68 billion posted by his predecessors. The $11 billion Intel earned in 2012 easily beats the sum total ($9.5 billion) posted by Qualcomm ($6.1), Texas Instruments ($1.8), Broadcom ($0.72), Nvidia ($0.56), and Marvell ($0.31), not to mention its old rival AMD, which lost more than a billion dollars.
"By all accounts, the company has been incredibly successful during his tenure on the things that made them Intel," said Stacy Rasgon, a senior analyst who covers the semiconductor industry at Sanford C. Bernstein. "Tuning the machine that is Intel happened very well under his watch. They've grown revenues a ton and margins are higher than they used to be."
Even Otellini's natural rival, former AMD CEO Hector Ruiz, had to agree that Intel's CEO "was more successful than people give him credit for."
But, oh, what could have been! Even Otellini betrayed a profound sense of disappointment over a decision he made about a then-unreleased product that became the iPhone. Shortly after winning Apple's Mac business, he decided against doing what it took to be the chip in Apple's paradigm-shifting product.
"We ended up not winning it or passing on it, depending on how you want to view it. And the world would have been a lot different if we'd done it," Otellini told me in a two-hour conversation during his last month at Intel. "The thing you have to remember is that this was before the iPhone was introduced and no one knew what the iPhone would do... At the end of the day, there was a chip that they were interested in that they wanted to pay a certain price for and not a nickel more and that price was below our forecasted cost. I couldn't see it. It wasn't one of these things you can make up on volume. And in hindsight, the forecasted cost was wrong and the volume was 100x what anyone thought."
It was the only moment I heard regret slip into Otellini's voice during the several hours of conversations I had with him. "The lesson I took away from that was, while we like to speak with data around here, so many times in my career I've ended up making decisions with my gut, and I should have followed my gut," he said. "My gut told me to say yes."
In person, Otellini is forthright and charming. For a lifelong business guy, his affect is educator, not salesman. He is the kind of guy who would recommend that a junior colleague read a book like Scale and Scope, a 780-page history of industrial capitalism. To his credit, he fired back responses to nearly all my questions about his tenure, company, and industry at a dinner during CES in Las Vegas and later at Intel's headquarters. And when he wasn't going to answer, he didn't duck, but repelled: "I'm not going to talk about that."
On stage, however, during the heavily produced keynote talks CEOs are now required to give, Otellini's persona and company do not inspire legions of cheering fans. When he steps on stage, there is no Jobsian swell of emotion, no one screams out, "We love you, Paul!" And yet, this is the outfit that pushes the leading edge of chip innovation. They are the keepers of (Gordon) Moore's Law, ensuring that the number of transistors on an integrated circuit continues to double every couple years or so. If Otellini's CV is lacking a driverless car project or rocketship company, it may be because the technical challenges Intel faces require a different kind of corporation and leader.
"He's super low-key guy. He's not a Steve Jobs. He's not a Bill Gates. But his contribution has been just as big," said the new president of Intel, Renee James, who has worked with Otellini for 15 years.
His management secret was his own exemplary drive, discipline, and humility. He came in early, worked hard, and demanded excellence of himself. "He didn't yell and scream. He never dictated. He never asked me to come in on a Sunday. He never asked me to stay late on a Friday. But he had this way of getting you to rise to the occasion," said Navin Shenoy, who served as Otellini's chief-of-staff from 2004 to 2007. "He'd challenge you to do something that we'd all be proud of."
Peter Thiel might complain that the Valley hasn't invented rocket packs and flying cars because investors and entrepreneurs have been focused on frivolous nonsense. But Paul Otellini's Intel spent $19.5 billion on R&D during 2011 and 2012. That's $8 billion more than Google. A substantial amount of Intel's innovation comes from its manufacturing operations, and the company spent another $20 billion building factories during those two years. That's nearly $40 billion dedicated to bringing new products into being in just two years! These investments have continued because of Otellini's unshakeable faith that eventually, as he told me, "At the end of the day, the best transistors win, no matter what you're building, a server or a phone." That's always the strategy. That's always the solution.
Intel's kind of business and Otellini's brand of competent, quiet management are not in fashion in Silicon Valley right now. And yet, almost no one can claim the Valley more than Otellini. Every day for four decades -- in a career that spans the entirety of the PC era -- Intel's Santa Clara headquarters have been the center of his working world.
As we stood outside Otellini's corner cubicle, marked by a makeshift waiting room with a television, a couple of display cases, and a plucky plant, I asked him to reflect on what the end might feel like. "It is strange. I've been pinning this badge on every day for 40 years," he said. "But I won't miss the commute from San Francisco." After making thousands of trips down 101 and racking up 1.2 million miles on United through hundreds of trips around the world, he seemed ready to stop going.
The Many Computer Revolutions
Despite the $53 billion in revenue and all the company's technical and business successes, the question on many a commentator's mind is, Can Intel thrive in the tablet and smartphone world the way it did during the standard PC era?
The industry changes ushered in by the surge in these flat-glass computing devices can be seen two ways. Intel's James prefers to see the continuities with Intel's existing business. "Everyone wants the tablet to be some mysterious thing that's killing the PC. What do you think the tablet really is? A PC," she said. "A PC by any other name is still a personal computer. If it does general purpose computing with multiple applications, it's a PC." Sure, she admitted, tablets are a "form factor and user modality change," but tablets are still "a general purpose computer."
On the other hand, the industry changes that have surrounded the great tablet upheaval have been substantial. Consumer dollars are flowing to different places. Instead of Microsoft's operating system dominating, Apple and Google's do. The old-line PC makers have struggled, while relative upstarts such as Samsung and Amazon have pushed millions of units.
The chip challenges are different as well. Rather than optimizing for the maximum computational power of a device, it's energy efficiency that's most important. How much performance can a processor deliver per watt of power it sucks from a too-small battery?
The semiconductor industry itself has seen perhaps even larger changes. In the early days of Silicon Valley, chipmakers had their foundries right there in the Valley, hence the name. During the 1980s, Japanese chipmakers battled American ones, beating them badly until Intel turned the tide in the latter half of the decade. The factories moved out of the Valley, domestically to places like Chandler, Arizona, and Folsom, California, as well as to Asia, mostly Taiwan.
Meanwhile, each generation of chips got technically more challenging and the foundries required to build them got more expensive. Chipmakers needed to sell massive amounts of chips in order to make up the huge capital equipment costs. The industry became cruelly cyclical, booming and busting with a regularity that defied managerial skill. For all those reasons and more, during the last twenty years, the chipmaking industry has been consolidating. Almost all semiconductor companies are now "fabless," choosing to outsource the production of their silicon to Taiwan Semiconductor Manufacturing Company (TSMC), United Microelectronics Corporation (UMC), or GlobalFoundries, a venture backed by the United Arab Emirates. The new fabless chip designers don't have to build plants, which allows them to have more stable businesses, but they lose the ability to gain competitive advantage by tweaking production lines. The transition to this state of affairs killed off many companies and allowed others to thrive.
Add it all up and there are only a few chipmakers left standing: the aforementioned contract manufacturers like TSMC, plus Samsung and, of course, Intel.
These two structural trends at the consumer and industry levels intersect at a formerly obscure British company called ARM Holdings. Originally founded as a partnership between Acorn Computers (remember them?), VLSI (remember them?), and Apple, ARM now just creates and licenses the chip architectures that other companies tweak and have manufactured. In a sense, they sell a chip "starter kit" that companies like Apple, Qualcomm, Broadcom, Marvell, and Nvidia build upon to create their own products.
Chips based on the ARM intellectual property are generally not as high-performance as Intel's, but they're fantastically energy efficient. While ARM did make chips for Apple's ill-fated Newton device, in the early 2000s, ARM became the dominant architecture supplier to the so-called "embedded" market. These chips are not general computing devices, but have specific jobs in (for example) cars, hard drives, and factories. This specialization is also one of the reasons that ARM chips are cheap. An Intel microprocessor could sell for $100. ARM-based chips might sell for $10, and often less than a dollar. In the first quarter of this year, 2.6 billion chips using ARM's architecture were shipped.
The two key attributes of ARM's architecture -- energy efficiency and low cost -- developed before cell phones, but they were exactly what mobile designers were looking for. As the smartphone market exploded, so did ARM's share price, as investors realized what a key node ARM had become in the burgeoning computer-on-glass phone and tablet market.
For companies that are trying to decide whether to go with Intel or an ARM licensee, it's a bit like being asked whether you'd rather deal with Switzerland or the Aztec empire. "With ARM, when you are tired of Qualcomm you can go to NVIDIA or another company," Linley Gwennap, the boss of the Linley Group, a research firm, told The Economist last year. "But in Intel's case, there's nobody else on its team."
ARM-based designs are now found in more than 95 percent of smartphones. ARM may not be dominant in the way Intel is dominant in PCs, but the system it underpins is.
Simon Segars is the man who will have to deal with the fallout from all of ARM's successes. He begins as the new CEO of the company on July 1. I met him after he spoke on a panel about "multi-industry business ecosystems" at the Parc 55 hotel in the heart of San Francisco. He was tall and genial, happy to patiently and thoroughly explain why ARM had found itself in possession of so many friends and so much good fortune.
"I can genuinely say that our approach is to work within an ecosystem that is a healthy ecosystem. By that I mean the people in it are making money from what they do," he said. "We get questions on a regular basis, Why don't you quadruple your royalty rates? Because you're so strong, what are you customers going to do? We could do that and we could probably enjoy some more revenue for some time, but our customers would go off and do something else or have less healthy businesses. If we tried to extract lots of money out of the ecosystem, we'd have less companies supporting the ARM architecture and that would limit where it could go."
ARM is a company that finds itself in the right place at the right time with a philosophy of innovation that lots of companies want to believe in.
"Through the '90s and early 2000s, we saw an explosion in the number of people who could build a chip. That led to a lot of innovation and all the electronic devices that we see today," Segars said. "The role we've played is providing this core building block, this microprocessor, that many of these devices require. We've provided that in a very cost-effective way to anybody who wanted it. And that's allowed people to put intelligence into devices that they couldn't have afforded to do because they would have had to do it all themselves."The Mobile Mystery: What Did Otellini See and When Did He See It?
Many of the structural changes that occurred in these industries now seem predictable. It feels like somebody else could have positioned Intel differently to take advantage of these trends. At the very least, Otellini should have seen where the changes were leading the silicon world.
And the thing is, he did. He just wasn't able to get the Intel machine turning fast enough. "The explosion of low-end devices, we kinda saw as a company and for a variety of reasons weren't able to get our arms around it early enough," he admitted.
It was Otellini, after all, who had made the call to start developing the very successful low-power Atom processor for mobile computing applications. And it was Otellini who, upon ascending to the throne, drew a diagram that I'll call the Otellini Corollary to Moore's Law at the company's annual Strategic Long Range Planning Process meeting, or SLRP. He duplicated it for me in an appropriately anonymous Intel conference room, calling it half-jokingly "the history of the computer industry in one chart."
On the Y-axis, we have the number of units sold in a year. On the X-axis, we have the price of the device, beginning with the $10,000 IBM PC at the far left and extending to $100 on the far right. Then, he drew a diagonal line bisecting the axes. As Otellini sketched, he talked through the movements represented in the chart. "By the time the price got to $1000, sort of in the mid-90s, the industry got to 100 million units a year," he said, circling the $1k. "And as PCs continued to come down in price, they got to be an average price of 600 or 700 dollars and we got up to 300 million units." He traced the line up to his diagonal line and drew an arrow pointing to a dot on the line. "You are here," he said. "I don't mean just phones, but mainstream computing is a billion units at $100. That's where we're headed."
"What I told our guys is that we rode all the way up through here, but what we needed to do was very different to get to [a billion units]... You have to be able to build chips for $10 and sell a lot of them."
"This is what I had to draw to get Intel to start thinking about ultracheap," Otellini concluded.
"How well do you think Intel is thinking about ultracheap?" I asked.
"Oh they got it now," he said, to the laughter of the press relations crew with us. "I did this in '05, so it's [been more than] seven years now. They got it as of about two years ago. Everybody in the company has got it now, but it took a while to move the machine."
It took a while to move the machine. The problem, really, was that Intel's x86 chip architecture could not rival the performance per watt of the RISC-based designs licensed from ARM. Intel was always the undisputed champion of performance, but its chips sucked up too much power. In fact, it was only this month that Intel revealed chips that seem like they'll be able to beat the ARM licensees on the key metrics.
No one can quite understand why it's taken so long. "I think Intel is still suffering with the inability of this very fine company to enter a new major segment that changes the game," Magnus Hyde, former head of TSMC North America, told me. "That's been a problem before Paul, been a problem during Paul, and will probably be a problem going forward. They have all the things they need on the paper: the know-how, the customers, the cash to take over whatever they need. But somehow a little piece is missing."
"This is a company with 100,000 employees with a 40-year legacy. They are unbelievably good at what they do. No one can touch them," said Rasgon, the analyst. "There is a certain degree of arrogance that goes align with that."
"As CEO, that's your job: steer [the ship]," he continued. "It doesn't necessary mean [Otellini had] a failure of vision, but he couldn't get the ship to turn."
Ruiz, who led AMD's last battle with Intel while he was CEO from 2002 to 2008, told me he thought Intel's mobile progress had been slowed by their concentration on his company. "The focus the company has had for the past three decades on squashing AMD caused them to lose sight of the important trends towards mobility and low power," he said. "They should have focused more on their customers and the future than on trying to outdo AMD."
Some people seem to think someone else could have done better. And it's nice to believe in the transformative leader. Call it the Fire-the-Coach Fallacy. Sometimes, installing a new leader of an organization leads to better performance. But far more often, as some simple Freakonomics blogpost would tell you, we overestimate the importance of changing the coach or the CEO. It's not that CEOs are not important, but the preexisting conditions within and surrounding a company are just more important.
Unlike a lot of leaders, Otellini seems aware of this fact. "Intel's culture is blessedly not the culture of a CEO, nor has it ever been," he told me. "It's the Intel culture."
Otellini, of course, knew the Intel culture well. It had formed the substrate of his entire career. Starting out in finance in 1974, he'd worked his way up the chain on the business side of the operation, eventually landing the key gig of managing Intel's IBM account in 1983. It was right before Intel abandoned the memory business. He'd worked closely with Andy Grove, watching how he processed information, managed, and made decisions. He'd spent two years in the executive suite with Craig Barrett, watching him steer Intel in the rocky days after the Internet bust.
The Intel culture has been remarkably successful, of course. But it has also shown a resistance to change. It has successfully surfed massive transitions -- getting out of the memory business in 1985 to focus on microprocessors, retaining a leading position in the move from desktop processors to laptops -- but the same focus and scale that make Intel so powerful also prevent it from changing tacks quickly. If you've got 4,000 PhDs and 96,000 other people working for you, it's hard to turn on a dime.
Perhaps, though, the transformation that Otellini began in 2005 will finally be complete during Brian Krzanich's tenure. Intel's technical lead, perfectionism, and scale will create amazing chips at prices that cause phone and tablet makers to give up their commitments to the ARM ecosystem.
"They already have products in the marketplace that are competitive and I would not be surprised if they had best-in-class products in a few years," Rasgon said. "What they are doing on the [manufacturing] process has really driven that."
Otellini sees an analogy to the current situation in Intel's performance with Centrino laptop chips. "Intel made the big bet. [Chief Product Officer] Dadi [Perlmutter] and I made the big bet in 2001 to bet on mobile. This was when the desktop was 80 percent of all PCs, maybe 90 percent, and unabated growth and notebooks were luggables," Otellini said. "And we thought that there was an argument about what a computer could be and that led to what would become Centrino."
Centrino chips won over Apple's Steve Jobs because the silicon was so good they could not be ignored. "The head-to-head comparison of an Intel-based notebook and an Apple notebook was night and day in terms of performance, battery life, etc.," he said. "That's what got their attention."
And if Apple -- so notoriously anti-Intel that a 1996 Mac commercial showed a burning Intel mascot -- could come to love Intel processors, couldn't all the current ARM licensees see the blue Intel light?

A Battle of Innovation Cultures: The Lab Vs. The Ecosystem
Silicon Valley has been, rightly or wrongly, synonymous with innovation for four decades. Now, it's as much a notion as a place. When Paul Otellini joined Intel in 1974, a year of bloodletting at the company that also saw two of its future CEOs hired (Otellini and his predecessor Craig Barrett), the peninsula south of San Francisco and the Santa Clara Valley had merged in the American mind into the crucible for the future. Though Intel would only make $20 million that year, it was clear that these chips, and their tendency to get cheaper so quickly, were a new force unto the world. The whole enterprise was shaped by individual humans, structured by capitalism, and aided by Cold War R&D money, but the effects of all this memory and computation, its exponentiality, were hard to predict. A story led the New York Times business section a couple years later with the banner headline, "Revolution in Silicon Valley." The subheadline read, "'The basic thing that drives technology is the desire to make money,' says one executive. Now, where can they use the technology?"
Think of that as a kind of ur-mainstream media Silicon Valley story. It's got all the elements: an early reference to the orchards that used to exist, "low-slung" buildings as the unlikely seat of revolution, hot consumer products, hypercompetitive industries, massive innovation, great men, something like a formulation of Moore's Law, and the exceptionalist sense that this could only happen in this one place in California.
There are two conflicting narratives about all this Silicon Valley innovation. On the one hand, there is the notion that Silicon Valley is an ecosystem of entrepreneurs and inventors, financiers and researchers. Companies can break up and reassemble. Spinoffs can pop out of larger corporations. Startups can disrupt whole industries. Competitors can cooperate and then compete and then cooperate. And when you add up all these risk-taking, failure-forgiving people, the sum is greater than the parts. Fundamental to this notion is the idea that innovation happens best in networks of firms and individuals, in an ecosystem (a word that itself gained credence thanks, in part, to Stanford ecologist Paul Ehrlich in the late 1960s).
On the other hand, we have Intel. Intel structured and thought of itself like a research laboratory, according to long-time Silicon Valley journalist Michael S. Malone, in his 1985 book, The Big Score. "The image of a giant research team is important to understanding the corporate philosophy Intel developed for itself," Malone wrote. "On a research team, everybody is an equal, from the project director right down to the person who cleans the floors: each contributes his or her expertise toward achieving the final goal of a finished successful product."
Malone went on that the culture of Intel was not that of a bunch of loosey-goosey risk takers, but true believers, almost robotic in their dedication to Intel's goals. "Intel was in many ways a camp for bright young people with unlimited energy and limited perspective," he continued. "That's one of the reasons Intel recruited most of its new hires right out of college: they didn't want the kids polluted by corporate life... There was also the belief, the infinite, heartrending belief most often found in young people, that the organization to which they've attached themselves is the greatest of its kind in the world; the conviction they are part of a team of like-minded souls pushing back the powers of darkness in the name of all mankind."
This is a very different vision of innovation. This is an army of people tightly coordinated, highly organized, and hardened by faith. It was this side that competitors and suppliers have long encountered and complained about (sometimes appealing to the regulatory authorities).
"They are tough to deal with. I know some of the executives privately and they say, 'We're not really nice people to deal with.' They admit it. And it's true," Magnus Hyde, former head of Taiwan Semiconductor North America, told me. "They are really nasty when you get into negotiations."
And as for this whole "failure's cool!" mantra that echoes around Silicon Valley: Intel's Andy Grove enshrined what he called "creative confrontation," which encouraged and rewarded people for getting after each other over flagging performance or mistakes.
Taken as a whole, Intel is a self-contained research, development, and deployment machine. That is not an ecosystem. Though obviously Intel has many partners with whom it makes money and has good relationships, on the leading edge of innovation, Intel goes it alone.
Time and again, this strategy has worked as almost all of their competitors have fallen by the wayside. Intel is the only chip company in the world that's been able to hang on to its vertically integrated business model. "They have these methods, these Intel methods, that have worked very well for them," Hyde said.
The way Otellini vanquished AMD is a classic example of the Intel way. AMD had always played Brooklyn to Intel's Manhattan. Otellini himself had offers from both companies coming out of business school, and the competition remained fierce all the way until he took the reins. AMD was resurgent then. They had beaten Intel to market with excellent 64-bit chips that were perceived to provide more performance for less money than Intel's processors. AMD's stock was on a climb that would take it to dizzying heights. By the end of 2008, Intel had destroyed AMD's momentum and sent the company into a tailspin. Finally, in early 2009, AMD spun out its fabrication facilities, exiting the chipmaking game. It was a TKO in the longest-running bout in Silicon Valley. "They buried AMD," Rasgon put it bluntly.
Of course, there were several ugly court battles about Intel's hardball tactics in keeping AMD out of more machines. Intel eventually paid AMD $1.25 billion to settle the case in late 2009.
What's clear is that when Intel has a single competitor to focus on, they are hard to beat. "The thing about Intel is that we always come back," Otellini told me. "We put resources on it. We get focused. And watch out." They outinnovate, outmanufacture, and outcompete any company that comes into their sights.
Which brings us back to the question of mobile, the space that has eluded Intel for a decade. What's fascinating is that it's a battle between Intel and a swarm of companies licensing chip designs from a relatively small IP company, ARM. Intel has bulk and strength, but they've come up against that other model of innovation: the ecosystem. It's two ideas about how Silicon Valley works locked in combat. If you're the swarm, with Qualcomm as the queen bee, the question is: How do you hold the coalition together?
If you're Intel, which fly do you fire the shotgun at? Not ARM, that's for sure.
"ARM is an architecture. It's a licensing company," Otellini said. "If I wanted to compete with ARM, I'd say let's license Intel architecture out to anyone that wants it and have at it and we'll make our money on royalties. And we'd be about a third the size of the company."
"It's important for me, as the CEO, that I tell our employees who it is that we have to compete with and who we're focused on, and I don't want them focused on ARM. I want them focused on Qualcomm or Nvidia or TI," he continued. "Or if someone like Apple is using ARM to build a phone chip, I want our guys focused on building the best chip for Apple, so they want to buy our stuff."
I asked ARM's Segars about what I'd heard from Otellini, namely that Intel would beat the individual members of his coalition because they make the best transistors, and that would ultimately carry the day.
"There is a long track record of Intel investing very heavily on the leading edge of technology and implementing innovations of process technologies ahead of everybody else. That is a statement of fact and nobody would dispute that," Segars responded. "The transistors are, of course, important. The way in which the transistors are used is very important and really what the explosion of the technology space over the last couple of decades has shown is that there is a need to innovate and you can't focus innovation in just one company. If all the world's chips came from one vendor, whether it's Intel or anybody else, naturally that's going to limit innovation because there are only so many people and there will be a philosophy that's followed."
But Otellini, or Krzanich, can't focus Intel on ARM's "intangible" rhetoric. The questions industry watchers should be asking, Otellini said, are these ones: "Do you think Intel can beat Qualcomm? Do you think Intel can beat Nvidia? Do you think Intel can compete with Samsung?"
The answer might be yes, Intel can compete with each one, but maybe not with them all.
Or, maybe, the great machine will dominate once again. That's how Stacy Rasgon, the analyst who's been watching Intel and its rival chipmakers for two decades, sees it: "If I'm looking out five, ten years, they could potentially bury everybody else."
By all accounts, Rayid Ghani's data work for President Obama's reelection campaign was brilliant and unprecedented. Ghani probably could have written a ticket to work at any company in the world, or simply collected speaking fees for a few years telling companies how to harness the power of data like the campaign did.
But instead, Ghani headed to the University of Chicago to bring sophisticated data analysis to difficult social problems. Working with the Computation Institute and the Harris School of Public Policy, Ghani will serve as the chief data scientist for the Urban Center for Computation and Data.
Before the campaign, Ghani said that he found it difficult to use his data skills for social good. There were plenty of corporate jobs that wanted people who could do analytics, but not many non-profits. "The reason I got on the campaign is that I was trying to connect things I cared about with what I was good at. I wanted to use analytics and data for social problems," he said. "But when the campaign was done, I was back in the same place."
Crunching numbers at Google or Facebook, or in finance, data scientists feel they are doing something important because their analyses can change millions of people's lives or send millions of dollars caroming through the markets. "Improving search results or optimizing ad clicks, they are not improving lives but they are still having an impact," Ghani said. But they're not doing much for the non-corporate world.
Ghani hopes that his work at the University of Chicago can give students a way out of the impact/do-gooding conundrum. Towards that end, he's running the Eric and Wendy Schmidt Data Science for Social Good Fellowship, which is pairing up 40 computer science fellows from around the country with non-profits that have difficult data problems.
What kinds of problems are we talking about? "One of the problems that we're looking at is college admissions," Ghani told me. "You've got students with very high potential who are at risk of not applying to college or who apply to much worse colleges than they could get into... You want to be able to look at students and see who is at risk of this behavior." Perhaps there are early indicators of this kind of behavior lurking within an otherwise excellent academic record. If the students can be identified, they can be channeled into programs that help them.
It may not be as exciting as the Obama campaign, but it might go a small way to solving the problem so memorably identified by early Facebook employee Jeff Hammerbacher. "The best minds of my generation are thinking about how to make people click ads," he said. "That sucks."
You see, cats are sly and unpredictable. They can slip unnoticed from place to place, watching, listening, wryly judging, surveilling.
So, why wouldn't the Central Intelligence Agency stick a microphone in a cat's ear and embed a radio transmitter in her body? Oh, but they would! In an excerpt from her new book published at Popular Science, Emily Anthes describes the mid-century attempt to create a feline operative, Operation Acoustic Kitty. (See the redacted CIA memorandum describing the activity, too.)
In an hour-long procedure, a veterinary surgeon transformed the furry feline into an elite spy, implanting a microphone in her ear canal and a small radio transmitter at the base of her skull, and weaving a thin wire antenna into her long gray-and-white fur. This was Operation Acoustic Kitty, a top-secret plan to turn a cat into a living, walking surveillance machine. The leaders of the project hoped that by training the feline to go sit near foreign officials, they could eavesdrop on private conversations.
The problem was that cats are not especially trainable--they don't have the same deep-seated desire to please a human master that dogs do--and the agency's robo-cat didn't seem terribly interested in national security. For its first official test, CIA staffers drove Acoustic Kitty to the park and tasked it with capturing the conversation of two men sitting on a bench. Instead, the cat wandered into the street, where it was promptly squashed by a taxi.
The project made even hardened operatives squeamish. In 2001, a former CIA agent gave The Telegraph newspaper some more details about the animal. "They slit the cat open, put batteries in him, wired him up. The tail was used as an antenna. They made a monstrosity," he said.
For Anthes, though, Acoustic Kitty is only one in a long line of military cyborg creations, a line largely unrestrained by moral considerations. "We can make tiny flying cyborgs -- and a whole lot more," she writes. "Engineers, geneticists, and neuroscientists are controlling animal minds in different ways and for different reasons, and their tools and techniques are becoming cheaper and easier for even us nonexperts to use. Before long, we may all be able to hijack animal bodies. The only question is whether we'll want to."
There is no shortage of articles about the brutal work that the world's poor do to supply the companies that make consumer products. Occasionally, a horrific tragedy like the factory fire and collapse in Bangladesh will stir sympathies, but the day-to-day toughness of scratching out a living on the margins of society is hard to understand by reading statistics or hearing a couple of anecdotes.
Take this example. Uzbekistan is the world's third-largest cotton exporter. Their cotton goes into shirts everywhere. And to pick this cotton, the country's government has pressed schoolchildren into labor, according to human rights groups. Depending on the time of year and age of a worker, a cotton picker could have a daily quota of 50 kilograms (110 pounds) of raw cotton.
What's that mean? Is it possible to simulate the drudgery of the work? The designers at GameTheNews tried and, at least partially, succeeded. They created a simple game. There are two buttons. Both say pick cotton. And as you do, a bit of cotton -- between one and two grams -- goes into your pack. You can press the buttons quickly, but there is a short pause as your hand reaches into the pack. The fastest strategy is to switch from left to right button as fast as possible. But once you've figured out the optimal strategy for speed, you realize: You will have to hit these buttons 30,000 times or something in order to fulfill your quota! It would take, the designers estimate, eight straight hours of hitting the buttons to "win" the game.
But there is no winning, of course. No amount of speed or skill, no lifehacking or positive thinking could make the work more fun. It's just the 50 kilograms of cotton and the hours of work required to pick it for export.
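If you want to check the designers' numbers, the arithmetic is easy to simulate. Here's a minimal sketch in Python, assuming (as the game does) one to two grams per press, and assuming a pace of roughly one press per second -- my figure, not the designers':

```python
import random

QUOTA_GRAMS = 50 * 1000      # the 50-kilogram daily quota, in grams
SECONDS_PER_PRESS = 1.0      # assumed pace, including the pause to reach into the pack

random.seed(0)
picked_grams = 0.0
presses = 0
while picked_grams < QUOTA_GRAMS:
    picked_grams += random.uniform(1, 2)  # each press yields one to two grams
    presses += 1

hours = presses * SECONDS_PER_PRESS / 3600
print(f"{presses:,} presses, roughly {hours:.1f} hours of tapping")
# At ~1.5 grams per press, that's about 33,000 presses -- the same order of
# magnitude as the designers' eight-hour estimate.
```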
Kara Swisher has penned the definitive piece about how Facebook bought Instagram in the latest issue of Vanity Fair. The story is packed with context, drama, and detail as Swisher shows again that she has the best play-by-play game in tech journalism.
Less obvious is how much personality she packs into the quotes in her story. She does remarkable stuff with the access she got to Twitter's Jack Dorsey, Facebook's Mark Zuckerberg, and (obviously) Instagram's Kevin Systrom.
So, here, I present you with Zuckerberg's and Dorsey's quotes, as well as a selection of Systrom's.
Kevin Systrom, Instagram co-founder
"That idea that you could get rich really quickly off of starting a start-up didn't really exist in Massachusetts, on the East Coast, during that time."
"I was like, Great, I missed the Twitter boat. I missed the Facebook boat."
"Instead of doing a check-in that had an optional photo, we thought, Why don't we do a photo that has an optional check-in?"
"I was naturally inclined to take pictures, because it was much more about tweaking variables than it was necessarily creating something with your hands."
"I said, 'Well, you know what he does to those photos, right?' She's like, 'No, he just takes good photos.' I'm like, 'No, no, he puts them through filter apps.' She's like, 'Well, you guys should probably have filters too, right, then?' I was like, 'Huh.'"
"I'm not sure what changed my mind, but he presented an entire plan of action, and it went from a $500 million valuation from Sequoia to a $1 billion [one from Facebook]. Obviously, the equation was completely different."
"I think everyone thinks that the acquisition was made in a dark room with Trent Reznor music playing. Do you know what I mean? Like there was some dramatic thing. And it turns out that some of the biggest decisions get made relatively quickly, without much fanfare."
"It's wrong not to be thankful for what's happened."
Jack Dorsey, Twitter CEO
"From the start, Instagram was a simple application and a joy to use. I was blown away by how much detail they put into the experience. It reminded me about how much Kevin talked about photos [when he worked at Odeo]. There was an obvious obsession there, but it had never been put into practice until then."
"I found out about the deal when I got to work and one of my employees told me about it, after reading it online I got a notice later that day since I was an investor. So I was heartbroken, since I did not hear from Kevin at all. We exchanged e-mails once or twice, and I have seen him at parties. But we have not really talked at all since then, and that's sad."
Mark Zuckerberg, Facebook CEO
"Kevin would call me and I would call him."
"They got a lot of traffic from Facebook. And it occurred to me we could be one company."
"A gesture does not equal an offer, because every tech company is always talking to every other. So, I wanted to be very clear that we were very serious."
"This never had the feeling of negotiation, because we kind of wanted to work together."
"Most of the other things we bought were talent acquisitions, but in this case we wanted to keep what it was and build that out."
I was running on a hot day in Denver, bikes whizzing past me ("On your left!"), thinking about online media, when I hit on an analogy that might explain why I love blogging so much and why I find the newspaper apparatus so interesting.
Digital media is like a fixie*. You've probably seen fixed-gear bikes if you live in a big urban area or have visited Portland. Unlike the 10-speeds we grew up with or a 24-speed mountain bike, fixies are set to a single gearing. That is to say, the relationship between a pedal rotation and a wheel rotation is fixed. (Perhaps at a ratio of 2.5 or 3: one full pedal rotation moves the bike forward 2.5 or 3 times the circumference of the wheel.)
Now, gears are magic kinetic proof of the wonderful laws of physics. And they come in handy: by changing that ratio, you can make pedaling easier going up a hill or switch to a high gear ratio (say 4:1), which allows you to go fast, as each pedal rotation drives the back wheel farther. In general, the gears help you match the work your legs are doing to the terrain.
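To put rough numbers on that, here's a minimal sketch. The wheel size is my assumption (roughly a 700c road wheel), not a spec:

```python
WHEEL_CIRCUMFERENCE_M = 2.1  # assumed: roughly a 700c road wheel

def meters_per_pedal_stroke(gear_ratio: float) -> float:
    """Distance the bike travels for one full pedal rotation."""
    return gear_ratio * WHEEL_CIRCUMFERENCE_M

for ratio in (2.5, 3.0, 4.0):
    print(f"ratio {ratio}:1 -> {meters_per_pedal_stroke(ratio):.1f} m per stroke")
# A fixie is locked to one of these numbers; a geared bike gets to choose.
```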
That gearing does have some costs. Namely, the gears are expensive, heavy, and require more maintenance. And of course, the whole riding process is more complicated because you have to switch gears.
Fixies, by contrast, are (generally speaking) lighter and have fewer parts than bikes with more gears. Less maintenance is required. Some even have no brakes (although most fixies you see in a city do). Plus, there's still a whiff of countercultural street cred thanks to these bikes' association with bike messengers and overall badassness.
To spell out the analogy: pedaling is the reportorial/analytical work of journalism and the distance the bike moves is the amount of output (posts, etc) that comes from that effort.
At most online media outfits I know, people just pedal as fast as they can with posts spinning out of that work in very simple ways. What you report and analyze, you write. It's hard work, but conceptually easy. The whole process of publishing is stripped down to the essentials. The apparatus doesn't have a lot of costs.
OK, now for my key point: editors are gears.
Without being too mechanistic about the analogy: editors can change the way effort is distributed. They can gear down and have reporters pedal like crazy without posting a ton of stuff. They can throw a bunch of effort at a story so that it can move really quickly. This was really clear during the Boston Marathon bombing coverage. A machine like the Boston Globe can do things that are amazing, in large part because they have a bunch of editors who can direct things. That task is a lot harder in very flat online media organizations that don't have much editorial gearing.
This is not an argument about whether online media is better or worse than previous ways of publishing news and analysis. But it is different, and I wonder if online media companies won't start adding gears, taking on complexity in order to give themselves more flexibility in different reporting situations.
* Just to point out the obvious joke that some will be tempted to make: "Oh, it's like a fixie? You mean a BS hipster trend that's going to disappear in a couple of years." Yes, I get that as a joke. No, that's not what I mean.
The first time I heard the lie, I was in fifth grade. Mr. Ward took me aside (or maybe he told the whole class, it was a long time ago) to tell me about the wonders of Dvorak, a different keyboard layout that was scientifically designed to be more efficient than the standard layout. That layout was called QWERTY, he explained, and it had been created to slow typists down. You see, in the olden days, mechanical typewriters could jam if people hit the keys too quickly, so they had to put the common letters far apart from each other. The modern keyboard, I was told, was a holdover of the mechanical age.
Since then, I've heard this story repeated a thousand times. So many times, I had assumed it was true. But Jimmy Stamp over at Smithsonian points to evidence released by Japanese researchers that, in fact, the story is bunk. The QWERTY keyboard did not spring fully formed from Christopher Sholes, the first person to file a typewriter patent with the layout. Rather, it formed over time as telegraph operators used the machines to transcribe Morse code. The layout changed often from the early alphabetical arrangement, before the final configuration came into being.
The researchers tracked the evolution of the typewriter keyboard alongside a record of its early professional users. They conclude that the mechanics of the typewriter did not influence the keyboard design. Rather, the QWERTY system emerged as a result of how the first typewriters were being used. Early adopters and beta-testers included telegraph operators who needed to quickly transcribe messages. However, the operators found the alphabetical arrangement to be confusing and inefficient for translating morse code. The Kyoto paper suggests that the typewriter keyboard evolved over several years as a direct result of input provided by these telegraph operators.
That is to say, the lesson of the QWERTY story remains the resilience of a design created for an outmoded technology's dictates. QWERTY is still an example of technological momentum. But the development of the design wasn't accidental or silly: it was complex, evolutionary, and quite sensible for Morse operators.
Keyboard configurations are newly important as we think about how we should type on tablets and other devices. The calling card of the personal computer was the keyboard, and now, we are carrying around pieces of glass on which we simulate the old QWERTY design. Are we going to keep that layout going? Perhaps QWERTY will always be good enough. But if not, how might a new design develop?
One day in March, I was sitting across from Facebook's design director, Kate Aronowitz, at 1 Hacker Way in Menlo Park when she told me, "It takes a lot of work to create the perfect empty vessel." In this near koan lay a design philosophy and an explanation: Facebook as a series of beautiful empty vessels into which users pour their text and photographs, hearts and minds.
Over the next few weeks, I kept thinking about this near koan: How do you design an empty vessel? Is there any such thing? So I went back to Facebook and asked some more questions.
"We tend to think of everything in terms of social design. The box, for us, is a vehicle to allow one person to communicate with another. It's entirely about who's on the other end of that box, not really the box itself," Facebook designer Russ Maschmeyer told me in a different conference room. "Our overarching design goal is to make that box as invisible as possible, so that your content is the thing that's most important."
That is not as easy or as simple as it sounds. The seemingly small decisions about how boxes should look, how the text of the site reads, and what actions users can take are defining what Facebook is. Like the government backers of the institutionalization of the Postal Service, the monopolistic heads of Ma Bell, the nerds who developed email protocols, or the suits who've directed the deployment of the nation's cell phone networks, Facebook's designers are structuring the social experiences of vast numbers of people. Who they are and how they think will change the way a billion people experience the world, not to mention what Facebook is.
The company clearly sees designers as a key to its future. Just look at how many they've snapped up. The spree began in June of 2011, when the company picked up Sofa, an Amsterdam-based design studio. Then, in August, they bought Push Pop Press, which was seen as an acquihire of designers Kimon Tsinteris and Mike Matas, who designed several key pieces of the iPhone interface. The next month, Maschmeyer joined up. In December, Facebook bought the check-in service Gowalla, largely for its design team. 2012's haul got started with Elizabeth Windram, who helped design Google Search and was lead designer for Google Maps. Rdio's head of design, Wilson Miner, was the next designer to fall in May of 2012, followed closely by the acquisition of the design research firm Bolt Peters. In July, Justin Stahl, creator of The Font Game, came on board. In September, design researcher Marco De Sa came over from Yahoo. And finally, last month, Facebook bought Hot Studio, a design agency that had been independent for more than 15 years.
There has been some turnover -- Nicholas Felton left this month after two years on duty, and Ben Blumenfeld recently left to work on his design-oriented angel fund -- but on the whole, Facebook's been stacking up design talent by any means necessary. A question posted on the Q&A site Quora even asked, "Is Facebook's stockpiling of design talent bad for the industry as a whole?" Designer and entrepreneur Zach Klein, of Vimeo fame, among other things, put it like this: "Facebook is the New York Yankees of design teams." And that was before Maschmeyer, Miner, Bolt Peters, and Hot Studio.
Aronowitz is the woman behind the hiring-acquiring binge. Elegant, smart, and self-possessed, she strikes me as the Theo Epstein (former Red Sox GM) of the design world: she seems to be able to create just the right circumstances to bring talent to Menlo Park. Her role is not like that of Apple's Jony Ive, though, pushing a particular design philosophy throughout the company. She's supposed to build the design team and release its designers into the company to work side-by-side with the engineers.
As all these designers vanish into the bowels of the company, so, too, does their work. Facebook wants to create design that both allows and guides behavior without calling attention to itself. And what works in the Deep South must also work in southern India and South America. It must work for 16-year-olds and 86-year-olds.
In practical, want-to-send-a-message-to-my-sister terms, this is a good thing. As Maschmeyer put it, Facebook wants to "invite interaction in the most minimal way possible." That means killing as much "chrome" as possible. Chrome is all the little stuff that makes up what you see as the user interface. Chrome tells you what to do. User behavior researcher Jakob Nielsen came up with this general definition:
Chrome is the visual design elements that give users information about or commands to operate on the screen's content (as opposed to being part of that content). These design elements are provided by the underlying system -- whether it be an operating system, a website, or an application -- and surround the user's data.
Maschmeyer gave me a great example of this, which you will no doubt recognize from using the web. "Some sites, when you upload a photo, will take the photo and slap fake Polaroid borders on it and give it a drop shadow and put it in this fake stack with other photos," he said. "Those are all examples of chrome and our goal is to remove as many of those pixels as possible, so your content takes up as many pixels as it can."
In some cases, as with Maschmeyer's project, Graph Search, Facebook is willing to force its users through a quick demonstration -- known as a new user experience, or NUX -- so that they can keep the permanent design even simpler. Even though the grammar required by Graph Search is only sort of intuitive, a couple of simple examples taught most people how to use it. That let Maschmeyer reduce all the permanent user instructions for a complex and powerful search tool into exactly one line of text on a blue background.
"Some of the best architecture isn't about, 'Look at this amazing beautiful building I made," Maschmeyer told me, "But look at the amazing activities that I'm allowing people to undertake within this space. And designing the space to facilitate in the best possible way those activities. I think we take the exact same approach with our 'boxes.'"
He continued, "We want to create the space in which people can communicate the emotions, ideas, thoughts, wonderful things they find, beautiful images that they see in the most efficient and clean way possible," he concluded.
"I think we're lucky at Facebook, people share a lot of really beautiful content. We don't really need to add any more to that," another designer Vivian Wang told me.
Similar sentiments have been expressed all the way up the chain and into the mouth of Mark Zuckerberg himself. And I think Facebook has created the most efficient engine for sharing, archiving, and monetizing text and pictures that the world has ever known.
And that goes for private as well as public communications. A large percentage of the interactions on Facebook happen privately, outside all the mechanisms Facebook has for rewarding and encouraging sharing. Peter Deng, who manages Facebook's communications platforms, told me that people spend a lot of time communicating with close friends on Facebook. In fact, for any given user, 80 percent of the messages that he or she sends go to a group of about four people.
It is just a fact that there is no better designed way online to talk with friends (who are on Facebook, of course). The grumbling you hear about Facebook -- some on these very pages -- is a testament to how powerful the system is, how well it works, and the level of usage it inspires.
Disappearing from the user's view doesn't just happen through the graphical user interface elements. The text also has to communicate without drawing attention to itself. Content strategist Alicia Dougherty-Wold is responsible for the words that you see on Facebook. "The content strategy team is a really important part of that holistic experience disappearing. In all of the prompts on the site like, 'What's on your mind?' we are using a voice and tone that is deliberately very conversational to make you feel at ease," Dougherty-Wold said. "We're trying to set that feeling that you are in a comfortable room on a comfortable sofa in a comfortable place to talk to the people you care about the most. And we're trying to do that very subtly with language."
They even take care not to create any emotional friction as you enter your life details into Facebook. One fantastic example that Dougherty-Wold gave me was adding a "life event" on Timeline. "There's a menu of those events and a typical menu would list the options alphabetically," she said, "but if we did, you'd have divorce sitting on top of engagement. The content strategist who worked on that menu had a tremendous amount of empathy." The list was reordered to follow the arc of a relationship. "Just by not making you think about divorce at the same time that you're thinking about engagement," she concluded, "we're getting out of your way."
In fact, they have three rules for disappearing from sight: "Keep it simple, get to the point, and talk like a human." These are not too far away from the rules we try to use here on The Atlantic Tech.
"I think if we're doing our job, you're not feeling like it's mediated."
But one thing kept sticking for me as I thought about how remarkably and cleverly constructed the Facebook world really is: While the interface and words might not attract your attention, they are still structuring your behavior. And you'll probably never even notice. It's kinda nice that Facebook doesn't guide you to think about divorce while you're entering in your engagement. But that decision is still a reflection of an ethos, and that's something the company doesn't seem to want to own.
This crystallized for me during an exchange I had with Dougherty-Wold towards the end of our conversation. After she told me that Facebook's writers try to talk like humans, I replied, "But it is fundamentally still the voice of the borg. It's not like [users] are talking to a human. It is still a system that they are interacting with and not another person."
"They are talking to each other, right?" she countered. "If you're using Facebook, you're telling your story to whoever you choose to be friends with."
"But it's still mediated through the structure and you're the voice of that layer of mediation," I said
"I think if we're doing our job, you're not feeling like it's mediated," she said
"But it is," I insisted.
"When you call your mom on the phone, are you thinking, 'I am talking on a device'?"
"That's an interesting question," I said. "I would say yes. But I can understand why people say no."
"I would say, I'm talking to my mom. The only time I would say I'm talking to a device is when my cell carrier drops."
At any given moment, yes, it is probably more important to you that you're talking to your mother than talking on a phone. But what about the system that allows that voice to come through that particular handset? I see Dougherty-Wold's point, but I wonder: what responsibility do the system makers have in helping us think about the system?
"You don't improve the experience of nailing things by pretending the hammer doesn't exist."
Can we wave away the structure of our tools so easily? And are we comfortable with doing so around the highway system or the way food is produced in this country or gun ownership? Are all technologies neutral? ("Facebook doesn't friend request, people do.")
When it comes to the system that Dougherty-Wold uses to talk with her mother, cell phone companies' unreliable services unintentionally highlight their weaknesses -- and perhaps the weaknesses of the way spectrum is allocated in the United States, which might motivate people to some kind of political or consumer action. Facebook does the same when it has privacy snafus or switches up the way the service works. Wait, there was a structure all along?
Designer Dylan Fareed, former director of technology at 20x200, disputed the idea that Facebook's mediumness could simply disappear.
"Pure conduits don't actually exist. Ideas communicated over Facebook/Twitter/SMS/emoji/passenger pigeon/smoke signal are as much about the medium shaping the idea as they are actually about the idea itself," Fareed told me via email. "And without any doubt a primary purpose for interface is to make legible what is otherwise happening invisibly. You don't improve the experience of nailing things by pretending the hammer doesn't exist."
And Facebook, Fareed argued, being as integrated into a billion people's lives as it is, "has a responsibility to be more communicative about what happens under the surface when we interact with their services."
After all, the UX researcher Nielsen had a simple argument for chrome, the very thing Facebook is seeking to minimize. "Chrome empowers users," he wrote, "by providing a steady set of commands and options that are always visible." Less chrome means fewer options for users. Less chrome means being funneled down paths without even knowing that others might exist. Of course, they might be the very paths that you would be most likely to choose -- in fact, they almost certainly are.
Last year, Facebook compared itself to a chair, the consummate tool, in an advertisement that's been viewed more than 2 million times on YouTube.
Chairs. Chairs are made so that people can sit down and take a break. Anyone can sit on a chair. And if the chair is large enough, they can sit down together and tell jokes and make up stories or just listen. Chairs are for people. And that is why chairs are like Facebook. Doorbells. Airplanes. Bridges. These are things people use to get together so they can open up and connect about ideas and music and other things people share.
But is this an apt comparison? If Facebook were a chair, what kind of chair would it be? Without any prompting, and without my sending him the advertisement, Mike Monteiro of Mule Design had a harsher take on Facebook's status as a designed object.
A well-designed chair not only feels good to sit in, it also entices your ass towards it. So this is nothing new to Facebook. Where it gets interesting to me is when you start asking to what end you are designing. The big why. In the chair example, the relationship is clear. If I can design a chair that entices your ass, then you will buy it. I've traded money for ass happiness (and back happiness, but that's less sexy). But it's clear who the vendor and who the customer is in that case.
Where I have issues with Facebook is that they're dishonest about who the customer is. They've built an enticing chair, and they let me sit in it for free, but they're selling my farts to the highest bidder.
Monteiro admits that 90 percent of the web works the same way. "Facebook bothers me more than most because they're both so blatant and so good at it," he said.
While these things might seem like a problem solely for users, I think they're a problem for Facebook, too. Facebook has relentlessly focused on what their users want, according to the metrics they can capture. The company itself, its goals and aspirations, profit and growth targets, are subsumed into the quest to put the user first. And yet, Facebook is a company. They are a mediating force. They are not a chair or a doorbell or a bridge, even if that fiction creates the most convenient experience for the company and its users.
But there's something that happens when the reality shows through. People get so used to Facebook disappearing that when the company or the technology inevitably rears its head, they are appalled to find that they've been communicating on a tightly managed, for-profit system all along. Which is why, oddly, it might help Facebook to design in more signs of mediation, a little more chrome, a little less perfection.
Take a look at what happened with Facebook Home. It feels simple, but underneath the hood, it's all data-driven to be a great phone experience. Facebook knows that people look at their lock screens much more often than they use their phones. Facebook knows that people open up Facebook more often than anything else on their phones. And Facebook knows that picture content is the most engaging content they have. Ergo, their differentiating experience is to show you Facebook photo content on the lock screen.
This is exactly what people should want, or rather, do on their phones. And yet, there are overwhelmingly negative reviews for Home in the Google Play store, where the app has an average rating of 2.2 out of 5, with more than half the current ratings coming in at 1, the lowest score possible. User after user says things to the effect of: "Great, but now I can't use the rest of my phone except for Facebook."
Even if what they want to do most is use Facebook and this makes it better and easier, they don't want their phones' possibilities foreclosed. When Facebook's power -- as reflected in its designers' ability to control your experience -- runs up against your own perceived power, what do you do? What happens when you notice the opportunities, limitations, and obligations that are packed down into this term, user?
It's a genuine dilemma that users don't have the collective power to solve and that Facebook doesn't have the incentive to address. Warily, warily, we roll along.
Christina Agapakis is a rising star among the new generation of biology researchers. Trained in the science of custom-building organisms known as synthetic biology, the UCLA researcher likes to think about the way her field intersects with culture and industry more broadly.
Case in point: Through a program of the BioBricks Foundation, she worked with artist Sissel Tolaas to create cheeses cultured with the microbes that help produce our body odor. The project highlights the meaning that humans assign to the productions of the invisible world of bacteria. And Agapakis wants us to rethink our relationships with the microbial communities that live in and around us.
"Re-contextualizing these ostensibly 'bad' smells, we saw that when the odor is in cheese it smells good and it's a sign of culture and good taste. But the same smell on a body is disgusting," she told me an interview for our most recent issue. "By making cheese using bacteria from the body, we're showing that we should be able to think about the microbes in our lives in different ways."
Since the beginning of the 20th century, we've learned so much about the machinery that powers life, but the larger societal and political issues that the biosciences raise receive far less attention than technological developments like smartphones or social networks. Biology is so complex that we need people like Agapakis who provide pathways towards a better understanding of how we interact with all the life we can't see.
In this extended remix of the print Q&A, we talk about the long-term potential of biology, that cheese project, and the potential to engineer the microbial ecosystems of our digestive tracts.
People have big expectations for biology in the 21st century. Many say that biotech will be as big as information technology was in recent decades. Is that true?
People want synthetic biology and biotechnology to be the next industrial revolution. Looking back, people have tended over time to imagine bodies functioning in ways that were analogous to the dominant technological paradigm of their day, whether that was steam engines or computers. I hope that soon biology will be the technology we judge things by. Maybe we're going to see industry and computational stuff start to look more like biology, rather than biology looking more like industry and computation.
What would it mean to have industry look like biology?
Well, people are trying to push synthetic biology [in the direction of] the chemical industry--to replace any petrochemical with a biological process. You could have a vat of bacteria that's going to make the chemicals that you want. That model can be good, but it's limited. It isn't trying to rethink the way we use chemicals and do industry. Daisy Ginsberg, an artist and a writer and designer, says, "It's a disruptive technology that doesn't really disrupt anything." If we still have gasoline, just made of bacteria in a vat, that may not be the right vision for the future.
People talk about creating standard DNA "parts," called BioBricks. What are those?
The idea behind BioBrick parts is that you can have a collection of pieces of DNA that have specific useful functions -- off-the-shelf DNA parts. You are able to say, "Okay, I need a part that is fluorescent," or "I need a part that will activate in response to this chemical." Then you can mix and match: you put them both in a bacterium, and then you have fluorescence in response to some chemical -- so we can have this kind of RadioShack.
It seems like the human body is getting more attention as an ecosystem of microbes and human cells working together. You explored this in a fascinating way by making cheese with human skin bacteria, right?
I was getting really into microbial ecology when I started a design fellowship with an arts and science group, Synthetic Aesthetics. I was moving away from the BioBrick model and into mixing and matching of whole cells. I was paired with Sissel Tolaas, who is an odor researcher. She calls herself a professional provocateur-- she lives between a lot of different fields, from perfumery and odor science to in-your-face art projects. She'll do things like paint people's body odors on walls in galleries.
It's not connected to the armpit, it's not as gross, but it is kind of gross. So why is it gross? She says things like, "Nothing stinks, only thinking makes it so." I was really interested in saying, "OK, where does body odor come from?" It's from this relationship between the bacteria that live on our skin and our own metabolism. We Googled "body odor," and we kept finding that the molecule responsible for body odor was isovaleric acid -- that's a really sweaty smell. Then we looked at some of the microbes responsible [for producing isovaleric acid] and we found Propionibacterium. When you just Google "isovaleric acid Propionibacterium," the whole first page of Google is about Swiss cheese.
This is still gross. But go on.
Re-contextualizing these ostensibly "bad" smells, we saw that when the odor is in cheese it smells good and it's a sign of culture and good taste. But the same smell on a body is disgusting. By making cheese using bacteria from the body, we're showing that we should be able to think about the microbes in our lives in different ways.
To what extent could we actively engineer our own microbial ecosystem in the future?
We can influence it--we can change the diversity in our gut, and that can influence health. There's the fecal-transplant example: Sometimes even antibiotics can't clear up serious digestive infections, and you can't repopulate the gut with enough good bacteria to get rid of the bad ones. But if you transplant the microbial community from a healthy gut into the person who has this infection, the healthy bacteria will push out the infectious bacteria. The challenge is, you can't say "You need this many of this and this many of this, and it's going to stay like that forever." It's more a matter of setting the right initial conditions.
There seems to be a tension between the complexity of life, which only gets more intricate the closer you look, and the speed of improvement in the DNA-sequencing technologies that allow us to see that intricacy. The more we learn about the building blocks of life, the more we realize just how much we still don't understand. Which will win out in the short term--the sense that we know more than ever, or the sense that life is even more mysterious than we'd grasped?
It's not really a matter of "winning." Tools that read and write DNA help us understand that complexity, but they're not enough. Sequencing is not going to tell you how genes are activated, how proteins interact with each other, how the cell interacts with its environment and with other cells. We're seeing, in the explosion of other kinds of "-omes" [for example, genomes, proteomes, metabolomes], a complexity that will require more than DNA sequencing to decipher.
The price of DNA synthesis is falling, but the overall price of synthetic-biology projects isn't going down at the same pace, because there is a lot more to the design, construction, and testing of synthetic systems. As Stanford's Drew Endy likes to say, "Just because we can write DNA doesn't mean we know what to say." An artful biological design is an incredibly complex endeavor, not just because of the complexity inside the cell. We also have to think about how applications will be marketed, regulated, and patented; how they will interact with the environment; and many other things that we won't learn from just the sequence--if at all.
The Hubble Space Telescope is aging. But there was a time when it was merely a twinkle in some astronomer's eye.
In fact, we know exactly who that astronomer was, and when he first told the world about the twinkle.
Lyman Spitzer, who was at Yale in 1946 (and later went to Princeton), published Appendix V of the Douglas Aircraft Company's Project RAND report. The title of the work was "Astronomical Advantages of an Extra-Terrestrial Observatory."
"While a more exhaustive analysis would alter some of the details of the present study," Spitzer wrote, "it would probably not change the chief conclusion -- that such a scientific tool, if practically feasible, could revolutionize￼ astronomical techniques and open up completely new vistas of astronomical research."
Spitzer's original paper was republished in The Astronomical Quarterly in 1990, and he added a postscript about the impact of his paper, which is actually a remarkable document itself. How does an idea written down somehow become a satellite flying around Earth? "Since this 1946 paper did not appear in the astronomical literature and was not generally distributed in reprint form, its direct influence on other astronomers must have been almost negligible," Spitzer writes. "Its chief effect was on me. My studies convinced me that a large space telescope would revolutionize astronomy and might well be launched in my lifetime."
From that point forward, he promoted the creation and launch of such a telescope. Over his years at Princeton, he worked out some of the technical problems and talked with other astronomers. Twenty years later, during the heat of the space race, the National Academy of Sciences asked Spitzer to head up a committee "on the Large Space Telescope" in 1966, when such a project began to look more feasible. They issued a report in 1969.
"During the work on that report, possible astronomical observing programs were discussed in detail with various groups of astronomers, who in the course of these discussions generally became enthusiastic supporters of such a large and powerful telescope," he recalled. "This support was a major element in Congressional approval of the large telescope project in 1977."
While people were aware of the limitations of earth-based telescopes, it was Spitzer who articulated and promoted the vision of the orbital observatory, NASA historian Gabriel Okolski agreed.
It took a good eight years to get funding, and another 13 to build and launch the Hubble. And, in some ways, that is the crowning glory of science: the timescales. Spitzer saw this thing through to completion over more than four decades!
I don't think that kind of life approach comes naturally to people. It's a remarkable set of institutions that makes such long-term thinking possible.
This weekend, I was talking with a graduate student who works on stem cells in the heart. She said, "The problems I'm thinking about now are probably the problems I'll be thinking about when I die."
I have a completely unsubstantiated theory that my social media feeds have moods. Sometimes, everyone is happy and debates are civil. Other times, people are ragged and nasty. Whether it's national tragedies like the Marathon bombing and Texas fertilizer plant explosion or something simply controversial, like Lean In, my Twitter feed can suddenly become filled with snark, condescension, and anger.
In those times, I like to imagine the many people who do not follow the news of the day, who work outside, who wander lonely as a cloud, who live life at a slower pace, who tend to a flock like the shepherds of yore.
And wow, will you look at that, there is a real, actual shepherd on Twitter now: @herdyshepherd1 of the Lake District of England. And he tweets while he herds.
"Moving ewes with lambs off lambing fields to avoid mix ups."
"Can lambing soon be over please."
"I reckon this lamb will be a cracker someday... 1 hour old and knows how to stand and show off. http://pic.twitter.com/b7sJuXH65y"
Etcetera. There are sundry pictures of lambs and sheep and sheep dogs. There's even a shepherd's crook in some photos, though (TAKE THIS, NOSTALGIA) it appears to be plastic.
What's so nice is that these tweets just pop up in my feed right alongside The Daily Outrage and breaking news alerts, reminding me, "Hey, other people are birthing lambs in a field. There is life outside the scrum."
Which is a good thing to remember. Also: newborn lambs!!!!
Hat tip: @FakeTV
I've said it before and I'll say it again: Rick and Megan Prelinger, the curators of the Prelinger Archives and Prelinger Library, are a national treasure. Following their own interests and supported by their talent and insatiable curiosity, they've assembled and digitized a vast collection of ephemeral films from the 20th century. And they've put them online for all of us to enjoy (with Creative Commons licensing, no less).
You can browse the collection by subject, sponsor, producer, title, or date, but the best way into this collection is to pick something at random. And for that, there's the "Surprise Me!" button. Unlike most "I'm feeling lucky" buttons, this one actually yields things worth watching.
In my first three spins, I got "Vision in the Forest," a cringeworthy short film starring country singer Vaughn Monroe sponsored by the National Forest Service. Then, I drew "To Market, To Market," about the birth of outdoor advertising in Chicago (along with a heavy dose of Cold War-era American capitalism promotion). And finally, my good fortune brought me a masterpiece, The Private Life of a Cat, a film Alexander Hammid made with his wife, Maya Deren. The film is intimate and lovely, with cat point-of-view shots and more depth and drama than seems possible. This is not a joke, and these filmmakers were not jokers. This was their second collaboration. Their first was the experimental filmmaking milestone, Meshes of the Afternoon.
That's the kind of range you find in the Prelinger collection and (therefore) the Linger app. This is a special corner of the Internet: a series of unintentional selfies of times and places that no longer exist.
In the middle of last night's nearly unbelievable turn of events, for a few hours, hundreds of thousands of people received a message about the identity of the alleged Boston Marathon bombers that was painfully false. Word got out that the Boston Police Department scanner had declared the names of the two suspects.
But the names that went out over first social networks and then news blogs and websites were not Tamerlan and Dzhokhar Tsarnaev, which the Federal Bureau of Investigation released early this morning. Instead, two other people wholly unconnected to the case became, for a while, two of America's most notorious alleged criminals.
This is the story, as best as I can puzzle it out, about how such bad information about this case became widely shared and accepted within the space of a couple of hours before NBC's Pete Williams' sources began telling the real story about the alleged bombers' identities.
The story begins with speculation on Twitter and Reddit that a missing Brown student, Sunil Tripathi, was one of the bombers. One person who went to high school with him thought she recognized him in the surveillance photographs. People compared photos they could find of him to the surveillance photos released by the FBI. It was a leading theory on the subreddit devoted to investigating the bombing that Tripathi was one of the terrorists responsible for the crime.
Meanwhile, at 2:14am Eastern, an official on the police scanner said, "Last name: Mulugeta, M-U-L-U-G-E-T-A, M as in Mike, Mulugeta." And thus was born the newest suspect in the case: Mike Mulugeta. It doesn't appear that Mulugeta, whoever he or she is, has a first name of Mike. And yet that name, "Mike Mulugeta," was about to become notorious.
But not at first.
A single tweet references Mulugeta at the time his name was said on the scanner. A Twitter user named Carcel Mousineau simply said, "Just read the name Mike Mulugeta on the scanner." It was retweeted exactly once. In the unofficial transcript of the scanner on Reddit, at least as it stands now, the reading of the name was recorded simply: "Police listed a name, unclear if related."
The next step in this information flow is the trickiest one. Here's what I know. At 2:42am, Greg Hughes, who had been following the Tripathi speculation, tweeted, "This is the Internet's test of 'be right, not first' with the reporting of this story. So far, people are doing a great job. #Watertown" Then, at 2:43am, he tweeted, "BPD has identified the names: Suspect 1: Mike Mulugeta. Suspect 2: Sunil Tripathi."
The only problem is that there is no mention of Sunil Tripathi in the audio preceding Hughes' tweet. I've listened to it a dozen times and there's nothing there even remotely resembling Tripathi's name. I've embedded the audio from 2:35 to 2:45 am for your own inspection. Multiple groups of people have been crowdsourcing logs of the police scanner chatter and none of them have found a reference to Tripathi, either. It's just not there.
Could some people have heard the name, but somehow that did not make it into the canonical recording at Broadcastify? I don't think one can rule anything out with this story, but it seems, at least, unlikely. (No other recordings have turned up from this time period in which Tripathi's name is mentioned.)
Yet the information was spreading like crazy. Seven minutes after Hughes' tweet, Kevin Michael (@KallMeG), a cameraman for the Hartford, Connecticut CBS affiliate, tweeted, "BPD scanner has identified the names : Suspect 1: Mike Mulugeta Suspect 2: Sunil Tripathi. #Boston #MIT." More media people started to pick things up around then, BuzzFeed's Andrew Kaczynski most quickly. His original tweet has since been deleted but retweets of it began before midnight and reached far and wide. Other media people including Digg's Ross Newman, Politico's Dylan Byers, and Newsweek's Brian Ries also tweeted about the scanner ID as 3am approached. Then, at exactly 3:00 Eastern*, @YourAnonNews, Anonymous' main Twitter account tweeted, "Police on scanner identify the names of #BostonMarathon suspects in gunfight, Suspect 1: Mike Mulugeta. Suspect 2: Sunil Tripathi."
The informational cascade was fully on. @YourAnonNews' tweet was retweeted more than 3,000 times. We don't know how far Hughes's, Kaczynski's, or Michael's tweets went because they've been deleted. Hundreds of references to their tweets remain on Twitter.
By this time, there was a full-on frenzy as thousands upon thousands of tweets poured out, many celebrating new media's victory in trouncing old media. It was all so shockingly new and the pitch was so high and it was so late at night on one of the craziest days in memory. That Redditors might have identified the bomber hours before anyone but law enforcement seemed like amazing redemption for people who'd supported Reddit's crowdsourcing efforts.
Hughes himself, the primary source of the information on Twitter, tweeted, "If Sunil Tripathi did indeed commit this #BostonBombing, Reddit has scored a significant, game-changing victory." And then later, he continued, "Journalism students take note: tonight, the best reporting was crowdsourced, digital and done by bystanders. #Watertown."
Within a few hours, however, NBC's Williams had confirmed with his sources that two Chechen brothers were the primary suspects in the case. Their names and stories came out quickly. This horrible misidentification ended mercifully quickly. Apologies were made.
In the aftermath, I kept coming back to the moment when the fevered detective work of a subreddit broke out into a national story within minutes. Where had that authority come from? How had so many people bought in so fast?
The key moment is clearly at 2:43am when Hughes tweeted that the police scanner had mentioned these two names as suspects.
Never mind that even if the scanner chatter had mentioned Tripathi and Mulugeta, that would not have been enough to call them suspects. The supposed presence of these names on the lips of Boston police was convincing evidence that something was going on and that they were somehow linked to the crime.
Hughes, for his part, maintained (a bit cryptically) that he got the information when, "It was posted on the scanner and was transcribed on Reddit." I've reached out to him for comment, but haven't heard back. I also reached out to many of the other early tweeters of the scanner misinformation to ask if any heard Tripathi's name with their own ears. A few have maintained that they did. Others say that they listened to the feed for the entire time and never heard it, or were away from the feed during the time when the tweets broke out. As I said earlier, at least two group attempts to transcribe the available feeds did not find Tripathi's name, according to text that they sent me.
A few things are for sure: the scanner chatter never mentioned the two false suspects together. The scanner chatter never mentioned them as suspects, either. The scanner chatter recordings contain no record of any mention. And no one has been able to produce any recording of the scanner mentioning Tripathi.
This presents us with a strange mystery that I wish I could fully solve, but I can't.
Perhaps this is some kind of hoax perpetrated by some unknown group.
Or maybe people heard Tripathi's name, even though police never said it. Many of the people who thought they heard Tripathi's name already knew about the Reddit-centered suspicions about the student. Police had also said another name earlier in the evening and spelled it out. Perhaps they were primed to hear the name and among the static and unreliable connections to these scanners, they heard what they wanted to hear.
Maybe that's what I want to believe. Because otherwise, I just don't understand what happened last night. A piece of evidence that fit a narrative some people really wanted to believe was conjured into existence and there was no stopping its spread.
No one gets off easy here. This isn't a new media versus old media story. All kinds of people participated in last night's mistake. All I can say is thank you to NBC's Williams and the case's real investigators for coming forward so quickly with the information that cleared the false suspects' names.
* In the original version of this story, I mixed in one Pacific time with the Eastern ones. I apologize for the confusion.
Back in November of 2011, Errol Morris made a short documentary for The New York Times that is a profound meditation on the nature of evidence and the limits of our potential to understand the world from representations of it.
I watched it when it first came out and thought it was a masterpiece. Now, as hunting through photographs and videos for clues about the bombing has become a widespread phenomenon on Internet forums like Reddit and 4chan, not to mention weirder places like Infowars, I find myself returning again and again to this documentary.
In it, Morris interviews Josiah "Tink" Thompson, who wrote Six Seconds in Dallas, the book about the Zapruder film, a key piece of evidence in the Kennedy assassination. Thompson tells the story of The Umbrella Man, a bystander at just the location where the bullets started to hit the Kennedy motorcade. I've transcribed the entire film here for ease of skimming, but it's best to watch it at The New York Times' website. (It can't be embedded.)
In December 1967, John Updike was writing Talk of the Town for The New Yorker and he spent most of that Talk of the Town column talking about The Umbrella Man. He said that his learning of the existence of the umbrella man made him speculate that in historical research there may be a dimension similar to the quantum dimension in physical reality. If you put any event under a microscope, you will find a whole dimension of completely weird, incredible things going on. It's as if there is the macro-level of historical research, where things obey natural laws and the usual things happen and the unusual things don't happen. And then there is this other level where everything is really weird.
On November 22, it rained the night before, but everything cleared by about 9 or 9:30 in the morning, so if you were looking at various photographs of the motorcade route and the crowds gathered there, you will have noticed nobody is wearing a raincoat. Nobody has an open umbrella. Why? Because it is a beautiful day.
EM: It is a beautiful day in the neighborhood.
It's a beautiful day in the neighborhood. And then I noticed, in all of Dallas, there appears to be exactly one person standing under an open black umbrella. And that person is standing where the shots began to rain into the limousine. Let us call him The Umbrella Man.
EM: Did you name the Umbrella Man?
Yes. You can see him in certain frames from the Zapruder film standing right there by the Stemmons Freeway sign. There are other still photographs taken from other locations in Dealey Plaza, which show the whole man standing under an open black umbrella. The only person under any umbrella in all of Dallas standing right at the location where all the shots come into the limousine. Can anyone come up with a non-sinister explanation for this? Hmm?
So, I published this in Six Seconds but didn't speculate about what it meant or get into any of the conspiracy theories, because everybody else got into the conspiracy theories. There was one wing nut who published a book with a diagram of the umbrella. The umbrella was rigged so that there was an aiming device and a rocket tube that could fire a flechette directly into Kennedy's throat.
EM: The Umbrella Man is the real assassin.
That was the idea. That was the source of the hole in the throat, folks, right?
Well, I asked that The Umbrella Man come forward and explain this. So he did.
He came forward and he went to Washington with his umbrella, and he testified in 1978 before the House Select Committee on Assassinations. He explained then why he had opened the umbrella and was standing there that day. The open umbrella was a kind of protest, a visual protest. It wasn't a protest of any of John Kennedy's policies as president. It was a protest of the appeasement policies of Joseph P. Kennedy, John Kennedy's father, when he was ambassador to the Court of St. James in 1938-1939. It was a reference to Neville Chamberlain's umbrella.
I read that and I thought, this is just wacky enough, it has to be true. And I take it to be true.
What it means is that if you have any fact which you think is really sinister, really obviously a fact which can only point to some sinister underpinnings, hey, forget it, man, because you can never on your own think up all the non-sinister, perfectly valid explanations for that fact.
A cautionary tale.
The Kepler Space Telescope has been in orbit looking for planets around other stars since 2009, and it's started to find some startlingly interesting solar systems out there.
Today, the Kepler team announced the discovery of the Kepler-62 system, a group of five planets circling a red star, two of which may be capable of supporting life. That doubles the number of Earth-like planets in the habitable zone that Kepler has confirmed in the cosmos. And they're the smallest such planets, and therefore the closest to Earth's size, that astronomers have detected. The system is 1,200 light-years away.
This is remarkably exciting. Not only do we know about two more Earth-like planets out there, but they're in the same solar system! That sent at least one scientist into the kind of reverie that I've been having since I heard the news.
"Imagine looking through a telescope to see another world with life just a few million miles from your own, or having the capability to travel between them on regular basis," Kepler team member Dimitar Sasselov of Harvard told New Scientist. "I can't think of a more powerful motivation to become a space-faring society."
While scientists have found that our galaxy is teeming with planets, it takes longer to detect planets that take a long time to orbit their suns. That's because Kepler detects planets by the slight dimming of starlight when they pass in front of their stars, and astronomers want to see that dip repeat before calling it a planet. If a planet takes a couple hundred Earth days to go around its sun, scientists need several years to gather several transits, as these passes are known.
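To make that arithmetic concrete, here's a minimal back-of-the-envelope sketch in Python (my illustration, not the Kepler team's actual detection pipeline) of how the required observing time scales with a planet's orbital period when you want to see the transit repeat a few times:

```python
# Rough scaling: a planet transits once per orbit, so catching
# `transits` dips takes roughly `transits` orbital periods of
# continuous monitoring. Illustrative only -- not Kepler's pipeline.

def observing_time_years(orbital_period_days: float, transits: int = 3) -> float:
    return orbital_period_days * transits / 365.25

# A hot planet on a 4-day orbit repeats three times in under two weeks:
print(observing_time_years(4))    # ~0.03 years
# An Earth-like orbit of ~270 days needs years of staring at one star:
print(observing_time_years(270))  # ~2.2 years
```

The numbers here are hypothetical, but the scaling is the point: the longer the orbit, the longer the campaign, which is why Earth-like worlds took years of Kepler data to confirm.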
NASA's Bill Borucki, Kepler's principal investigator and a tireless proponent of the mission for years, was understandably excited about the discoveries.
"The detection and confirmation of planets is an enormously collaborative effort of talent and resources, and requires expertise from across the scientific community to produce these tremendous results," Borucki said in a NASA release. "Kepler has brought a resurgence of astronomical discoveries and we are making excellent progress toward determining if planets like ours are the exception or the rule."
The search for planets like our own is one of science's most exciting frontiers, and after years of waiting for the discovery of Earth-like planets, we're finally getting them. The Kepler-62 findings were published in the journal Science. It's also worth noting that Borucki's team announced another planetary system, surrounding a star like our own, that harbors one Earth-like planet. It was a big day for those awaiting news of other planets capable of supporting life.