Alexis Madrigal is a senior editor at The Atlantic, where he oversees the Technology channel. He's the author of Powering the Dream: The History and Promise of Green Technology.
The New York Observer calls Madrigal "for all intents and purposes, the perfect modern reporter." He co-founded Longshot magazine, a high-speed media experiment that garnered attention from The New York Times, The Wall Street Journal, and the BBC. While at Wired.com, he built Wired Science into one of the most popular blogs in the world. The site was nominated for best magazine blog by the MPA and best science Web site in the 2009 Webby Awards. He also co-founded Haiti ReWired, a groundbreaking community dedicated to the discussion of technology, infrastructure, and the future of Haiti.
He's spoken at Stanford, CalTech, Berkeley, SXSW, E3, and the National Renewable Energy Laboratory, and his writing was anthologized in Best Technology Writing 2010 (Yale University Press).
Madrigal is a visiting scholar at the University of California at Berkeley's Office for the History of Science and Technology. Born in Mexico City, he grew up in the exurbs north of Portland, Oregon, and now lives in Oakland.
If you want to stop social networking services from exploiting your likeness for advertising, you've got to start paying up.
Some or all of the Service may be supported by advertising revenue. To help us deliver interesting paid or sponsored content or promotions, you agree that a business or other entity may pay us to display your username, likeness, photos (along with any associated metadata), and/or actions you take, in connection with paid or sponsored content or promotions, without any compensation to you.
Note the key parenthetical -- "(along with any associated metadata)" -- which you could read as "location data." In essence, if you go to the Palms in Las Vegas and snap a pic...
Not that any of this is all that surprising. It's a free service that's been focused on building user engagement, et cetera, in hopes of selling that engagement to advertisers.
Which reminds me of this wonderful mini-rant from Pinboard's Maciej Ceglowski, who identifies the key problem:
To avoid this problem, avoid mom-and-pop projects that don't take your money! You might call this the anti-free-software movement.
If every additional user is putting money in the developers' pockets, then you're less likely to see the site disappear overnight. If every new user is costing the developers money, and the site is really taking off, then get ready to read about those synergies.
To illustrate, I have prepared this handy chart:
            Free                  Paid
Stagnant    losing money          making money
Growing     losing more money     making more money
Exploding   losing lots of money  making lots of money
Under these conditions, companies have to sell themselves because they do not have a sustainable business. And when they're sold, they either A) get shut down or B) become part of an advertising machine, like Facebook's.
Truly, the only way to get around the privacy problems inherent in advertising-supported social networks is to pay for services that we value. It's amazing what power we gain in becoming paying customers instead of the product being sold.
Here's an alternative version of what Instagram could have done before Facebook purchased them. Instagram has, what, 100 million users? If they got $5 a month from 20 million of those users, they'd be looking at $300 million in quarterly revenue. That's a nice chunk of change when you have a baker's dozen employees. You'd think those guys could split more than a billion dollars a year and call it good. Or hell, make the user numbers an order of magnitude smaller: 2 million paying out of 10 million users. That's still $30 million a quarter for 13 guys.
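The back-of-the-envelope numbers above check out. Here's the arithmetic made explicit; the user counts and the $5-a-month price are the post's hypotheticals, not real Instagram figures:

```python
# Subscription-revenue arithmetic for the hypothetical paid Instagram.
# User counts and the $5/month price come from the post's thought
# experiment, not from any real Instagram data.

def quarterly_revenue(paying_users, monthly_price=5):
    """Flat monthly subscription revenue, summed over a 3-month quarter."""
    return paying_users * monthly_price * 3

# 20 million paying users out of ~100 million total:
print(quarterly_revenue(20_000_000))  # 300000000 -> $300M per quarter

# An order of magnitude smaller, 2 million paying users:
print(quarterly_revenue(2_000_000))   # 30000000 -> $30M per quarter
```

Note that $300 million a quarter is $1.2 billion a year, which is where the "more than a billion dollars" line comes from.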
Prior to President Johnson's decision, instructions for the emergency use of nuclear weapons that both he and his predecessors had previously approved stipulated a full-scale nuclear counter-attack even if the initial strike were conventional, or the result of an accident, and both Communist giants would be targeted regardless of whether either of them had launched the first strike.
Which just so happens to be a key plank in most greens' vision of the energy future.
File this under unlikely foes: James Hughes, CEO of First Solar, the largest American photovoltaics maker, doesn't buy a big part of most green advocates' vision for the future. In a deep (and very wonky) interview with Australia's Renew Economy, Hughes said that he doesn't think rooftop, distributed solar will disrupt our current centralized system of electricity production and distribution.
I am not a huge believer ... in the idea that the centralized model of energy distribution is outdated and a more distributed model is what makes sense going forward. Some degree of distributed generation does make sense. I believe community solar has a bright future. I believe off-grid has a bright future.
Taking it to the residential level, the difficulty is that storage is very expensive, and the difficulty is that unless you disconnect from the grid and use storage, then there is a huge subsidy inherent in a metering type of model. You also ignore the patterns of industrial use and the synergies between residential usage patterns and industrial usage patterns, and you lose that synergy when you go to a highly distributed model, and I simply don't believe that the synergies of generation at a household level overcome the value of the generation on a centralized basis.
There is another reason, though, that he mentioned elsewhere in the interview. First Solar is having a lot more success selling into the more demanding utility market than residential and commercial installations.
[W]e believe we have a competitive advantage from an engineering and power production standpoint. We deliver a product that is well engineered and can deliver a highly predictable result for the customer. The rooftop market, the installations are too small and there are too many variables to do a system level power prediction with that degree of precision. In addition, you don't have meters and diagnostics to measure the output, so the quality measure for the rooftop panel is what does it flash test at, and the end user doesn't really know if they got what they paid for - you have different levels of soiling, of shading, of insolation, uncertainty over ambient temperature, variability in installation circumstances.
I would rather sail into a market where there's a much stricter quality standard, as opposed to one that is more commoditised. And there are some major structural issues in most of the regulatory systems around the world with respect to rooftop installations. They are being facilitated with either net metering or feed-in tariff programs. Neither of those, long term, is sustainable. You have to fundamentally restructure the regulatory system if you really want to accommodate rooftop at any sort of significant penetration level.
I don't have too much commentary to add here other than to say: the politics of the major renewable energy players consistently surprise me.
* Thanks to Lee Kasten for pointing out that I needed to clarify my language here on precisely which industry I was talking about.
You never imagine it happens on a sunny day.
In this footage from Homs, Syria, a man trains a camera on a government jet flying overhead as it drops a bomb on his town no more than a mile from where he's standing.
This kind of video was not available in previous conflicts in close to real-time. I'm not sure that its availability is changing the nature of warfare, but it sure transforms the empathic experience of thinking about what war is like.
And it's terrifying.
Via Philip Bump
Hallelujah! Google just sent word that its hotly anticipated maps app for iOS is now available in the iTunes store. Oh, and it's got voice-guided turn-by-turn directions, among other features.
We're looking back at the photos that defined the sociotechnical changes of the year.
For the last five years, the American drone fleet has been growing. The military now has more than 7,500 unmanned aerial vehicles, including everything from tiny, hand-launched Ravens to large, armed Predators. Then there are all the CIA and Homeland Security drones.
But in 2012, we began to see what should be obvious: the drone business will be global. While drone strikes are an American strategy now, these machines are relatively cheap and will only get more so. As we've said before, everyone who wants a drone will have one of some kind.
More specifically, drones are not like nuclear weapons, where a few countries control the resources to create them. No, the technologies for dronemaking will be widely distributed. Israel, a long-time manufacturer of UAVs, has beefed up its exports. Azerbaijan, for example, cut a $1.4 billion arms deal with Israel in February; dozens of drones were included in the package. Expect more and more alliances like this as these robots spread out across the globe.
Could human and machine forecasters work together to increase the intelligence agencies' foresight?
We would like to know what the future is going to be like, so we can prepare for it. I'm not talking about building a time machine to secure the winning Powerball number ahead of time, but rather creating more accurate forecasts about what is likely to happen. Supposedly, this is what pundits and analysts do. They're supposed to be good at commenting on whether Greece will leave the Eurozone by 2014 or whether North Korea will fire missiles during the year or whether Barack Obama will win reelection.
A body of research conducted and synthesized by the University of Pennsylvania's Philip Tetlock, however, finds that people, not just pundits but definitely pundits, are not very good at predicting future events. The book he wrote on the topic, Expert Political Judgment: How Good Is It? How Can We Know?, is a touchstone for all the work that people like Nate Silver and Princeton's Sam Wang did tracking the last election.
But aside from the electorate, who else might benefit from enhanced foresight? Perhaps the people tasked with gathering information about threats in the world.
You probably have never heard of IARPA, but it's the wild R&D wing of our nation's intelligence services. Much like the Defense Advanced Research Projects Agency, which looks into the future of warfare for the Department of Defense, the Intelligence Advanced Research Projects Activity looks at the future of analyzing information, spying, surveillance, and the like for the CIA, FBI, and NSA.
We wrote in-depth about a project they're running to better understand metaphors (yes, metaphors), and, now, one of their projects is to apply Tetlock's insights into expert judgment. In particular, while Tetlock found that most analysts were terrible, some were better than others, particularly those he called foxes, who were more circumspect in their pronouncements and less wedded to a hard-and-fast worldview. The work suggested that it might be possible to improve people's judgments about the future.
His work matched up perfectly with a call for proposals that IARPA put out two years ago for a new program called ACE, Aggregative Contingent Estimation. They wanted researchers to "develop and test tools to provide accurate, timely, and continuous probabilistic forecasts and early warning of global events, by aggregating the judgments of many widely dispersed analysts." Well, Tetlock thought, perhaps I can apply my research to this problem.
So, after his proposal was selected, IARPA paid for him and his team to recruit 3,000 volunteers, who each agreed to participate in forecasting tournaments that asked them to make specific, testable predictions about the future and then provided them feedback. They are competing against four other teams, also funded by IARPA, to see who can forecast best. Just within the year and a half that the study has been running, Tetlock found that people could get better, much better, at making predictions than he thought possible.
Tetlock discussed the work in an excellent interview with Edge.org last week. Here's how he described it:
Is world politics like a poker game? This is what, in a sense, we are exploring in the IARPA forecasting tournament. You can make a good case that history is different and it poses unique challenges. This is an empirical question of whether people can learn to become better at these types of tasks. We now have a significant amount of evidence on this, and the evidence is that people can learn to become better. It's a slow process. It requires a lot of hard work, but some of our forecasters have really risen to the challenge in a remarkable way and are generating forecasts that are far more accurate than I would have ever supposed possible from past research in this area.
But here's the really fascinating thing from a technological perspective. Tetlock's work has shown in the past that computers tend to be better forecasters than humans, with some key exceptions. But what about the combination of humans and machines? What if cyborg forecasting is the way to go? That's precisely what Tetlock suggests in the Edge interview, although he hedges it carefully (like the fox that he is).
We don't have geopolitical algorithms that we're comparing our forecasters to, but we're turning our forecasters into algorithms and those algorithms are outperforming the individual forecasters by substantial margins. There's another thing you can do though and it's more the wave of the future. And that is, you can go beyond human versus machine or human versus algorithm comparison or Kasparov versus Deep Blue (the famous chess competition) and ask, how well could Kasparov play chess if Deep Blue were advising him? What would the quality of chess be there? Would Kasparov and Deep Blue have an FIDE chess rating of 3,500 as opposed to Kasparov's rating of, say, 2,800 and the machine's rating of, say, 2,900? That is a new and interesting frontier for work and it's one we're experimenting with.
In our tournament, we've skimmed off the very best forecasters in the first year, the top two percent. We call them "super forecasters." They're working together in five teams of 12 each and they're doing very impressive work. We're experimentally manipulating their access to the algorithms as well. They get to see what the algorithms look like, as well as their own predictions. The question is: do they do better when they know what the algorithms are, or do they do worse?
There are different schools of thought in psychology about this and I have some very respected colleagues who disagree with me on it. My initial hunch was that they might be able to do better.
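The "forecasters into algorithms" idea is easy to sketch in miniature. The snippet below pools individual probability estimates by simple averaging and scores everyone with the Brier score (squared error; lower is better). It's a toy illustration of the aggregation principle, not the actual method Tetlock's team or IARPA uses:

```python
# Toy illustration of aggregating forecasters into an "algorithm":
# average their probability estimates, then score with the Brier score.
# (Illustrative only -- not the actual ACE aggregation method.)

def brier(forecast, outcome):
    """Squared error between a probability forecast and a 0/1 outcome."""
    return (forecast - outcome) ** 2

def aggregate(forecasts):
    """Unweighted average of individual probabilities."""
    return sum(forecasts) / len(forecasts)

# Three forecasters on one yes/no question; the event occurred (outcome = 1).
forecasts = [0.9, 0.6, 0.75]
outcome = 1

individual = [brier(f, outcome) for f in forecasts]
pooled = brier(aggregate(forecasts), outcome)
mean_individual = sum(individual) / len(individual)

# The pooled forecast's Brier score never exceeds the forecasters' mean
# score (the loss is convex), which is one reason aggregates tend to win.
print(pooled <= mean_individual)  # True
```

The averaging step is where the design choices live: real aggregation schemes weight forecasters by track record and often push the pooled probability toward the extremes.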
It seems that IARPA might be happy with this work, but it remains to be seen whether it will actually get applied within the intelligence agencies. (Anyone seen Homeland? Saul is a total fox who threatens the hierarchy.)
Tetlock is convinced that his work has the potential to be dangerously destabilizing to bureaucracies of all types. Excitingly (at least to some of us), he sees these kinds of tools spreading throughout different kinds of organizations, which could have wide-ranging impacts on the existing hierarchies. From the nation's far-out spy researcher to your local middle manager, get ready for forecasting tournaments.
The long and the short of the story is that it's very hard for professionals and executives to maintain their status if they can't maintain a certain mystique about their judgment. If they lose that mystique about their judgment, that's profoundly threatening. My inner sociologist says to me that when a good idea comes up against entrenched interests, the good idea typically fails. But this is going to be a hard thing to suppress. Level playing field forecasting tournaments are going to spread. They're going to proliferate. They're fun. They're informative. They're useful in both the private and public sector. There's going to be a movement in that direction. How it all sorts out is interesting. To what extent is it going to destabilize the existing pundit hierarchy? To what extent is it going to destabilize who the big shots are within organizations?
Whole new categories of weird noise are being introduced into the news world as a result of Google's algorithm, whatever its virtues.
If something comes out of a computer on the basis of statistics, it must be objective, right? No bias is even possible, unlike the judgment of us flawed Homo sapiens!
But... that's not actually true. Over at Nieman Journalism Lab, Nick Diakopoulos has a great story about the ways that various algorithms introduce biases that are different from the human ones, but no less real. Looking at Google News, Circa, IBM Research, and a crop of other automated information tools, here's his thesis:
It can be easy to succumb to the fallacy that, because computer algorithms are systematic, they must somehow be more "objective." But it is in fact such systematic biases that are the most insidious since they often go unnoticed and unquestioned.
Even robots have biases.
His story is well worth reading for the ways in which it shows how many algorithms are now at play in the news ecosystem and the potential they have for bending the information people receive in one way or another.
What I want to discuss, though, is how the rather simple application of a series of rigid rules can introduce new and bad behaviors on the part of human actors who realize that they can exploit the system. Whole new categories of weird noise are being introduced into the news world as a result of Google's algorithm, whatever its virtues.
Because the rules are quite rigid, e.g. newer is *always* better, different organizations try to have the newest stories about a given popular event. So, in the lead-up to the early December snowstorm here in California, the Weather Channel's website published a great preview of the storm on November 29th or 30th. I read it on or about when it came out. *After* the storm on December 3rd, I went looking to see which of the predictions from the story had come true. I popped a few search terms into Google News and lo and behold, there was a December 3rd story from the Weather Channel. Excitedly, I clicked through the link and found ... the exact same preview with a timestamp that now read December 3, 2012, 9:08 AM.
Keep in mind that this now makes the story completely nonsensical. It is a preview of an event dated after that event has already passed. It's like a story dated November 7th about who might win the presidential election. A Christmas preview on December 29th.
In short, this is lunacy! At least to a human.
But to a machine, this looks like a "fresh" story with lots of keywords about the Shasta snowfall. The machine can't tell that the article is written in the future tense or that it is worse than useless now. This type of thing actively degrades the news ecosystem, and it's only happening because of the way that Google's algorithm works.
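You can see the mechanism in a toy model. Suppose the ranker hands out a freshness bonus that decays with the story's stamped age; this is my caricature of "newer is always better," not Google's published scoring, but it shows why re-stamping a stale preview works:

```python
# Toy model of timestamp gaming: a freshness bonus that decays with the
# story's stamped age. A caricature of "newer is always better," not
# Google News's actual scoring.
import datetime as dt

def freshness_score(published, now, half_life_hours=24.0):
    """Exponentially decaying bonus: halves every half_life_hours."""
    age_hours = (now - published).total_seconds() / 3600
    return 0.5 ** (age_hours / half_life_hours)

now = dt.datetime(2012, 12, 3, 12, 0)
honest_stamp = dt.datetime(2012, 11, 29, 9, 8)  # the preview's real date
gamed_stamp = dt.datetime(2012, 12, 3, 9, 8)    # same article, re-stamped

print(round(freshness_score(honest_stamp, now), 2))  # 0.06: buried
print(round(freshness_score(gamed_stamp, now), 2))   # 0.92: near the top
```

Same article, same keywords; the only thing that changed is the timestamp, and the ranker treats the story as roughly fifteen times fresher.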
Granted, this is the lawless variety of optimizing for Google News. But there are a lot of examples and techniques that have developed solely because of the way the algorithm works. If you really want to peer down the rabbit hole, take a look at the depth of the analysis in this series of posts on the "Top 10 Most Important Google News Ranking Factors." It was assembled by a team of people at some top publications, agencies, and SEO shops. Keep in mind that some of these optimizations benefit human beings. Punny headlines are slowly dying, and I'm OK with that. But other factors that Google is looking for -- like keyword density -- reward people who write the way that everyone else does, using the same words and using them frequently. Google also rewards specialists over generalists. If you (as author or site) publish a ton on one thing, you're more likely to move up the rankings than if you take a more horizontal view of a field (say, technology). And lastly, take a look at the Google News front page now. It's almost exclusively traditional media outlets. It's shocking how little I see from media entities created after 2004. There's a deep apolitical conservatism to however Google's algorithm works.
My point in discussing these details at such length is to strengthen Diakopoulos' point about the lack of "objectivity" in algorithmic operations. Even if one could design some perfectly balanced system that had no observable bias when it began to run, the people who are producing the inputs to the system and dependent on the outputs will begin to adjust their behavior. They'll change to make themselves more legible to the machine, and those that do so best will prosper.
Simply look at what happens with wire stories, say today's on Alan Alda. The local news sites have no disincentive to publish this work, regardless of whether their audience wants to read it. Some TV station in Charlotte might run a few dozen AP stories every few hours not because they improve the news system but because there is almost no cost to doing so, and it might get picked up by Google News and drive more traffic than a week's worth of regular content. Combined, all those individual decisions end up sending a signal to Google that a story is *really important* when there is no real signal; all we have is the aggregate hopes of news publishers for a low-probability traffic spike.
I don't think Google News has ever taken enough responsibility for the cybernetics of the system it created. What is important is not just how the software works, but the ways it structures humans' thoughts and actions in new ways. To my eye, it created feedback loops with mostly deleterious effects not because the algorithm itself was bad, but because the service did not take the human repercussions seriously. That's not what Krishna Bharat, the product's creator, set out to do. But it's happened.
"Weird to think an error in some data center can reach its quavering tentacle into your laptop and bring down one of your apps."
In today's episode of The Cloud's Unexpected Consequences, I've now heard from dozens and dozens of my Twitter followers who've experienced Chrome crashes in just the last half hour. In fact, this very blog you're reading has experienced four crashes across two correspondents. Chrome is normally stable (for us, at least). What's going on?
"Creativity is not a process, right?"
Bloomberg Businessweek has a loooong interview with Apple CEO Tim Cook. There's a lot of meat, too. The magazine's editor, Josh Tyrangiel, did the Q&A, and it has several interesting dimensions (including why Cook is talking to reporters at all).
What makes Apple special is that they create category-defining products. Even if their competitors have some key advantages (like Google's data or Amazon's online retail game), Apple's actual creative output is superior by most standards. At the very least, you have to say their execution of media and touchscreen devices has been visionary and tech path altering. And keep in mind, this is a monster company, a huge place. How can they keep cranking out the hits?
To that point, my favorite piece of the interview was when Cook took on how and why Apple comes up with consistently interesting, sometimes truly radical products.
Creativity is not a process, right? It's people who care enough to keep thinking about something until they find the simplest way to do it. They keep thinking about something until they find the best way to do it. It's caring enough to call the person who works over in this other area, because you think the two of you can do something fantastic that hasn't been thought of before. It's providing an environment where that feeds off each other and grows.
So just to be clear, I wouldn't call that a process. Creativity and innovation are something you can't flowchart out. Some things you can, and we do, and we're very disciplined in those areas. But creativity isn't one of those.
While Cook mostly defines his vision of creativity against prevailing models and hopes (i.e. that you could simply put in the right corporate structure and... INNOVATION!), he does provide a positive vision that's quite unusual. For Cook, creativity is about relentlessness. It is about people who "keep thinking" (a phrase he repeats) until they get it exactly right.
When people say (do they still say this? or is too obvious to bear witness to at this point?) that "Google doesn't get social," I think this is what they mean. Rather than finding a way to turn their most dedicated users into content creators for the larger masses of users, they just took their tools away, alienating a group that had *loved* their product. And for what? A G+ product with a huge nominal user base and a much, much, much smaller actual community. I guess the "higher ups" got their numbers.
At one point, [Google product manager Brian] Shih remembers, engineers were pulled off Reader to work on OpenSocial, a "half-baked" development platform that never amounted to much. "There was always a political fight internally on keeping people staffed on this little project," he recalled. Someone hung a sign in the Reader offices that said, DAYS SINCE LAST THREAT OF CANCELLATION. The number was almost always zero. At the same time, user growth -- while small next to Gmail's hundreds of millions -- more than doubled under Shih's tenure. But the "senior types," as [user experience designer Jenna] Bilotta remembers, "would look at absolute user numbers. They wouldn't look at market saturation. So Reader was constantly on the chopping block."
So when news spread internally of Reader's gelding, it was like Hemingway's line about going broke: "Two ways. Gradually, then suddenly." Shih found out in the spring that Reader's internal sharing functions -- the asymmetrical following model, endemic commenting and liking, and its advanced privacy settings -- would be superseded by the forthcoming Google+ model. Of course, he was forbidden from breathing a word to users. "If Google is planning on deprecating Reader, then its leaders are deliberately choosing to not defend decisions that fans or users would find indefensible," wrote [Google engineer Chris] Wetherell. Kevin Fox, the former designer, was so fearful that Reader would "fall by the wayside, a victim to fashion," that he offered to put aside his current projects and come back to Google for a few months. Shih left Google in July. Seeing the revamp for the first time, "most of us were prepared for a letdown," he blogged, "and boy, is it a disaster."
Garbage in, garbage out
So let us absorb the mass of unwanted shared personal information and images that wash over one, like some great viscous tide full of stuff one would rather not think about -- other people's need for Icelandic lumpfish caviar, their numb faces at the dentist, their waffles and sausage, their appointments with their therapists, their personal hygiene, their pimples and pets, their late babysitters, their grumpy starts to the day, their rude exchanges, their leaking roofs, their faith in homeopathy, their stressing out, and all the rest.
Now, I've been on Twitter for a long time, Facebook even longer, though in a more limited capacity. And I've never noticed these topics permeating my timeline. Let's just take a look. These are summaries of the first 20 tweets on my timeline:
1. Link to story on Wall Street job losses.
2. Link to story about startup funding.
3. Joke about Senate's Jim Demint.
4. Link to a robotic arm project created by teenagers.
5. Link to story about nuclear energy news in 2012.
6. Link to two podcasts about social science and free speech.
7. Joke about Harvard and the presidency (part of a longer conversation).
8. Link to a story about a new drilling permit in the Arctic.
9. Link to John McAfee blogging story.
10. Link to story about Jim Demint resigning.
11. Link to new Star Trek movie teaser.
12. Link to story about The Verge party.
13. Link to story poking fun at Financial Times for lame hip hop references.
14. Link to M83 music video.
15. Link to story about tax incentives in New York.
16. Link to a Montreal comedy festival.
17. Quote from a link proffered elsewhere in my timeline.
18. Analysis of recent election.
19. Difficult to parse joke.
20. Link to Polish children's books from the 1950s.
Now, you may or may not consider this an interesting set of information, but I do. And that's the point! You've got the big political news of the day (Demint), the big financial news (job cuts at Citi), a continuing big tech story (McAfee), a story about an area of interest (startup funding), a neat thing (robotic arm!), a whimsical thing (Polish children's books!), energy news, movie news, party news, new podcasts, and a music video from a band I like.
Sadly, though, there is nothing in this stream about waffles or the dentist, anyone's therapist, pimples, roofs, grumps, homeopathy, or any kind of Icelandic specialty food item.
My diagnosis is simple, Roger: your friends and associates are terrible and boring. Since you are a smart and interesting guy who would distill only the finest information from any social network, the problem must be the garbage going into your feed, which can only come out as garbage in your column. And that garbage is being created by the people you choose to follow and know.
So the manly, un-Generation Wuss thing to do would be to simply stop communicating with all of your friends. You can finally stop hearing about all their loathsome life activities like getting psychological help, petting their animals, and having a hard day sometimes. Maybe stop talking with people altogether. Without their insufferable problems like enjoying eating and having their teeth cleaned, you will be able to think in peace.
Perhaps you should get back to the classics of Western Civilization. I mean, does Job complain about his hard days? Does Ulysses go to the dentist? Does P.G. Wodehouse eat Icelandic lumpfish caviar? Who needs friends when you have these men of intellect and action?
Or, if you want (I'm serious here!), I'll provide you with some more detailed social media consulting, helping you create a presence that's actually useful. These tools are only as good as the network you create on them. And if you're being honest about what you see on Twitter and Facebook, you've built a terrible one.
A striking photo highlights a new facet of modern warfare.
These are five government troops in the Democratic Republic of Congo, three of whom, according to Reuters' caption, are recording video with their phones. They are in the town of Sake, near the town of Goma, which rebels captured last month.
I don't know what to say about this photograph aside from suggesting that an enterprising PhD student write a dissertation on "Cell Phones in War." How are fighting, killing, and controlling territory different when you can call your brother after battle, post a photo of your squadron on the march to Facebook, or play Angry Birds between skirmishes?
Philip Bump reminded me of this chilling CJ Chivers post from a few months ago in Syria. A soldier sits with a high-caliber machine gun, one hand on the trigger, the other on his phone. Chivers' description:
Machine gun in right hand. Cell phone in left. On duty on the gun-truck's machine gun, at 80 miles an hour into Aleppo, checking messages along the way.
Even as the war in Syria rages, large areas of the countryside have cellular phone coverage, and the fighters are constantly checking their phones. When they stop, many of them immediately look for ways to recharge their phone batteries. And, often as they move and enter an area with a strong signal, they commence texting back and forth.
Update: here's one monograph that touches on some of the military strategy concerns of ubiquitous cell usage, "YOUTUBE WAR: FIGHTING IN A WORLD OF CAMERAS IN EVERY CELL PHONE AND PHOTOSHOP ON EVERY COMPUTER."
The only way to even *know* what readers might like is to allow them to read and share on the open Internet.
Knowing what it's like inside a media company, let me state up front that there are epistemological problems in deciding from the outside why The Daily failed. I've yet to see an article about The Atlantic that understands how our site works under the hood, and why that has led to a relatively successful few years.
That said, I do have a few thoughts about The Daily that I'd like to offer alongside my colleague Derek Thompson's.
1. The Daily was built on the false premise of a "general reader." When I hear general reader, I think that a media executive is imagining himself and his friends (you know, normal guys) and intending to produce a bundle of content for that hyperspecific DC-to-Boston-went-to-a-good-college-polo-shirts-and-grilling demographic. As a result, anything that falls outside the boundaries of the interests of this presumed Joe Six Pack will be deemed too "cool, quirky, nerdy, obsessive or snarky." That's not to say that such a publication will never run such pieces. My old pal Ben Carlson wrote quite a few. (So did Zach Baron and Sarah Weinman.) But you're fighting the institutional gravity. And you'll have to build defenses into the story that are the writerly equivalent of "I don't mean to nerd out here, but..." Because otherwise the bros running the grill will throw you in the pool.
This is not to say that media properties cannot be built with the goal of reaching the mainstream, if by mainstream we mean very large audiences. This, very clearly, can be done. (See: HuffPo, Gawker, etc.) But! And this is a big but! These sites have been built up like sedimentary rock from a bunch of smaller microaudiences. Layers of audience stack on top of one another to reach high up the trafficometer. The various voices of their bloggers attract layers of readers. New York attracts a different layer of readers. Left-wingish attracts yet another. Their big investigative pieces add more. And their super niche pieces sometimes explode -- say, Matt Buchanan's tech truth bomb -- precisely because they originate inside a niche. It doesn't feel like your uncle from Evanston is telling you the latest thing about iPhones or queer dance or SEC football.
2. The Daily was built on the false premise that it could control the distribution outlet. Being in media is terrifying right now. In the old days, you wrote something and then a fleet of people printed it and handed it to X hundred thousand people so they would read it. Now the fleet is gone. You are alone out there in the ocean, and there's not much that anyone can do for any given story to make sure that people read it. Seriously, since the fall of '08 vintage Digg, there's not much anyone at the Internet's favorite websites can do aside from stick a story on the homepage, tweet/Facebook/tumble/Reddit/LinkedIn it and then pray. We do not control the distribution of our work. Period. It's horrible and bizarre and it is also the way that the media world works now. You can't push; the content has to pull. (Especially if we are talking numbers in the millions.)
3. Which brings us to the cardinal sin: The Daily was not tuned for sharing. Obviously: it was mostly locked up on the iPad behind a paywall. Less obviously: how would anyone build an audience growth strategy without relying on huge social hits and a distinctive voice? How many content sites or apps have you visited as a result of marketing? Then how many did you go back to? Without the actual mechanism of HUMANS DROPPING A PAPER AT YOUR DOOR, there's just no way to keep people coming back without A) a voice/POV/knowledge base that is impossible to find elsewhere and B) massive traction day after day after day after day after day after day after day in the social world, light and dark forms alike. I haven't seen a digital media play succeed in the last five years without both of these factors.
It's not a technical proposition to get people to share your content. Sure, there are things you can do that marginally improve your share rates by placing the buttons in the right places, etc. But the main determinant of social sharing is the quality, tone, and form of your stories. The only way to figure out what works -- because it's constantly evolving -- is to keep sending work you love out into the ecosystem and seeing what gets amplified. Then, you take that feedback, write another thing you love and send it back into the field. If a bunch of stuff sits behind a paywall, that iterative cycle gets broken. You don't know what's working.
Let me give you a really short example from our work. There's a classic longform convention in profiles of people. The writers tend to drop in to the story sitting with their subjects. They describe what they look like and provide some color about the situation: are they eating? how'd they get there? was the publicist a stickler? does the person appear to be on drugs? Sometimes we get a very short quote from the profilee that is indicative of the person's affect and intellect. Then, the story, by which I mean the action, really begins. In particular, the stakes are explained lower down in the story, several grafs in. These description-rich ledes come first because that is how it is done (and it can be artful as hell when done perfectly).
Well, we found this sort of thing bombs for us over and over. Maybe we don't profile the right people (disagree!). Maybe we're terrible at writing these sorts of ledes (perhaps!). Or maybe, just maybe, the form doesn't work very well to capture the attention of people who are clicking through from an email, IM, Facebook, Twitter, Tumblr, or Reddit. You just don't know what's happening for several hundred words in some cases. And before the writer answers why you're reading, that person is gone.
So, we switched it up. Rather than say, "Well, long, narrative pieces of writing suck for us," we changed the ordering of our narratives. We get the stakes up very, very high. I'm talking within the first 100 or 200 words, even if the story is two or three or four thousand words. And if we want to keep that narrative lede and we know we have nuggets down low, we drop in a tl;dr box to capture the kind of reader who wants to know what the point is exactly. (If you're a regular reader, you will start to notice this pattern.)
Now, this strategy might not work for other people. Hell, it might not keep working for us. But we have a high hit rate with the pieces that we put extra time and special effort into. And that's the feedback loop you want; that's the one that makes us happy. But the only way to even *know* what readers might like is to allow them to read and share those pieces on the open Internet.
The magnitude of public funds at stake adds some urgency to improving understanding of the extent and characteristics of knowledge spillovers from learning by doing. The main results here imply that policies that enhance demand are necessary to generate sufficient knowledge from experience. Other insights from this case--especially depreciation and diminishing returns--heighten the value of policy instruments with performance-oriented mechanisms and longevity. That experience-derived knowledge appears to be so ephemeral suggests that we should also consider explicit support for codification and transfer of what is learned.
What better way to respond to a faux-personal email than with a faux-love letter? It started on September 10, 2012, when "Barack" sent Dylan Hansen-Fliedner an email touting Obama for America's fundraising success.