Many times, at parties and in other conversations over the years, I
have vociferously defended fellow journalists against charges of bias
in their work. Particularly journalists working in the lowly field of
print journalism, as opposed to TV.
Not that everyone in the field is perfect, unbiased, or even a good reporter.
And not that I haven't ever encountered an editor who really, really
wanted a story to say "X" as opposed to "Y." I remember one editor who
complained that a story I'd done about NASA test pilots didn't make
them sound like the wild cowboys he imagined they were.
(Unfortunately--or fortunately--the truth about test pilots is, they're
not cowboys. They're precision engineers and very calculated
risk-mitigators, hitting test cards with calm, methodical accuracy. The
risk isn't in their attitude. It's in the inherent hazards of testing
new technology under real conditions for the first time.)
But within those caveats, I've always maintained that the majority of
professional print journalists, anyway, try very, very hard to get the
story right. But recently, I had an experience that gave me a new
perspective on the issue.
A few weeks ago, I
attended the public launch of a company's product that had, until that
point, been kept tightly under wraps. The product involved a
breakthrough approach and new technology that had the potential to have
a revolutionary impact on its industry, as well as on consumers
around the world. Unlike most of the journalists covering the event, I
was not an expert on that particular industry. It wasn't my normal
"beat." The reason I was there was because I'd been interviewing the
company's CEO over the previous several months for a book project. But
that also meant that while I wasn't an expert on the industry in
general, I was in the odd position of knowing more about the company's
"secret" product than any other journalist in the room.
It was an eye-opening experience. A lot of major news outlets and
publications were represented at the press conference following the
announcement. A few very general facts about the product had been
released, but the reporters had only been introduced to details about
it a half hour earlier. There was still a lot about how it worked, how
it differed from other emerging products, and why the company felt so
confident about its evolution and economic viability, that remained to be explained.
But the reporters' questions
weren't geared toward getting a better understanding of those points.
They were narrowly focused on one or two aspects of the story. And from
the questions that were being asked, I realized--because I had so much
more information on the subject--that the reporters were missing a
couple of really important pieces of understanding about the product
and its use. And as the event progressed, I also realized that the
questions that might have uncovered those pieces weren't being asked
because the reporters already had a story angle in their heads and were
focused only on getting the necessary data points to flesh out and back
up what they already thought was the story.
There is always a tension, as a journalist, between asking open-ended
questions that allow an interview subject to explain something and
pressing or challenging them on accuracy or details. But if you think
you already know the subject, or already have a story angle half-formed
in your head, it's easy to overlook the first part.
The journalists at the press conference didn't have a bias as the term is
normally used; that is, I didn't get the sense that they were
inherently for or against the company or its product. They just
appeared to think they knew the subject well enough, or had a set
enough idea in their heads as to what this kind of story was about,
that they pursued only the lines of questioning necessary to fill in
the blanks of that presumed story line. As a result, they left the
press conference with less knowledge and understanding than they
otherwise might have had. And while nobody could have said the
resulting stories were entirely wrong, they definitely suffered
from that lapse. Especially, as might be expected, when it came to the
predictions they made about the product's evolution or future.
In his new book, How We Decide,
Jonah Lehrer cites a research study done by U.C. Berkeley professor
Philip Tetlock. Tetlock questioned 284 people who made their living
"commenting or offering advice on political and economic trends,"
asking them to make predictions about future events. Over the course of
the study, Tetlock collected quantitative data on over 82,000
predictions, as well as information from follow-up interviews with the
subjects about the thought processes they'd used to come to those predictions.
His findings were surprising.
Most of Tetlock's questions about future events were put in the
form of specific, multiple-choice questions, with three possible
answers. But for all their expertise, the pundits' predictions turned
out to be correct less than 33% of the time. Which meant, as Lehrer
puts it, that a "dart-throwing chimp" would have had a higher rate of
success. Tetlock also found that the least accurate predictions were
made by the most famous experts in the group.
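Lehrer's "dart-throwing chimp" line is simple arithmetic: with three possible answers per question, blind guessing succeeds about a third of the time, so scoring below 33% means scoring below chance. A minimal simulation (my own illustrative sketch, using the figures the article cites rather than Tetlock's actual data) makes that baseline concrete:

```python
import random

# Illustrative sketch only: TRIALS and OPTIONS echo the article's
# description of the study; this is not Tetlock's data.
TRIALS = 82_000   # roughly the number of predictions Tetlock collected
OPTIONS = 3       # each question offered three possible answers

# A "dart-throwing chimp" picks one of the three answers at random;
# by symmetry we can treat option 0 as the correct one.
hits = sum(random.randrange(OPTIONS) == 0 for _ in range(TRIALS))

print(f"random-guess accuracy: {hits / TRIALS:.1%}")  # prints ~33.3%
# The pundits' recorded accuracy fell below this baseline, i.e. worse
# than chance.
```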
Why was that? According to Lehrer,
"The central error diagnosed by Tetlock was the sin of certainty,
which led the 'experts' to impose a top-down solution on their
decision-making processes ... When pundits were convinced that they
were right, they ignored any brain areas that implied they might be wrong."
Tetlock himself, Lehrer
says, concluded that "The dominant danger [for pundits] remains hubris,
the vice of closed-mindedness, of dismissing dissonant possibilities too quickly."
A friend of mine who's an editor at the New York Times
said those results don't surprise him at all. "If you watch a White
House press conference," he said, "you can tell who the new reporters
are. They're often the ones who ask the best questions." I must have
looked a little surprised. "Seriously," he said. "I actually think we
should rotate reporters' beats every two years, so nobody ever thinks
they're too much of an expert at anything."
It's an interesting idea. There's some advantage to having good background
in a subject, of course. For one thing, it takes a lot less time to
research and write a story if you at least know the general subject
matter and have tracked news developments in it over a period of time.
And while an expert can miss information because they assume they
already know what there is to know, a newcomer can miss information
from not knowing enough to know what there is to ask.
It's a tricky balance to try to strike--in part because assuming we know the
salient points of a topic or story isn't an obvious, conscious bias as
most people define or understand the term. Indeed, "practically all" of
the professionals in Tetlock's study claimed, and no doubt believed,
that they were dispassionately analyzing the evidence. But it's a
reminder that we all have, as Tetlock put it, the potential to become
"prisoners of our preconceptions." And that sometimes, even if we think
we know the story, it might be worth asking questions as if we don't.
Every now and then, we might hear or learn something that, as long as
we're open to hearing it, might change our minds about what the real story is.