In his just-released book The Last Train From Hiroshima, Charles Pellegrino quotes one of the survivors of the Hiroshima and Nagasaki atomic bomb blasts as saying that those who survived were, in general, those who looked after their own safety, instead of reaching out to help others. "Those of us who stayed where we were ... who took refuge in the hills behind the hospital when the fires began to spread and close in, happened to escape alive. In short, those who survived the bomb were ... in a greater or lesser degree selfish, self-centered--guided by instinct and not by civilization. And we know it, we who have survived."
But is survival really selfish and uncivilized? Or is it smart? And is going in to rescue others always heroic? Or is it sometimes just stupid? It's a complex question, because there are so many factors involved, and every survival situation is different.
Self-preservation is supposedly an instinct. So one would think that in life-and-death situations, we'd all be very focused on whatever was necessary to survive. But that's not always true. In July 2007, I was having a drink with a friend in Grand Central Station when an underground steam pipe exploded just outside. From where we sat, we heard a dull "boom!" and then suddenly, people were running, streaming out of the tunnels and out the doors.
My friend and I walked quickly and calmly outside, but to get any further, we had to push our way through a crowd of people who were staring, transfixed, at the column of smoke rising from the front of the station. Some people were crying, others were screaming, others were on their cell phones...but the crowd, for the most part, was not doing the one thing that would increase everyone's chances of survival, if in fact a terrorist bomb with god knows what inside it had just gone off--namely, moving away from the area.
We may have an instinct for survival, but it clearly doesn't always kick in the way it should. A guy who provides survival training for pilots told me once that the number one determining factor for survival is simply whether people hold it together in a crisis or fall apart. And, he said, it's impossible to predict ahead of time who's going to hold it together, and who's going to fall apart.
So what is the responsibility of those who hold it together? I remember reading the account of one woman who was in an airliner that crashed on landing. People were frozen or screaming, but nobody was moving toward the emergency exits, even as smoke began to fill the cabin. After realizing that the people around her were too paralyzed to react, she took direct action, crawling over several rows of people to get to the exit. She got out of the plane and survived. Very few others in the plane, which was soon consumed by smoke and fire, did. And afterward, I remember she said she battled a lot of guilt for saving herself instead of trying to save the others.
Could she really have saved the others? Probably not, and certainly not from the back of the plane. Just like the Hiroshima survivors, if she'd tried, she probably would have perished with them. So why do survivors berate themselves for not adding to the loss by attempting the impossible? Perhaps it's because we get very mixed messages about survival ethics.
On the one hand, we're told to put our own oxygen masks on first, and not to jump in the water with a drowning victim. But then the people who ignore those edicts and survive to tell the tale are lauded as heroes. And people who do the "smart" thing are sometimes criticized quite heavily after the fact.
In a famous mountain-climbing accident chronicled in the book and documentary Touching the Void, climber Simon Yates was attempting to rope his already-injured friend Joe Simpson down a mountain in bad weather when the belay went awry. Simpson ended up hanging off a cliff, unable to climb up, and Yates, unable to lift him and losing his own grip on the mountain, ended up cutting the rope to save himself. Miraculously, Simpson survived the 100-foot fall and eventually made his way down the mountain. But Yates was criticized by some for his survival decision, even though the alternative would almost certainly have led to both of their deaths.
In Yates' case, he had time to think hard about the odds, and the possibilities he was facing, and to realize that he couldn't save anyone but himself. But what about people who have to make more instantaneous decisions? If, in fact, survivors are driven by "instinct not civilization," as the Hiroshima survivor put it, how do you explain all those who choose otherwise? Who would dive into icy waters or onto subway tracks or disobey orders to make repeat trips onto a minefield to bring wounded to safety? Are they more civilized than the rest of us? More brave? More noble?
It sounds nice, but oddly enough, most of the people who perform such impulsive rescues say that they didn't really think before acting. Which means they weren't "choosing" civilization over instinct. If survival is an instinct, it seems to me that there must be something equally instinctive that drives us, sometimes, to run into danger instead of away from it.
Perhaps it comes down to the ancient "fight or flight" impulse. Animals confronted with danger will choose to attack it, or run from it, and it's hard to say which one they'll choose, or when. Or maybe humans are such social herd animals, dependent on the herd for survival, that we feel a pull toward others even as we feel a contrary pull toward our own preservation, and the two impulses battle it out within us ... leading to the mixed messages we send each other on which impulse to follow.
Some people hold it together in a crisis and some people fall apart. Some people might run away from danger one day, and toward it the next. We pick up a thousand cues in an instant of crisis and respond in ways that even surprise ourselves, sometimes.
But while we laud those who sacrifice themselves in an attempt to save another, there is a fine line between brave and foolish. There can also be a fine line between smart and selfish. And as a friend who's served in the military for 27 years says, the truth is, sometimes there's no line at all between the two.
Narcissism, disagreeableness, grandiosity—a psychologist investigates how Trump’s extraordinary personality might shape his possible presidency.
In 2006, Donald Trump made plans to purchase the Menie Estate, near Aberdeen, Scotland, aiming to convert the dunes and grassland into a luxury golf resort. He and the estate’s owner, Tom Griffin, sat down to discuss the transaction at the Cock & Bull restaurant. Griffin recalls that Trump was a hard-nosed negotiator, reluctant to give in on even the tiniest details. But, as Michael D’Antonio writes in his recent biography of Trump, Never Enough, Griffin’s most vivid recollection of the evening pertains to the theatrics. It was as if the golden-haired guest sitting across the table were an actor playing a part on the London stage.
“It was Donald Trump playing Donald Trump,” Griffin observed. There was something unreal about it.
Beginning in July of this year, most everywhere we look, there will be a giant number on our food. The change will affect hundreds of thousands of edible products, and, so, hundreds of millions of people. It will affect the way we think about food for decades. (This update is the first in more than 20 years—so long ago that the FDA earnestly describes its current label design as “iconic.”)
Current nutrition labels, legally required on all packaged foods, are to be replaced with the explicit purpose of improving people’s health. As Michelle Obama said at the unveiling of the new labels on Friday, “Very soon, you will no longer need a microscope, a calculator, or a degree in nutrition to figure out whether the food you’re buying is actually good for our kids.”
Petty political fights distract from the Vermont senator’s goal of a long-lasting movement.
Bernie Sanders’s beliefs have been obvious from the start. He thinks wealthy elites exert too much influence over American politics. He wants the U.S. government to lessen income inequality. He believes climate change is a pressing threat to the world. The clarity and overarching ambition of his agenda have been central to his appeal and expectations-defying political success so far.
If Sanders wants his political revolution to last, he will need to win widespread support for his ideas well into the future. Yet as the primary election draws to a close, the campaign has increasingly made arguments that may undercut the long-term viability of the movement that has coalesced around the Vermont senator.
Recent polls show increasing support for the former governor, who’s hoping to win the Libertarian Party’s nomination this weekend.
If Gary Johnson wants to make it onto a primetime presidential-debate stage as the Libertarian Party’s nominee, he needs to qualify by polling above 15 percent. If he wants to be the nominee, he needs a strong showing at the party’s convention this weekend. And if he wants a strong showing at the convention, he needs to demonstrate to delegates that he’s their party’s ideal standard-bearer—a candidate who can be even a little competitive in a three-way matchup with Donald Trump and Hillary Clinton. Johnson just got good news: A poll released Tuesday morning shows the candidate with 10 percent of the national vote.
The Morning Consult survey puts Clinton at 38 percent, Trump at 35 percent, and Johnson, the two-term former New Mexico governor who also ran for president in 2012, trailing with 10 percent. For any other candidate, that low number would be a sign that the end is near. But not for Johnson, or other third-party candidates hoping to make it big in an election year when many voters will likely hold their noses as they cast their ballots. The 10-percent figure is close to a personal best for Johnson as a presidential candidate; poll analysts note that it is roughly twice as high as Johnson’s figures from the last cycle.
For centuries, philosophers and theologians have almost unanimously held that civilization as we know it depends on a widespread belief in free will—and that losing this belief could be calamitous. Our codes of ethics, for example, assume that we can freely choose between right and wrong. In the Christian tradition, this is known as “moral liberty”—the capacity to discern and pursue the good, instead of merely being compelled by appetites and desires. The great Enlightenment philosopher Immanuel Kant reaffirmed this link between freedom and goodness. If we are not free to choose, he argued, then it would make no sense to say we ought to choose the path of righteousness.
Today, the assumption of free will runs through every aspect of American politics, from welfare provision to criminal law. It permeates the popular culture and underpins the American dream—the belief that anyone can make something of themselves no matter what their start in life. As Barack Obama wrote in The Audacity of Hope, American “values are rooted in a basic optimism about life and a faith in free will.”
The author Moira Weigel argues that the various courtship rituals of the past hundred-odd years have reflected the labor-market conditions of their day.
Love, it turns out, has always been a lot of work.
While every generation will lament anew the fact that finding love is hard, history seems to indicate that this particular social ritual never gets any easier or less exciting. In Labor of Love, a new book documenting the history of dating in America, Moira Weigel, a Ph.D. candidate in comparative literature at Yale University, confirms this lament: Since dating was “invented,” it has always been an activity that required a lot of effort.
As part of her research, Weigel read dating-advice books from the 1800s and hundreds of articles on dating from teen and women’s magazines over the years, and she found two common themes: First, there is usually an older part of the population that perceives dating to be “dying,” or, at least, as not being done “appropriately.” Second, Weigel found that the way people date has almost always been tied to the market forces of their era.
It’s not easy fitting 1.2 million annual visitors onto an island of 330,000 residents.
Iceland may be beautiful, but it’s dangerously close to full. This is the message currently filtering out from the North Atlantic island as it struggles to absorb unprecedented numbers of visitors. Last year, the nation hosted 1.26 million tourists, a staggering number for a chilly island whose population barely scrapes past 330,000 citizens.
Those numbers are powered partly by a “Game of Thrones Effect” that has seen fans of the TV series flock to its shooting locations. The 2010 eruption of the Eyjafjallajökull volcano, which has since become a tourist attraction, also helped to push up its profile as a vacation spot—perversely so, given that the eruption initially led to 107,000 flights across Europe being canceled. Given the rocky waters the country has been sailing through since the 2008 financial crisis, the revenue brought in by this spike in tourism is no doubt welcome. But the sheer volume of visitors to what was until recent decades a remote part of the world is still causing major stress. So how can Iceland keep welcoming people while making sure it isn’t trampled underfoot?
When new countries rise to power, the transition can end badly, often in war. Harvard’s Graham Allison has argued in The Atlantic that “judging by the historical record, war is more likely than not” between the United States, the world’s current reigning superpower, and China, a rising military and economic force. There is considerable debate on this point, but American pundits and presidential candidates often talk as if China were already an American adversary; Donald Trump has warned, for example, that China will “take us down.” Yet few in the United States seem worried about Asia’s other rising giant, India.
To the contrary, there’s a temptation to support India, a like-minded democracy, as a counterweight against the growing power of authoritarian China. But if American leaders feel confident India can accumulate power without becoming an antagonist, can they find a way to make the same true for China?
A continuation of Valve’s acclaimed sci-fi series has been promised for 10 years, but seems no closer to fruition.
Ten years ago today, the video-game company Valve announced that Half-Life 2: Episode Three, the newest and much-anticipated chapter in its acclaimed sci-fi shooter series, would be out by the end of 2007. This was hardly surprising news: Valve had already released one episodic sequel to its smash hit Half-Life 2, and the second was due out soon. Still, news of Episode Three as “the last in a trilogy” was exciting to fans. Ten years later, they’re still waiting—and the new edition of Half-Life has gone from an eagerly awaited work to gaming history’s most famous piece of “vaporware”—a product announced to the public that the developer has no plans of actually making or releasing.
Since that announcement, Valve has released a dozen games, including the acclaimed Portal and Portal 2 and multiplayer smash hits like Left 4 Dead and Team Fortress 2. But Half-Life 2 sequels ended with Episode Two, and over the years, Valve’s party line on a new installment went from a firm commitment to vague promises to tight-lipped refusals to say anything at all. The longer things go on, the more impossible everyone’s expectations become—if a new Half-Life were ever released, the hype would be unimaginably hard to match, and yet Valve’s initial promise has only added to the franchise’s mystique.