Dwelling on our own suffering makes us blind to the pain of others
Family members of victims console each other as they gather to pay their respects at the reflecting pool at Ground Zero during the eighth anniversary commemoration ceremony / Reuters
On Sunday, New York will pause to remember and honor the victims who died in the attacks on the World Trade Center 10 years ago. Not as grandly as we did on the first anniversary of the attacks, of course. But that's as it should be. The wounds were fresh then, so the drama and emotion were both much higher. Nearly 3,000 people died in a single morning, and the images of people voluntarily jumping to their deaths are seared into our collective memory, a graphic reminder of just how horrific the attacks and their damage were. But the damage of 9/11 went beyond those actually killed. And the challenges facing the survivors are more complex.
Some of the people participating in the anniversary events in New York (and in others commemorating those killed on Flight 93 in Pennsylvania and at the Pentagon) will be literal survivors of those attacks. Others will be family members who were, by association, emotional victims, survivors, or both, depending on how you look at it. In truth, all Americans are peripheral survivors, in that we were all traumatized by the events of that day and our lives were changed by the fallout.
And yet, while all that is true, and the honoring and commemoration of our individual and collective loss is both legitimate and appropriate, we should still approach our identification as victims or survivors with a healthy dose of caution.
At the end of June, I attended an unusual summit conference sponsored by Google Ideas, the Council on Foreign Relations, and the Tribeca Film Festival. Titled the "Summit Against Violent Extremism," it brought together some 200 people who had been involved in, or had been affected by, violent extremism of one kind or another, from Islamic jihadists to nationalist fighters, gang members, neo-Nazi skinheads, and Colombian jungle rebels.
The organizers separated the attendees into two groups: "survivors" and "formers" (formers being former extremists). All of the attendees were now working actively to combat violent extremism. And the stories of loss among the survivors were heart-rending. But their inclusion in the conference implied a bit of moral preaching to the "formers": we, the victims, plead with you, the perpetrators, to feel our pain. And one of the most striking moments of the conference, for me, came near the end, when one of the organizers asked a former Islamist fighter (now a soft-spoken Imam in a London mosque who works actively against violent extremism) if he'd ever had someone with a survivor's perspective speak at his mosque.
"I would like to make a couple of points," the Imam answered quietly. "First of all, I HAVE suffered. My little brother was killed, and I have lost 22 relatives in war. So," he said, gesturing to a survivor on the same panel, "I know about personal suffering in the same way as you have done."
That one simple interchange conveyed two powerful and cautionary lessons about the hazards of victimhood and survivorhood.
When tragedy or violence strikes us, we are victims of it. And if we survive it we are, by definition, survivors. I nearly died at the age of 20 when the car I was in was struck at high speed by an angry, drunk young driver who'd just lost his job. The path back from that darkness, physically and emotionally, was painful and long. The good news is, humans are remarkably adaptable and resilient. You go on from tragedies. You just don't go on intact, or the same. And the self that you drag and pull forward from a tragedy feels (and sometimes is) so battered and imperfect that there can be great strength in acknowledging the injustice of what happened (I was a victim) and the difficulty of coming back from that (I am a survivor). It can help a battered soul heal.
But if those labels become part of our longer-term identities instead of just phases of healing, the focus on our own pain and suffering can blind us to the pain and suffering of others. The suffering of a mother whose innocent child was killed in the Twin Towers, while unique to her, is no greater or less than the suffering of a mother whose innocent child was killed by a bullet or bomb, regardless of who fired it, dropped it, or set it off, in Iraq, Pakistan, or any other place in the world.
The interchange at the conference was also a cautionary reminder about the dark places where a sense of victimhood can lead. Many of the "formers" were also victims, and survivors, of injustice and violence of a different sort. But their righteous sense of their own victimhood took them down a road where, at some point, any reaction seemed justified.
Nahum Pachenik, one of the "formers" at the conference who described himself as "born into conflict" as the child of Israeli settlement pioneers near Hebron, even joked a bit about the victimhood rivalry between the Israelis and the Palestinians.
"The two sides have very similar thinking," he said. "[They say] 'We are the victim.' 'No, WE are the victim.'No. We are MORE the victim.'"
Victimhood is wonderfully appealing, Pachenik said, because "in the victim position, you don't have to admit anything, because all of the responsibility is on the other."
Nevertheless, Pachenik finally came to the conclusion that if he wanted to move away from the stalemate of violence around him, he had to give up the comfort of victimhood for the tougher path of knowledge. He now runs an organization that strives to promote better knowledge and understanding between Palestinians and Israelis... starting with learning each other's language.
"Knowledge," Pachenik said, "is the opposite of the position of the victim. Today, I believe it is more important to promote education. It's important to learn the language of the other. Because if you do that, there is, maybe, a place to meet."
The victims of 9/11 who did not survive will always be victims, and should be honored and remembered as such. But even they wouldn't want to be remembered, or identified, solely by the label of "victim." As for the rest of us... well, we are survivors. But we are -- and need to be -- far more than that if we want to stop the cycle of violence that helps cause attacks like that in the first place. It's a tempering point worth remembering, even as we pause to honor the lives and memory of those who died.
The kerfuffle over Kim Kardashian's drug-promoting Instagram selfie is nothing new: As long as the FDA has existed, it's had to figure out how to regulate drug advertisements in new forms of communication technology.
Last month, celebrity-news and health-policy bloggers had a rare moment of overlap after the Food and Drug Administration issued a warning letter to the pharmaceutical company Duchesnay, which manufactures Diclegis, a prescription-only anti-nausea pill. At stake: a single selfie with a pill bottle.
The image that attracted the censure of the FDA was an Instagram post published by Kim Kardashian on July 20. It featured her upper torso, right hand, and face, with a bottle of Diclegis prominently displayed in her grasp. “OMG,” the caption began:
Have you heard about this? As you guys know my #morningsickness has been pretty bad. I tried changing things about my lifestyle and my diet, but nothing helped, so I talked to my doctor. He prescribed me Diclegis, I felt better, and most importantly it’s been studied and there is no increased risk to the baby.
American society increasingly mistakes intelligence for human worth.
As recently as the 1950s, possessing only middling intelligence was not likely to severely limit your life’s trajectory. IQ wasn’t a big factor in whom you married, where you lived, or what others thought of you. The qualifications for a good job, whether on an assembly line or behind a desk, mostly revolved around integrity, work ethic, and a knack for getting along—bosses didn’t routinely expect college degrees, much less ask to see SAT scores. As one account of the era put it, hiring decisions were “based on a candidate having a critical skill or two and on soft factors such as eagerness, appearance, family background, and physical characteristics.”
The 2010s, in contrast, are a terrible time to not be brainy. Those who consider themselves bright openly mock others for being less so. Even in this age of rampant concern over microaggressions and victimization, we maintain open season on the nonsmart. People who’d swerve off a cliff rather than use a pejorative for race, religion, physical appearance, or disability are all too happy to drop the s‑bomb: Indeed, degrading others for being “stupid” has become nearly automatic in all forms of disagreement.
It happened gradually—and until the U.S. figures out how to treat the problem, it will only get worse.
It’s 2020, four years from now. The campaign is under way to succeed the president, who is retiring after a single wretched term. Voters are angrier than ever—at politicians, at compromisers, at the establishment. Congress and the White House seem incapable of working together on anything, even when their interests align. With lawmaking at a standstill, the president’s use of executive orders and regulatory discretion has reached a level that Congress views as dictatorial—not that Congress can do anything about it, except file lawsuits that the divided Supreme Court, its three vacancies unfilled, has been unable to resolve.
On Capitol Hill, Speaker Paul Ryan resigned after proving unable to pass a budget, or much else. The House burned through two more speakers and one “acting” speaker, a job invented following four speakerless months. The Senate, meanwhile, is tied in knots by wannabe presidents and aspiring talk-show hosts, who use the chamber as a social-media platform to build their brands by obstructing—well, everything. The Defense Department is among hundreds of agencies that have not been reauthorized, the government has shut down three times, and, yes, it finally happened: The United States briefly defaulted on the national debt, precipitating a market collapse and an economic downturn. No one wanted that outcome, but no one was able to prevent it.
The June 23 vote represents a huge popular rebellion against a future in which British people feel increasingly crowded within—and even crowded out of—their own country.
I said goodnight to a gloomy party of Leave-minded Londoners a few minutes after midnight. The paper ballots were still being counted by hand. Only the British overseas territory of Gibraltar had reported final results. Yet the assumption of a Remain victory filled the room—and depressed my hosts. One important journalist had received a detailed briefing earlier that evening of the results of the government’s exit polling: 57 percent for Remain.
The polling industry will be one victim of the Brexit vote. A few days before the vote, I met with a pollster who had departed from the cheap and dirty methods of his peers to perform a much more costly survey for a major financial firm. His results showed a comfortable margin for Remain. Ten days later, anyone who heeded his expensive advice suffered the biggest percentage losses since the 2008 financial crisis.
How the Brexit vote activated some of the most politically destabilizing forces threatening the U.K.
Among the uncertainties unleashed by the Brexit referendum, which early Friday morning heralded the United Kingdom’s coming breakup with the European Union, was what happens to the “union” of the United Kingdom itself. Ahead of the vote, marquee campaign themes included, on the “leave” side, the question of the U.K.’s sovereignty within the European Union—specifically its ability to control migration—and, on the “remain” side, the economic benefits of belonging to the world’s largest trading bloc, as well as the potentially catastrophic consequences of withdrawing from it. Many of the key arguments on either side concerned the contours of the U.K.-EU relationship, and quite sensibly so. “Should the United Kingdom remain a member of the European Union or leave the European Union?” was, after all, the precise question people were voting on.
Patrick Griffin, Bill Clinton’s chief congressional-affairs lobbyist, recalls the lead-up to the assault-weapons ban’s passage in 1994—and the steep political price that followed.
For those who question whether anything will ever be done to curb the use of military-grade weaponry for mass shootings in the United States, history provides some good news—and some bad. The good news is that there is, within the recent past, an example of a president—namely Bill Clinton—who successfully wielded the powers of the White House to institute a partial ban of assault weapons from the nation’s streets. The bad news, however, is that Clinton’s victory proved so costly to him and to his party that it stands as an enduring cautionary tale in Washington about the political dangers of taking on the issue of gun control.
In 1994, Clinton signed into law the Public Safety and Recreational Firearms Use Protection Act, placing restrictions on the number of military features a gun could have and banning large-capacity magazines for consumer use. Given the potent dynamics of Second Amendment politics, it was a signal accomplishment. Yet the story behind the ban has been largely forgotten, in part because it expired in 2004 and in part because the provision was embedded in the larger crime bill.
Thoughts on the first episode of ESPN’s five-part documentary
Every fall Sunday, when I was a kid, half an hour before the pre-game shows and an hour before the games themselves, I would tune in to the latest offering from NFL Films. This was the pre-pre-game show—an assembly of short films derived from the massive archive of professional football. Steve Sabol, whose father founded NFL Films, would preside. He’d offer an introduction, then throw it to John Facenda or Jefferson Kaye, who would narrate the career highlights of players like Gale Sayers, Earl Campbell, or Dick “Night Train” Lane.
“Highlights” understates what NFL Films was actually doing. The shorts were drawn from some of the most beautifully shot footage in all of sports. It wasn’t unheard of for NFL Films to go high concept—this piece on football and ballet, with cameos from Allen Ginsberg and George Will, may be the definitive example. Great football plays would be injected not with the normal hurrahs, but with poetry. When Facenda, for instance, wanted to introduce a spectacular touchdown run by Marcus Allen, he did so in the omniscient third person: “On came Marcus Allen—running with the night.”
In the early 19th century, a series of massive quakes rocked Missouri. Some experts predict that the state could be in for another round of violent shaking, while others warn that a big quake could strike elsewhere in the center of the continent.
As I drove across the I-40 bridge into Memphis, I was reassured: chances were slim that a massive earthquake would wrest the road from its supports and plunge me more than a hundred feet into the murky Mississippi. Thanks to a recently completed $260 million seismic retrofit, the bridge—a chokepoint for traffic in the central U.S.—is now fortified. It’s also decked out with strong-motion accelerometers and bookended by borehole seismometers to record convulsions in the earth.
The bridge passes a glass colossus, the Memphis Pyramid. Originally built as a nod to the city’s Old Kingdom namesake, the pyramid now enshrines a Bass Pro Shops megastore. The city recently spent $25 million to prevent the pyramid from being swallowed, perhaps by Geb, the ancient Egyptian earth god whose laughter was said to cause earthquakes. Farther downtown, AutoZone’s corporate headquarters also stands ready for a tectonic throttling, propped up as it is on top of giant shock absorbers. The nearby Memphis VA is similarly hardened against temblors: the city spent $64 million removing nine floors of the hospital to reduce the risk of collapse in a catastrophic earthquake.
A hotly contested, supposedly ancient manuscript suggests Christ was married. But believing its origin story—a real-life Da Vinci Code, involving a Harvard professor, a onetime Florida pornographer, and an escape from East Germany—requires a big leap of faith.
On a humid afternoon this past November, I pulled off Interstate 75 into a stretch of Florida pine forest tangled with runaway vines. My GPS was homing in on the house of a man I thought might hold the master key to one of the strangest scholarly mysteries in recent decades: a 1,300-year-old scrap of papyrus that bore the phrase “Jesus said to them, My wife.” The fragment, written in the ancient language of Coptic, had set off shock waves when an eminent Harvard historian of early Christianity, Karen L. King, presented it in September 2012 at a conference in Rome.
Never before had an ancient manuscript alluded to Jesus’s being married. The papyrus’s lines were incomplete, but they seemed to describe a dialogue between Jesus and the apostles over whether his “wife”—possibly Mary Magdalene—was “worthy” of discipleship. Its main point, King argued, was that “women who are wives and mothers can be Jesus’s disciples.” She thought the passage likely figured into ancient debates over whether “marriage or celibacy [was] the ideal mode of Christian life” and, ultimately, whether a person could be both sexual and holy.