Two weeks ago I read the NYT Mag's back-page story about a harrowing brush with death, in which pilots had to land an airliner believing that its wheels had not come down. I was about to head off on a trip, so I didn't take the time to write what I was thinking, which was: this doesn't sound right.
Now I see that questions about the veracity of the story have cropped up -- in Romenesko and Metafilter, in an item by Gene Weingarten for the Washington Post, and elsewhere. For the record, here are some things that seemed fishy to me.
1) The whole scenario. The plot line of the essay is that the pilots discovered, on a trip from some unnamed city to Denver, that the plane's landing gear didn't work. Thus they "circled for two hours over Philadelphia" to burn off gas before attempting a wheels-up landing.
Here's the problem: why would the pilots have discovered mid-flight that the landing gear had failed? Normally pilots would be paying attention to their landing gear exactly twice during the flight. One would be a few seconds after takeoff, when the flight crew would retract the gear into the plane's body so as to reduce drag as they climbed. If the wheels didn't retract then, the crew would know that right away -- and they could circle back (perhaps after burning off some fuel) for a normal wheels-and-all landing.
The other time is not long before landing, when the crew would put the wheels back down. If the wheels didn't go down, that would be a problem -- with various possible counter-measures. (Manual gear-lowering systems; flying by the tower so controllers can look at the plane's belly with binoculars and see whether the gear are actually down; and so on.)
The rest of the time, the wheels just sit there. They don't fail mid-flight. They're just in their bay inside the plane's fuselage. The pilots pay zero attention to the landing gear until they're going through the descent-and-landing checklists. So, maybe this happened. But it doesn't resemble any "failure mode" I have ever heard of. Unless the gear didn't retract after takeoff to begin with, and the pilots circled but didn't say anything to the passengers (who also didn't notice anything) for the next two hours.
2) The pilots' Airplane!-style behavior. According to this story, the pilots are opening the cockpit door and yelling encouragement and safety instructions to the terrified passengers, because they've turned off the cabin electrical system (to avoid sparks on landing) and therefore can't use the public-address system.
Really? Try to envision the scenario of the pilot yelling down the aisle, as "his cap dangled in one hand." I can't. Including the part about the cap, which pilots don't wear while sitting at the controls.
3) The mood of impending doom. The whole emotional tone of the essay turns on the pilots' preparing everyone for a brush with death. It's easy for me to believe that some passengers might be terrified. Not the pilots. Gear-up landings are bad for the airplane -- the belly of the plane obviously gets chewed up. But they are more common than most other airline mishaps -- one happened just last week at Newark -- and they rarely kill people. The pilots would know that.
4) The engines "spooling down." This passage caught my eye when I first saw the piece: "You can actually feel the air holding you up when a plane's engines power down. Like when you're riding a bike downhill and you stop pedaling, there's noiselessness in its speed."
Well, yes. The air holds you up the entire time the plane is flying. But let's concentrate on the engine. The author never explicitly says that the engines were turned off, but several times he talks about the "noiselessness" as they "power down." To which I say again, Really?
Any plane reduces power as it descends for a landing. An airliner would need to slow down from its 400+ knot cruising speed to the low-100-kt range for final approach -- and do so even as it is descending, which speeds the plane up. Pilots manage that transition through reduced power. But for a wheels-up landing the pilots might maintain more power than usual just before touchdown, not less, so as to make the final contact with the ground as gentle and gradual as possible.
5) The Philadelphia disaster team. According to the story, the plane circled over Philadelphia because its airport had the best disaster-response team. Reportedly the author heard this judgment from another passenger who worked for FEMA rather than directly from the pilot. Still, it sounds odd.
All big airports have on-scene fire squads and equipment to spread fire-retardant foam over the runway to protect an inbound wheels-up plane. It is 100% believable that pilots of such a plane would be looking for a nearby airport that had the longest runways, or the ones best aligned with the wind. Choosing this airport on the basis of EMT teams sounds strange. No offense to Philly, but what would be wrong with Boston -- site of the miracle trauma-treatment scenes after the Marathon bombing? If the plane needed to burn fuel for two hours, it could easily have gotten there.
6) When did this happen, anyway? Practically the only specific reference in the story was to Philadelphia. Otherwise there is no mention of: which airline this was, or when it occurred, or on what kind of plane, or where the trip began. Any of these, of course, would make the story easier to verify.
So, maybe this all happened. I know, from experience, that the NYT Magazine has good fact checkers. But a lot of details sound very unlikely to me.
James Fallows is a national correspondent for The Atlantic and has written for the magazine since the late 1970s. He has reported extensively from outside the United States and once worked as President Carter's chief speechwriter. His latest book is China Airborne.
Demonizing processed food may be dooming many to obesity and disease. Could embracing the drive-thru make us all healthier?
Late last year, in a small health-food eatery called Cafe Sprouts in Oberlin, Ohio, I had what may well have been the most wholesome beverage of my life. The friendly server patiently guided me to an apple-blueberry-kale-carrot smoothie-juice combination, which she spent the next several minutes preparing, mostly by shepherding farm-fresh produce into machinery. The result was tasty, but at 300 calories (by my rough calculation) in a 16-ounce cup, it was more than my diet could regularly absorb without consequences, nor was I about to make a habit of $9 shakes, healthy or not.
Inspired by the experience nonetheless, I tried again two months later at L.A.’s Real Food Daily, a popular vegan restaurant near Hollywood. I was initially wary of a low-calorie juice made almost entirely from green vegetables, but the server assured me it was a popular treat. I like to brag that I can eat anything, and I scarf down all sorts of raw vegetables like candy, but I could stomach only about a third of this oddly foamy, bitter concoction. It smelled like lawn clippings and tasted like liquid celery. It goes for $7.95, and I waited 10 minutes for it.
In the name of emotional well-being, college students are increasingly demanding protection from words and ideas they don’t like. Here’s why that’s disastrous for education—and mental health.
Something strange is happening at America’s colleges and universities. A movement is arising, undirected and driven largely by students, to scrub campuses clean of words, ideas, and subjects that might cause discomfort or give offense. Last December, Jeannie Suk wrote in an online article for The New Yorker about law students asking her fellow professors at Harvard not to teach rape law—or, in one case, even use the word violate (as in “that violates the law”) lest it cause students distress. In February, Laura Kipnis, a professor at Northwestern University, wrote an essay in The Chronicle of Higher Education describing a new campus politics of sexual paranoia—and was then subjected to a long investigation after students who were offended by the article and by a tweet she’d sent filed Title IX complaints against her. In June, a professor protecting himself with a pseudonym wrote an essay for Vox describing how gingerly he now has to teach. “I’m a Liberal Professor, and My Liberal Students Terrify Me,” the headline said. A number of popular comedians, including Chris Rock, have stopped performing on college campuses (see Caitlin Flanagan’s article in this month’s issue). Jerry Seinfeld and Bill Maher have publicly condemned the oversensitivity of college students, saying too many of them can’t take a joke.
Some Republican candidates are promoting a policy change that would hurt workers by disguising it with a pleasant-sounding phrase.
Americans like their Social Security benefits quite a bit: They oppose cuts to them by a margin of two to one. Even Millennials, who won’t be seeing benefits anytime soon, feel protective of Social Security, according to a poll from the Pew Research Center.
One way to effectively cut Social Security benefits is to raise the age at which they kick in. And yet, when asked specifically about raising the retirement age, Americans are mixed.
Perhaps confusion arises because “raising the age of retirement” sounds like a nice jobs program for older Americans, or an end to forced retirement. I sympathize with that position: Anyone who wants to retire later and work into old age should have a job. But that’s not what raising the retirement age would entail—the fact is, raising the Social Security retirement age represents a reduction in benefits: Because monthly payments are pegged to the designated retirement age—claim before it and your check shrinks, claim after it and your check grows—raising that age cutoff shrinks the benefit a person collects at any given claiming age, and thus the total amount of money paid out over a lifetime.
In continuing to tinker with the universe she built eight years after it ended, J.K. Rowling might be falling into the same trap as Star Wars’s George Lucas.
September 1st, 2015 marked a curious footnote in Harry Potter marginalia: According to the series’s elaborate timeline, rarely referenced in the books themselves, it was the day James S. Potter, Harry’s eldest son, started school at Hogwarts. It’s not an event directly written about in the books, nor one of particular importance, but their creator, J.K. Rowling, dutifully took to Twitter to announce what amounts to footnote details: that James was sorted into House Gryffindor, just like his father, to the disappointment of Teddy Lupin, Harry’s godson, apparently a Hufflepuff.
It’s not earth-shattering information that Harry’s kid would end up in the same house his father was in, and the Harry Potter series’s insistence on sorting all of its characters into four broad personality quadrants largely based on their family names has always struggled to stand up to scrutiny. Still, Rowling’s tweet prompted much garment-rending among the books’ devoted fans. Can a tweet really amount to a piece of canonical information for a book? There isn’t much harm in Rowling providing these little embellishments years after her books were published, but even idle tinkering can be a dangerous path to take, with the obvious example being the insistent tweaks wrought by George Lucas on his Star Wars series.
Heather Armstrong’s Dooce once drew millions of readers. Her blog’s semi-retirement speaks to the challenges of earning money as an individual blogger today.
The success story of Dooce.com was once blogger lore, told and re-told in playgroups and Meetups—anywhere hyper-verbal people with WordPress accounts gathered. “It happened for that Dooce lady,” they would say. “It could happen for your blog, too.”
Dooce has its origin in the late 1990s, when a young lapsed Mormon named Heather Armstrong taught herself HTML code and moved to Los Angeles. She got a job in web design and began blogging about her life on her personal site, Dooce.com.
The site’s name evolved out of her friends’ AOL Instant Messenger slang for dude, or its more incredulous cousin, "doooood!” About a year later, Armstrong was fired for writing about her co-workers on the site—an experience that, for a good portion of the ‘aughts, came to be known as “getting dooced.” She eloped with her now ex-husband, Jon, moved to Salt Lake City, and eventually started blogging full time again.
Encouraging a focus on white identity is a dangerous approach for a country in which white supremacy has been a toxic force.
Donald Trump and the disaffected white people who make up his base of support have got me thinking about race in America. “Trump presents a choice for the Republican Party about which path to follow––” Ben Domenech writes in an insightful piece at The Federalist, “a path toward a coalition that is broad, classically liberal, and consistent with the party’s history, or a path toward a coalition that is reduced to the narrow interests of identity politics for white people.”
When I was growing up in Republican Orange County during the Reagan and Bush Administrations, lots of white parents sat their kids in front of The Cosby Show, explained that black people are just like white people, and inveighed against judging anyone by the color of their skin rather than the content of their character. The approach didn’t convey the full reality of race as minorities experience it. But it represented a significant generational improvement in race relations.
After a lackluster summer, the famous neurosurgeon is finally surging—but his reliance on the conservative grassroots might be a burden as much as a boon.
The Ben Carson surge that everyone was waiting for is finally here.
The conservative neurosurgeon has been a source of fascination for both the Republican grassroots and the media ever since he critiqued President Obama, who was seated only a few feet away, at the National Prayer Breakfast in 2013. He’s been a steady, if middling, presence in GOP primary polls for most of the year—always earning at least 5 percent, but rarely more than 10. Yet over the last two weeks, Carson has secured a second-place spot after Donald Trump, both nationally and in the crucial opening battleground of Iowa, where he is a favorite of the state’s sizable evangelical community. A Monmouth University poll released this week even showed him tied with Trump for the lead in Iowa, at 23 percent.
The Islamic State is no mere collection of psychopaths. It is a religious group with carefully considered beliefs, among them that it is a key agent of the coming apocalypse. Here’s what that means for its strategy—and for how to stop it.
What is the Islamic State?
Where did it come from, and what are its intentions? The simplicity of these questions can be deceiving, and few Western leaders seem to know the answers. In December, The New York Times published confidential comments by Major General Michael K. Nagata, the Special Operations commander for the United States in the Middle East, admitting that he had hardly begun figuring out the Islamic State’s appeal. “We have not defeated the idea,” he said. “We do not even understand the idea.” In the past year, President Obama has referred to the Islamic State, variously, as “not Islamic” and as al-Qaeda’s “jayvee team,” statements that reflected confusion about the group, and may have contributed to significant strategic errors.
Burning Man is underway in the Nevada desert, the migrant crisis grew in both scale and impact, new Star Wars toys went on sale worldwide, China marked the 70th anniversary of the end of World War II, Alaska’s Mt. McKinley was renamed Denali, a Kentucky county clerk was jailed for contempt of court while refusing to issue same-sex marriage licenses, and much more.
According to Franklin, what mattered in business was humility, restraint, and discipline. But today’s Type-A MBAs would find him qualified for little more than a career in middle management.
When he retired from the printing business at the age of 42, Benjamin Franklin set his sights on becoming what he called a “Man of Leisure.” To modern ears, that title might suggest Franklin aimed to spend his autumn years sleeping in or stopping by the tavern, but to colonial contemporaries, it would have intimated aristocratic pretension. A “Man of Leisure” was typically a member of the landed elite, someone who spent his days fox hunting and affecting boredom. He didn’t have to work for a living, and, frankly, he wouldn’t dream of doing so.
Having worked as a successful shopkeeper with a keen eye for investments, Franklin had earned his leisure, but rather than cultivating the fine arts of indolence, he declared retirement “time for doing something useful.” Hence, the many activities of Franklin’s retirement: scientist, statesman, and sage, as well as one-man civic society for the city of Philadelphia. His post-employment accomplishments earned him the sobriquet of “The First American” in his own lifetime, and yet, for succeeding generations, the endeavor that was considered his most “useful” was the working life he left behind when he embarked on a life of leisure.