My faith in adult society got a little boost this past weekend when I read that a growing number of people are becoming disillusioned with Facebook and are ending their affiliation with the site, or at least visiting it less frequently.
Not that Facebook, or its conceptual offspring Twitter, is in any immediate danger of extinction. Both networks' user numbers are still climbing. But as Virginia Heffernan reported in the Sunday Times Magazine, a growing number of people are becoming disenchanted with Facebook--and in some cases with the whole idea of it--for a variety of reasons.
For some, it's concerns about privacy. Facebook isn't just a friendly neighborhood park; the company profits from the information it collects on users. Many objected, in early 2008, to the fact that the site held onto profile information even after people closed their accounts. Not to mention the "oops" when Facebook decided to let everyone in a user's circle know about purchases the user had made elsewhere on the internet. Others turned away after the kerfuffle over Facebook's assertion, last February, that it owned the copyright to all content on the site, and some simply object to having their personal activity so closely monitored by a large, unseen entity.
But what intrigued me about the group Heffernan interviewed was the number who were simply tiring of checking in on other people's lives all the time and of investing in connections that felt more like stalking or distant newsletters than direct one-on-one friendship, and who felt a growing unease about how they were spending, or wasting, their time.
I find these growing sentiments reassuring because of an assessment a friend of mine made last spring about the social-network frenzy of Facebook and Twitter. A friend, it should be noted, whose entire job revolves around the development of new technology in Silicon Valley. But both of those technologies, he said, were really geared toward the needs and interests of teenagers and young people. Twitter, after all, evolved from cell phone texting, which nobody does anywhere near as impressively, or frequently, as the under-20 crowd. And Facebook was started by college students as a kind of snide "pig book" to put various students' photos together and allow people to weigh in on who was "hotter." It evolved into a college networking site, and expanded from there. But, still.
The tasks that Facebook and Twitter enhance ... staying connected with as large a group as possible, staying up-to-the-minute informed about what everyone in the social world you care about is doing, and in the process keeping track of where you fit in the social hierarchy of it all ... have been a primary focus of teenagers since time immemorial. Forty years ago, there were gossip cliques by the school lockers and fights over who got to use the family phone to keep up with the latest social status news. All Facebook and Twitter do is give teenagers additional tools to accomplish one of their prime developmental tasks: figuring out how to define themselves in relation to, and as distinct from, the rest of their peers, and exploring a wide variety of social connections within that group.
So in that context, texting, Facebook and Twitter are all terrific developments that, among other things, certainly free up the family phone. The puzzling thing is why they've been so popular among people who are supposed to be a bit beyond that stage. At some point in our development, we're supposed to let go of that obsessive focus on what everyone else is doing in order to focus on our own work and achievements. We're supposed to mature into valuing fewer but more meaningful friendships over the herd social groups we favored as teenagers. And, hopefully, we get busy enough with more significant contributions to family, community and the world that we no longer care about, or have time for, the movements and chatter of people we're not that deeply connected to. As free time becomes more limited, choices have to be made. And there's a trade-off: to go deep, you can't go as broad.
There are certainly valuable uses for Facebook, even in the 30-something-and-beyond set. Most of my friends who have teenagers have joined so they have a better awareness of the technology and world their children are experiencing ... and to help them keep track of what's going on in their children's lives. And for older people who can't get out as much, social networking sites offer a way to stay connected with the world and to keep loneliness at bay. Not to mention their appeal to marketers, who see an easy way to reach large groups of people (and especially the all-important young demographic) with a sales message.
So the sites have their uses. But using them to compensate for the loneliness of old age, track your kids, or sell a product is different from being giddy about them--or being addicted to them--for their own sake. And that's the part that's perplexed me about their growing use and popularity among the over-30 set. When teenagers are texting or twittering inane comments during class, they're being difficult, but age-appropriate. When senators are twittering inane comments during major policy speeches, there's something slightly askew.
But perhaps the fascination with both sites is just a product of our innately curious and exploratory natures. When my sister and I, at ages 15 and 17, bought lacrosse sticks (boys', because we couldn't locate girls'), I remember the way my dad was drawn almost irresistibly toward the back yard where we were trying them out. He watched from the back window, then the open door, then the grass at the foot of the steps. We could feel how much he was itching to have a go at it, even though he'd never held a lacrosse stick in his life. When we finally offered him a turn, he lit up like a Christmas tree and laughed out loud at the novelty of the play. He had a blast with it. But he didn't have the need to play as long as my sister and I did. He tried it, had fun, and then moved on to the other tasks and activities of his day.
The kids come up with something new, and we can't help but want to try it out. But with different life and developmental tasks demanding our focus and time, we don't, or at least we shouldn't, stay as obsessed with it as they are--whether the "it" is the hula hoop, skateboarding, hanging out at the mall ... or a passionate attachment to Facebook or Twitter.
Is that natural dissipation of interest coming to pass with the social networking sites, as well? Hard to say. But if Heffernan's subjects are any guide, it may be ... until, of course, the next exciting new fad, fashion, techno-gizmo, or toy comes to town.
What would the American culture wars look like if they were less about “values” and more about Jesus?
Evangelical Christianity has long had a stranglehold on how Americans imagine public faith. Vague invocations of “religion”—whether it’s “religion vs. science” or “religious freedom”—usually really mean “conservative, Protestant, evangelical Christianity,” and this assumption inevitably frames debates about American belief. For the other three-quarters of the population—Catholics, Jews, other Protestants, Muslims, Hindus, secular Americans, Buddhists, Wiccans, etc.—this can be infuriating. For some evangelicals, it’s a sign of success, a linguistic triumph of the culture wars.
But not for Russell Moore. In 2013, the 43-year-old theologian became the head of the Ethics and Religious Liberty Commission, the political nerve center of the Southern Baptist Convention. His predecessor, Richard Land, prayed with George W. Bush, played hardball with Democrats, and helped make evangelicals a quintessentially Republican voting bloc.
Many psychiatrists believe that a new approach to diagnosing and treating depression—linking individual symptoms to their underlying mechanisms—is needed for research to move forward.
In his Aphorisms, Hippocrates defined melancholia, an early understanding of depression, as a state of “fears and despondencies, if they last a long time.” It was caused, he believed, by an excess of bile in the body (the word “melancholia” is ancient Greek for “black bile”).
Ever since then, doctors have struggled to create a more precise and accurate definition of an illness that still isn't well understood. In the 1920s, the German psychiatrist Kurt Schneider argued that depression could be divided into two separate conditions, each requiring a different form of treatment: depression that resulted from changes in mood, which he called “endogenous depression,” and depression resulting from reactions to outside events, or “reactive depression.” His theory was challenged in 1926, when the British psychologist Edward Mapother argued in the British Medical Journal that there was no evidence for two distinct types of depression, and that the apparent differences between depression patients were just differences in the severity of the condition.
The winners of the 27th annual National Geographic Traveler Photo Contest have just been announced.
Winning first prize, Anuar Patjane Floriuk of Tehuacán, Mexico, will receive an eight-day photo expedition for two to Costa Rica and the Panama Canal for a photograph of divers swimming near a humpback whale off the western coast of Mexico. Here, National Geographic has shared all of this year’s winners, gathered from four categories: Travel Portraits, Outdoor Scenes, Sense of Place, and Spontaneous Moments. Captions by the photographers.
Exceptional nonfiction stories from 2014 that are still worth encountering today
Each year, I keep a running list of exceptional nonfiction that I encounter as I publish The Best of Journalism, an email newsletter that I send out once or twice a week. This is my annual attempt to bring some of those stories to a wider audience. I could not read or note every worthy article published last calendar year, and I haven't included any paywalled articles or anything published at The Atlantic. But everything that follows is worthy of wider attention and engagement.
Paul faced danger, Ani and Ray faced each other, and Frank faced some career decisions.
This is what happens when you devote two-thirds of a season to scene after scene after scene of Frank and Jordan’s Baby Problems, and Frank Shaking Guys Down, and Look How Fucked Up Ray and Ani Are, and Melancholy Singer in the Dive Bar Yet Again—and then you suddenly realize that with only a couple episodes left you haven’t offered even a rudimentary outline of the central plot.
What if Joe Biden is going to run for the Democratic nomination after all?
Most Democrats seem ready for Hillary Clinton—or at least appear content with her candidacy. But what about the ones who were bidin’ for Biden? There are new signs the vice president might consider running for president after all.
Biden has given little indication that he is exploring a run: There’s no super PAC, no cultivation of a network of fundraisers or grassroots organizers, few visits to early-primary states. While his boss hasn’t endorsed Clinton—and says he won’t endorse in the primary—many members of the Obama administration have gone to work for Clinton, including some close to Biden.
But Biden also hasn’t given any clear indication that he isn’t running, and a column by Maureen Dowd in Saturday’s New York Times has set off new speculation. One reason Biden didn’t get into the race was that his son Beau was dying of cancer, and the vice president was focused on being with his son. But before he died in May, Dowd reported, Beau Biden tried to get his father to promise to run. Now Joe Biden is considering the idea.
The jobs that are least vulnerable to automation tend to be held by women.
Many economists and technologists believe the world is on the brink of a new industrial revolution, in which advances in the field of artificial intelligence will render human labor obsolete at an unforgiving pace. Two Oxford researchers recently analyzed the skills required for more than 700 different occupations to determine how many would be susceptible to automation in the near future, and the news was not good: They concluded that machines are likely to take over 47 percent of today’s jobs within a few decades.
This is a dire prediction, but one whose consequences will not fall upon society evenly. A close look at the data reveals a surprising pattern: The jobs performed primarily by women are relatively safe, while those typically performed by men are at risk.
An activist group is trying to discredit Planned Parenthood with covertly recorded videos even as contraception advocates are touting a method that sharply reduces unwanted pregnancies.
Abortion is back at the fore of U.S. politics due to an activist group’s attempt to discredit Planned Parenthood, one of the most polarizing organizations in the country. Supporters laud its substantial efforts to provide healthcare for women and children. For critics, nothing that the organization does excuses its role in performing millions of abortions––a procedure that they regard as literal murder––and its monstrous character is only confirmed, in their view, by covertly recorded video footage of staffers cavalierly discussing what to do with fetal body parts.
If nothing else, that recently released footage has galvanized Americans who oppose abortion, media outlets that share their views, and politicians who seek their votes. “Defunding Planned Parenthood is now a centerpiece of the Republican agenda going into the summer congressional recess,” The Washington Post reports, “and some hard-liners have said they are willing to force a government shutdown in October if federal support to the group is not curtailed.”
It’s impossible to “solve” the Iranian nuclear threat. This agreement is the next best thing.
Having carefully reviewed the lengthy and complex agreement negotiated by the United States and its international partners with Iran, I have reached the following conclusion: If I were a member of Congress, I would vote yes on the deal. Here are nine reasons why.
1) No one has identified a better feasible alternative. Before negotiations halted its nuclear advance, Iran had marched relentlessly down the field from 10 years away from a bomb to two months from that goal line. In response, the United States and its partners imposed a series of sanctions that have had a significant impact on Iran’s economy, driving it to negotiate. That strategy worked, and resulted in a deal. In the absence of this agreement, the most likely outcome would be that the parties resume doing what they were doing before the freeze began: Iran installing more centrifuges, accumulating a larger stockpile of bomb-usable material, shrinking the time required to build a bomb; the U.S. resuming an effort to impose more severe sanctions on Iran. Alternatively, Israel or the United States could conduct military strikes on Iran’s nuclear facilities, setting back the Iranian program by two years, or perhaps even three. But that option risks wider war in the Middle East, an Iran even more determined to acquire a bomb, and the collapse of consensus among American allies.
Blame Prohibitionists, German immigrants, and factory workers who just wanted to drink during their lunch break.
Today’s discerning beer drinkers might be convinced that America’s watery, bland lagers are a recent corporate invention. But the existence of American beers that are, as one industry executive once put it, “less challenging,” has a much longer history. In fact, Thomas Jefferson, himself an accomplished homebrewer, complained that some of his country’s beers were “meagre and often vapid” nearly 200 years ago.
Jefferson didn’t live to see the worst of it. Since about the mid-1800s, American beer has been defined by its dullness. Why? The answer lies in a combination of religious objections to alcohol, hordes of German immigrants, and a bunch of factory workers who just wanted to drink during their lunch break, says Ranjit Dighe, a professor of economics at the State University of New York at Oswego.