As the unemployment rate improves faster than job creation would seem to warrant, there's been much consternation about the quality of the job-market recovery. Yes, the unemployment rate has fallen to 7.8%, but how do we account for the following chart? As it shows, the labor force participation rate has fallen from 65.8% to 63.6% since the end of 2008.
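The arithmetic behind that tension is worth spelling out. Below is a minimal sketch using made-up, illustrative numbers (not actual BLS figures) showing how the unemployment rate can drop sharply even when few jobs are created, so long as enough people stop looking for work and leave the labor force.

```python
# Stylized illustration (hypothetical numbers) of how a falling participation
# rate can pull the unemployment rate down without much job creation.

def rates(employed, unemployed, working_age_pop):
    """Return (unemployment rate, participation rate) for a given population."""
    labor_force = employed + unemployed
    unemployment_rate = unemployed / labor_force
    participation_rate = labor_force / working_age_pop
    return unemployment_rate, participation_rate

# Before: 100m working-age people, 62m employed, 7m unemployed and looking.
u0, p0 = rates(62.0, 7.0, 100.0)

# After: only 1m jobs are added, but 3m of the unemployed stop looking
# and drop out of the labor force entirely.
u1, p1 = rates(63.0, 3.0, 100.0)

print(f"before: unemployment {u0:.1%}, participation {p0:.1%}")
print(f"after:  unemployment {u1:.1%}, participation {p1:.1%}")
# Unemployment falls from roughly 10% to under 5% even though only
# 1m jobs were created, because participation fell from 69% to 66%.
```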
Aggregates can be misleading. For instance, that surge in the participation rate from the 1960s through the 1980s was the result of women joining the workforce. The male rate, on the other hand, has been declining since the 1950s.
Male participation has fallen under President Obama. It fell under President George W. Bush. And under President Clinton. It has fallen in every presidential administration going back at least to Eisenhower's, with the exception of Carter's, during which it was flat.
Why are fewer men choosing to work? For that, we turn to the Census Bureau's 2012 Statistical Abstract. The participation rate is lower for single men than for married men, and marriage rates in the US have been falling for decades, so we'd expect a modest decline from that alone. Broken out by age bucket, participation has been fairly steady since the start of the Great Recession for both single and married men over the age of 25.
The recent decline has been concentrated among young, single men. For single men age 16-19, participation fell by almost 9 points from 2006 to 2010; for single men age 20-24, it fell by almost 5 points. This could reflect a variety of factors: men deciding it's not worth bothering to apply for a job at the local grocery store, men focusing more on their education now that unskilled work is harder to find, or those living at home deciding there's no need for spending money when so much entertainment is free online.
Additionally, the acceleration in the labor force decline began when the oldest baby boomers started turning 60. Yes, because of deflated housing prices and retirement accounts, boomers will work longer than they expected. But 60-year-olds still work less than 30-year-olds, and that demographic shift is showing up in the data.
What's more, this decline in the workforce is part of a century-long trend toward working less in the United States. Child labor laws passed during the Great Depression sharply restricted work by minors. During the Truman administration, the US government instituted the 40-hour work week for federal employees. The passage of Social Security and Medicare reduced the incentive for seniors to keep working as well.
This is a good thing. Among his many writings, John Maynard Keynes predicted an eventual 15-hour work week that would satisfy the material needs of citizens. We're progressing more slowly than he thought, but we're getting there.
But can fewer working young adults possibly be a good thing? It's intuitive that fewer workers means less work and a smaller and weaker economy. But since the decline is mostly among very young men (and, to a lesser extent, young women), we need to understand why they're dropping out. Student loan debt outstanding has grown from $360 billion to $900 billion over the past seven years. The size of this debt is daunting, but it suggests that some of the labor force decline is due to young people investing more in their education, an eventual long-term positive.
And those not dropping out for education-related reasons? If it's just a bunch of 17-year-olds who are content spending their time on Facebook instead of earning a few bucks bagging groceries, that's one thing. But if it's people who feel shut out of the workforce, that's something policymakers should address.
These are issues we're going to have to grapple with, because with robotic labor on the horizon, our desire and ability to compete with emerging-market and silicon-based labor, especially among less-educated Americans, are likely to keep falling.
The comedian's n-bomb at the White House Correspondents’ Dinner highlights a generational shift in black culture.
Georgia McDowell was born the daughter of farmers and teachers in North Carolina in 1902. She was my great-grandmother, and she taught me to read, despite the dementia that clouded her mind and the dyslexia that interrupted mine. I loved Miss Georgia, though she kept as many hard lines in her home as she had in her classrooms. One of the hardest lines was common to many black households: The word “nigger” and all of its derivatives were strict taboos in person, on television, and on radio from any source, black or otherwise, so long as she lived and breathed. She’d kept the taboo through decades of teaching black students and raising black children. For most of my childhood, the taboo was absolute.
When Apple announced in 2013 that its next iPhone would include a fingerprint reader, it touted the feature as a leap forward in security. Many people don’t set up a passcode on their phones, Apple SVP Phil Schiller said at the keynote event where the Touch ID sensor was unveiled, but making security easier and faster might convince more users to protect their phones. (Of course, Apple wasn’t the first to stuff a fingerprint reader into a flagship smartphone, but the iPhone’s Touch ID took the feature mainstream.)
The system itself proved quite secure—scanned fingerprints are stored, encrypted, and processed locally rather than being sent to Apple for verification—but the widespread use of fingerprint data to unlock iPhones worried some experts. One of the biggest questions that hung over the transition was legal rather than technical: How might a fingerprint-secured iPhone be treated in a court of law?
It’s a paradox: Shouldn’t the most accomplished be well equipped to make choices that maximize life satisfaction?
There are three things, once one’s basic needs are satisfied, that academic literature points to as the ingredients for happiness: having meaningful social relationships, being good at whatever it is one spends one’s days doing, and having the freedom to make life decisions independently.
But research into happiness has also yielded something a little less obvious: Being better educated, richer, or more accomplished doesn’t do much to predict whether someone will be happy. In fact, it might mean someone is less likely to be satisfied with life.
That second finding is the puzzle that Raj Raghunathan, a professor of marketing at The University of Texas at Austin's McCombs School of Business, tries to make sense of in his recent book, If You're So Smart, Why Aren't You Happy? Raghunathan's writing does fall under the category of self-help (with all of the pep talks and progress worksheets that that entails), but his commitment to scientific research serves as ballast for the genre's more glib tendencies.
Nearly half of Americans would have trouble finding $400 to pay for an emergency. I’m one of them.
Since 2013, the Federal Reserve Board has conducted a survey to “monitor the financial and economic status of American consumers.” Most of the data in the latest survey, frankly, are less than earth-shattering: 49 percent of part-time workers would prefer to work more hours at their current wage; 29 percent of Americans expect to earn a higher income in the coming year; 43 percent of homeowners who have owned their home for at least a year believe its value has increased. But the answer to one question was astonishing. The Fed asked respondents how they would pay for a $400 emergency. The answer: 47 percent of respondents said that either they would cover the expense by borrowing or selling something, or they would not be able to come up with the $400 at all. Four hundred dollars! Who knew?
For some, abandoning expensive urban centers would be a huge financial relief.
Neal Gabler has been a formative writer for me: His Winchell: Gossip, Power, and the Culture of Celebrity was one of the books that led me to think about leaving scholarship behind and write nonfiction instead, and Walt Disney: The Triumph of the American Imagination was the first book I reviewed as a freelance writer. To me, he exemplifies the best mix of intensive archival research and narrative kick.
So reading his recent essay, "The Secret Shame of Middle-Class Americans," was a gut punch: First, I learned about a role model of mine whose talent, in my opinion, should insulate him from financial woes. And, then, I was socked by narcissistic outrage: I, too, struggle with money! I, too, am a failing middle-class American! I, too, am a writer of nonfiction who should be better compensated!
The billionaire’s bid for the nomination was opposed by many insiders—but his success reveals the ascendance of other elements of the party coalition.
In The Party Decides, an influential book about how presidential nominees are selected, political scientists John Zaller, Hans Noel, David Karol, and Marty Cohen argue that despite reforms designed to wrest control of the process from insiders at smoke-filled nominating conventions, political parties still exert tremendous influence on who makes it to general elections. They do so partly through “invisible primaries,” the authors posited—think of how the Republican establishment coalesced around George W. Bush in 2000, long before any ballots were cast, presenting him as a fait accompli to voters who’d scarcely started to think about the election; or how insider Democrats elevated Hillary Clinton this election cycle.
A professor of cognitive science argues that the world is nothing like the one we experience through our senses.
As we go about our daily lives, we tend to assume that our perceptions—sights, sounds, textures, tastes—are an accurate portrayal of the real world. Sure, when we stop and think about it—or when we find ourselves fooled by a perceptual illusion—we realize with a jolt that what we perceive is never the world directly, but rather our brain’s best guess at what that world is like, a kind of internal simulation of an external reality. Still, we bank on the fact that our simulation is a reasonably decent one. If it weren’t, wouldn’t evolution have weeded us out by now? The true reality might be forever beyond our reach, but surely our senses give us at least an inkling of what it’s really like.
Three Atlantic staffers discuss “Home,” the second episode of the sixth season.
Every week for the sixth season of Game of Thrones, Christopher Orr, Spencer Kornhaber, and Lenika Cruz will be discussing new episodes of the HBO drama. Because no screeners are being made available to critics in advance this year, we'll be posting our thoughts in installments.
The Massachusetts Supreme Judicial Court will decide whether a local shrine should be tax-exempt—a decision that could have broad implications for faith organizations in America.
Property-tax battles are rarely sexy. But a case now in front of the Massachusetts Supreme Judicial Court, about whether the 21 religious brothers and sisters who run the Shrine of Our Lady of LaSalette in Attleboro should have to pay taxes, could have huge repercussions. The Court’s decision will be an important part of the ongoing debate in America about who defines religious practice—believers or bureaucrats—and whether religion itself should be afforded a special place under the law.
The case centers on a colonial-era law in Massachusetts that exempts religious houses of worship and parsonages from property taxes if they are used for religious worship or instruction. The shrine has enjoyed this perk since its founding in 1953. But in recent years, the City of Attleboro, nestled between Providence and Boston, has faced a tightening budget. It began looking to see where it could collect more revenue. The shrine, the only major tourist attraction in town, was an obvious target for tax collectors.