Since 1857, The Atlantic has been challenging established answers with tough questions. In this video, actor Michael K. Williams—best known as Omar Little from The Wire—wrestles with a question of his own: Is he being typecast?
What are you asking yourself about the world and its conventional wisdom? We want to hear your questions—and your thoughts on where to start finding the answers: email@example.com. Each week, we’ll update this thread with a new question and your responses.
What the internet does to the mind is something of an eternal question. Here at The Atlantic, in fact, we pondered that question before the internet even existed. Back in 1945, in his prophetic essay “As We May Think,” Vannevar Bush outlined how technology that mimics human logic and memory could transform “the ways in which man produces, stores, and consults the record of the race”:
Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.
Bush didn’t think machines could ever replace human creativity, but he did hope they could make the process of having ideas more efficient. “Whenever logical processes of thought are employed,” he wrote, “there is opportunity for the machine.”
Fast-forward six decades, and search engines had claimed that opportunity, acting as a stand-in for memory and even for association. In his October 2006 piece “Artificial Intelligentsia,” James Fallows confronted the new reality:
If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains?
I’ve chosen to draw an optimistic conclusion, from the analogy of eyeglasses. Before corrective lenses were invented, some 700 years ago, bad eyesight was a profound handicap. In effect it meant being disconnected from the wider world, since it was hard to take in knowledge. With eyeglasses, this aspect of human fitness no longer mattered in most of what people did. More people could compete, contribute, and be fulfilled. …
It could be the same with these new computerized aids to cognition. … Increasingly we all will be able to look up anything, at any time—and, with categorization, get a head start in thinking about connections.
But in Nicholas Carr’s July 2008 piece “Is Google Making Us Stupid?,” he was troubled by search engines’ treatment of information as “a utilitarian resource to be mined and processed with industrial efficiency.” And he questioned the idea that artificial intelligence would make people’s lives better:
It suggests a belief that intelligence is the output of a mechanical process, a series of discrete steps that can be isolated, measured, and optimized. In Google’s world, the world we enter when we go online, there’s little place for the fuzziness of contemplation. Ambiguity is not an opening for insight but a bug to be fixed. The human brain is just an outdated computer that needs a faster processor and a bigger hard drive.
Even as Carr appreciated the ease of online research, he felt the web was “chipping away [his] capacity for concentration and contemplation.” It was as if the rote tasks of research and recall, far from wasting innovators’ time, were actually the building blocks of more creative, complex thought.
On the other hand, “you should be skeptical of my skepticism,” as Carr put it. And from the beginning, one great benefit of the internet was that it brought people in contact not just with information, but with other people’s ideas. In April 2016, Adrienne LaFrance reflected on “How Early Computer Games Influenced Internet Culture”:
In the late 1970s and early 1980s, game makers—like anyone who found themselves tinkering with computers at the time—were inclined to share what they learned, and to build on one another’s designs. … That same culture, and the premium it placed on openness, would eventually carry over to the early web: a platform that anyone could build on, that no one person or company could own. That idea is at the heart of what proponents for net neutrality are trying to protect—that is, the belief that openness is a central value, perhaps even the foundational value, of what is arguably the most important technology of our time.
But as tech culture evolved and pervaded life outside the web, even its problem-solving methods began to seem reductive at times. Ian Bogost outlined that paradox in November 2016 when a new product called ketchup leather was billed as the “solution” to soggy burgers:
The technology critic Evgeny Morozov calls this sort of thinking “solutionism”—the belief that all problems can be solved by a single and simple technological solution. … Morozov is concerned about solutionism because it recasts social conditions that demand deeper philosophical and political consideration as simple hurdles for technology. …
But solutionism has another, subtler downside: It trains us to see everything as a problem in the first place. Not just urban transit or productivity, but even hamburgers. Even ketchup!
So, what’s your personal experience of how the internet affects creativity? Can you point to a digital distraction—Netflix, say, or Flappy Bird—that’s enriched your thinking in other areas of your life? On the flip side of the debate, can you point to a tool like email or Slack that’s sharpened your efficiency but narrowed the scope of your ideas? We’d like to hear your stories; please send us a note: firstname.lastname@example.org.
Because of the Internet I write more and receive feedback from people I know (on Facebook) and online strangers (on TAD and other platforms that use Disqus). I use it as a jumping-off place and resource for planning lessons for my high-school students in science.
However, I don’t practice music as often as I used to.
On a similar note, another reader confesses, “I draw less because I’m always on TAD”:
As a sketch artist, I appreciate my ability to Google things I want to draw for a reference point, but that doesn’t make me more creative. I already had the image in my head and the ability to draw. I honed my skills drawing people the old fashioned way, looking at pictures in books or live subjects and practicing till my fingers were going to fall off.
In my opinion, the internet also encourages people to copy the work of others that goes “viral” rather than creating something truly original. The fact that you can monetize that viral quality also makes it more likely that people will try to copy rather than create.
That’s the same reason a third reader worries that “the internet has become stifling for creativity”:
Maybe I am not looking in the right place, but most platforms seem to be more about reblogging/retweeting/reposting other people’s creations. Then there is the issue of having work stolen and credits removed.
As another reader notes, “This is the central conflict of fan fiction”:
It’s obviously creative. On the other hand, it is all based on blatant copying of another writer’s work. How much is this a huge expansion of a creative outlet, and how much is this actually people choosing to limit their own creativity by colonizing somebody else’s world rather than creating a new one?
For my part, I tend to think the internet has encouraged and elevated some amazing new forms of creativity based on reaction and re-creation, collaboration and synthesis. Take this delightful example:
Those creative forms are a big part of my job too: When I go to work, I’m either distilling my colleagues’ articles for our Daily newsletter or piecing together reader emails for Notes, and those curatorial tasks have been exciting and challenging in ways that I never expected. But I’ve also missed writing fiction and poetry and literary criticism, and I worry sometimes that I’m letting those creative muscles atrophy. If you’re a fanfic reader or writer (or videographer, or meme-creator, or content-aggregator) and would like to share your experience, please let us know: email@example.com.
This next reader speaks up for creativity as “the product of synthesis”:
It’s not so much a quest for pure “originality,” as it is a quest for original perspectives or original articulations. I’d say that my creativity has been fueled by letting myself fall into occasional rabbit holes. Whether that’s plodding through artists I don’t know well on Spotify or following hyperlinks in a Wiki piece until I have forgotten about what it was that I initially wondered, that access to knowledge in a semi-random form triggers the old noggin like little else.
On the other hand: So much knowledge! So many rabbit holes! Jim is paralyzed:
I find many more ideas and inspirations, but the flow of information and ideas is so vast that I never find time to develop them. I need to get off the internet.
Diane is also exasperated:
The promise of digital technology was: spinning piles of straw into useful pieces of gold.
My reality is: looking for golden needles in a giant haystack of unusable straw.
I spend so much time looking for the few things actually useful to my project, my writing, my daily info needs, and by the end of the day I feel like I’ve wasted so much time and effort sorting through useless crap. And the pile of useless keeps getting bigger and bigger, like a bad dream.
This next reader provides some tips for productive discovery:
I am old enough to vaguely recall a time before I began to use the internet on a daily basis. What I would do, back then, when I got stuck and could not find a creative angle on a problem, was to go to some arbitrary corner of the library, take down the first book that caught my interest even though it had nothing to do with the problem at hand, and read a few pages—sometimes, the whole book. More often than not, it would trigger all sorts of analogies, and at least a few of them usually turned out to be fruitful. (Even if nothing turned out to be relevant, I usually still learned something interesting, so it was a win-win strategy.) It was a great way (to borrow Horace Walpole’s definition of serendipity) to make discoveries, by accidents and sagacity, of things one was not in quest of.
I try to use the internet in a somewhat similar fashion: When I’m stuck, I often spend a morning strolling around arbitrary corners of the internet, trying to discover stuff I did not know I was in quest of. Typically, I start in some academic resource like JSTOR. (I almost always start by limiting my search to articles at least 50 years old; it ensures that one does not end up reading fashionable stuff and thus thinking the same thoughts as all the other hamsters in the academic wheel. Also, older articles are usually far more well-written than the crap that results from the publish-or-perish system.) I am not above using e.g. Wikipedia, though, at least as a point of departure.
I also like reading old stuff in online newspaper/magazine archives. Sometimes, a stray remark in one of those wonderful 19th-century magazines written by and for men of letters is all you need to get a fresh angle on a familiar problem.
Gotta love those 19th-century magazines. In some ways, their mission wasn’t so different from that of the Facebook groups and Reddit threads and Disqus forums of today: creating a space for discourse and exchange and reflection, where exciting new ideas could bump up against each other. As James Russell Lowell, The Atlantic’s founding editor, wrote to a friend in 1857, “The magazine is to be free without being fanatical, and we hope to unite in it all available talent of all modes of opinion.” And as Terri, one of the founding members of TAD, reflects today:
TAD itself has been a creative endeavor for me and the other mods. Envisioning the community we wanted. Coming up with ideas to bring it to life. We developed ideas around the mix of politics, open and fun threads that the community has taken on and grown. It really has been a creative experience in collaboration on the internet.
Check out TAD’s whole discussion on creativity here, along with many more threads. As for the offline benefits of online collaboration, take it from this reader—a “furniture maker and Weimaraner enthusiast”:
I would like to share a story about a project I am working on in which the internet has certainly aided my creativity. Zeus, our 8-month-old Weimaraner, is a couch hog. When my girlfriend and I sit down on the couch to watch TV, he will sit directly in front of us and bark until we make room for him. There are three large dog beds in the house, but Zeus steadfastly refuses to lie on the dog beds.
I am a member of a Weimaraner-owner Facebook group called Weim Crime. Several people in the group have had similar problems. We came up with a solution I tested out last week: build a dog bunk bed with one bed on the bottom and one bed about the same height as our couch.
It has worked out very well. Zeus quietly relaxes on the top dog bunk while we sit on the couch. I am now collecting feedback from that same group before building the more attractive final version. I have received very useful feedback—for example, lowering the top bunk deck to 18 inches or lower to prevent joint injuries. My end goal is to design and build a simple, low-cost dog bunk bed that is more attractive than the prototype and post a YouTube video showing other owners how to build a similar one.
This is just one silly project, but the feedback and interest I have received regarding the project have been really inspiring.
What questions about your day-to-day experience of the world have you been pondering? We welcome your feedback and inspirations. Check back Monday for the next discussion question in this series—and in the meantime, enjoy some Weimaraner art:
Is a long life really worth it? That’s the question that reader John Harris has been asking himself lately. He’s not alone: In 1862, one of The Atlantic’s founders, Ralph Waldo Emerson, wondered the same thing about aging. Acknowledging that “the creed of the street is, Old Age is not disgraceful, but immensely disadvantageous,” Emerson set out to explain the upsides of senescence. A common theme is the sense of serenity that comes with age and experience:
Youth suffers not only from ungratified desires, but from powers untried, and from a picture in his mind of a career which has, as yet, no outward reality. He is tormented with the want of correspondence between things and thoughts. … Every faculty new to each man thus goads him and drives him out into doleful deserts, until it finds proper vent. … One by one, day after day, he learns to coin his wishes into facts. He has his calling, homestead, social connection, and personal power, and thus, at the end of fifty years, his soul is appeased by seeing some sort of correspondence between his wish and his possession. This makes the value of age, the satisfaction it slowly offers to every craving. He is serene who does not feel himself pinched and wronged, but whose condition, in particular and in general, allows the utterance of his mind.
By 1928, advances in medicine had made it more possible to take a long lifespan for granted. In an Atlantic article titled “The Secret of Longevity” (unavailable online), Cary T. Grayson noted that “probably at no other time in the history of the human race has so much attention been paid to the problem of prolonging the span of life.” He offered a word of warning:
Any programme which has for its object the prolongation of life must also have, accompanying this increased span of life, the ability of the individual to engage actively and with some degree of effectiveness in the affairs of life. Merely to live offers little to the individual if he has lost the ability to think, to grieve, or to hope. There is perhaps no more depressing picture than that of the person who remains on the stage after his act is over.
On the other hand, as Cullen Murphy contended in our January 1993 issue, an eternity spent with no decrease in faculties wouldn’t necessarily be desirable either:
There are a lot of characters in literature who have been endowed with immortality and who do manage to keep their youth. Unfortunately, in many cases nobody else does. Spouses and friends grow old and die. Societies change utterly. The immortals, their only constant companion a pervading loneliness, go on and on. This is the pathetic core of legends like those of the Flying Dutchman and the Wandering Jew. In Natalie Babbitt’s Tuck Everlasting, a fine and haunting novel for children, the Tuck family has inadvertently achieved immortality by drinking the waters of a magic spring. As the years pass, they are burdened emotionally by an unbridgeable remoteness from a world they are in but not of.
Since antiquity, Murphy wrote, literature has had a fairly united stance on immortality: “Tamper with the rhythms of nature and something inevitably goes wrong.” After all, people die to make room for more people, and pushing lifespans beyond their ordinary limits risks straining resources as well as reshaping families.
Charles C. Mann examined some of those potential consequences in his May 2005 Atlantic piece “The Coming Death Shortage,” predicting a social order increasingly stratified between “the very old and very rich on top … a mass of the ordinary old … and the diminishingly influential young.” Presciently, a few years before the collapse of the real-estate bubble that wiped out millions of Americans’ retirement savings, Mann outlined the effects of an increased proportion of older people in the workforce:
When lifespans extend indefinitely, the effects are felt throughout the life cycle, but the biggest social impact may be on the young. According to Joshua Goldstein, a demographer at Princeton, adolescence will in the future evolve into a period of experimentation and education that will last from the teenage years into the mid-thirties. … In the past the transition from youth to adulthood usually followed an orderly sequence: education, entry into the labor force, marriage, and parenthood. For tomorrow’s thirtysomethings, suspended in what Goldstein calls “quasi-adulthood,” these steps may occur in any order.
In other words, Emerson’s period of “ungratified desires and powers untried” would be extended indefinitely. Talk about doleful deserts! On top of such Millennial malaise, Mann also predicted increased marital stress, declining birth rates, a depleted labor force, and a widespread economic slowdown as the world’s most powerful nations entered a “longevity crisis.”
But that’s just one vision. Another came from Gregg Easterbrook, who anticipated “a grayer, quieter, better future” in his October 2014 Atlantic article “What Happens When We All Live to 100?” His argument has some echoes of Emerson’s, but with modern science to back it up:
Neurological studies of healthy aging people show that the parts of the brain associated with reward-seeking light up less as time goes on. Whether it’s hot new fashions or hot-fudge sundaes, older people on the whole don’t desire acquisitions as much as the young and middle-aged do. Denounced for generations by writers and clergy, wretched excess has repelled all assaults. Longer life spans may at last be the counterweight to materialism.
Deeper changes may be in store as well. People in their late teens to late 20s are far more likely to commit crimes than people of other ages; as society grays, the decline of crime should continue. Violence in all guises should continue downward, too. … Research by John Mueller, a political scientist at Ohio State University, suggests that as people age, they become less enthusiastic about war. Perhaps this is because older people tend to be wiser than the young—and couldn’t the world use more wisdom?
It’s a good point. Couldn’t we all use more wisdom, more experience, more opportunities to learn? Wouldn’t we make better use of our lives if our lives went on forever? Not so fast, Olga Khazan wrote last month:
A common fear about life in our brave, new undying world is that it will just be really boring, says S. Matthew Liao, director of the Center for Bioethics at New York University. Life, Liao explained, is like a party—it has a start and end time. … “But imagine there’s a party that doesn’t end,” he continued. “It would be bad, because you’d think, ‘I could go there tomorrow, or a month from now.’ There’s no urgency to go to the party anymore.”
The Epicureans of ancient Greece thought about it similarly, [psychologist Sheldon] Solomon said. They saw life as a feast: “If you were at a meal, you’d be satiated, then stuffed, then repulsed,” he said. “Part of what makes each of us uniquely valuable is the great story. We have a plot, and ultimately it concludes.”
Even so, some futurists believe immortality is within reach:
So, what do you think: Is there a limit to how long people should live? Is it selfish to want eternity for yourself, or would having even a few immortals around make the world better for everyone? Here’s one reader’s take:
This reminds me a bit of the cylons in the “new” Battlestar Galactica.
With the ability to reincarnate infinitely, and be effectively immortal, they were callous towards humans, and killed humans with impunity. It was only when their ability to reincarnate was ended and they became effectively mortal (and thus subject to basically the same rules of death as humans) that they were driven to behave in a moral way.
But another reader argues:
I for one think the world would be a better place if we collectively took a longer view, and what better way to do that than to give everyone a stake in it?
Living a long life seems the obvious goal for most people, and many of them, like Dylan Thomas, raged against the dying of the light. Others—like the transhumanists that Olga featured recently—want to transcend death entirely.
Well, like most things, the answer is not a simple yes or no; it depends—on so many factors, some of which we can control (e.g. not smoking) and some of which we can’t (e.g. our genetic make-up). If you’re in good physical health and have all your faculties and some purposeful work or hobby, or just something you really enjoy doing, then it might be a good idea to live a long life. But those are a lot of ifs.
Another reader, John, looks to human connections:
Health is essential to making survival good, but it also helps to have a caring partner, for companionship and support. I am biased, because at 81, I have my health and a good wife. I’d like to live past 100 if these conditions remain. But if I become disabled, chronically ill, or alone, life is unlikely to be worth it.
Rita has a bleaker outlook:
Looking at my genetics, I’m starting to think I may live a long time. I’m not yet 70, but I can probably expect to go until 95 at least.
This doesn’t fill me with joy. Who’s going to look after me when my eyesight starts to crap out and I get weaker? Where’s the money going to come from to continue to pay my bills? These are not minor questions. Their answers, as far as I can see, are “nobody” and “nowhere.”
And anyway, it’s not as if I can look forward to hiking in the desert or exploring foreign cities in my extreme old age. Nor will many of us be directing films or conducting research in our nineties. What most of us can anticipate is day after day staring at a TV set, wondering if anyone is coming for a visit.
She adds, “That Atlantic excerpt you cited from 1928 nails it”—namely, “Any programme which has for its object the prolongation of life must also have, accompanying this increased span of life, the ability of the individual to engage actively and with some degree of effectiveness in the affairs of life.” Another reader, Bernyce, is also worried about infirmity:
After the age of 75, the human body declines—if not steadily, then in jerks and/or slopes. People begin to lose hearing, eyesight, and useful teeth, as well as the ability to digest food that may be ingested. A younger friend (74), living in an assisted-living facility because her son lives 200 miles away and she is no longer able to walk, says her companions say the food is delicious. She says the desserts are tasty but everything else is flavorless and slippery. People who have loved ones to care for them may be more fortunate.
Watch the French film Amour. It is a short, beautiful, and painful glimpse of the end of life in a loving marriage. Even when we are not alone, the end of life is very difficult.
Here’s the haunting trailer for Amour:
At the somewhat advanced age of 88 (and I’ll be 89 in a few days), I’m tired. I think I’ve accomplished all I’m capable of and am ready to rest … permanently, I guess. Curious to see what, if anything, comes next. I’ll let you know.
Jim’s “I’m tired” reminds me of a similar sigh of acceptance that came from William Buckley during one of his final interviews, before dying at the age of 82:
The clip is worth watching in full, even if you’re no fan of the conservative figure. It begins with Charlie Rose asking Buckley if he wishes he were 20 again, and he replies:
No, absolutely not. If I had a pill that would reduce my age by 25 years I wouldn’t take it. Because I’m tired of life. I really am. I am utterly prepared to stop living on. There are no enticements to me that justify the weariness, the repetition ...
Buckley goes on to quote Sherwin Nuland—a surgeon, professor of bioethics, and author of How We Die: Reflections on Life’s Final Chapter—who once said, “The greatest enemy of older people is young doctors,” because they’re determined to keep you alive at any cost. This next reader would likely fight them off:
I am ready to go at 61. We have no problem helping our sick and injured pets, farm animals, etc. find final peace, and now people are beginning to evolve on this point too. Thank god. (Yes, I think god would agree.)
Let’s face it, after 60, folks begin kicking the ol’ bucket from normal end-of-life reasons. Seems the body remembers “hard” living in the early years. And this is okay. I’m reading The Razor’s Edge right now and that helps me understand.
As an 85-year-old, I recognize that my usefulness is coming to a close.
At this time, I seem to provide joy to my children and grandchildren.
When I become a liability and need the constant care of others, I am content to have my life end, even if I have to take care of that myself.
At this time I do not need nor want that kind of care. But it may come soon, and I can face that comfortably.
John quips, “At 74, I have recently said to my adult children, ‘You know, this getting old is getting old.’” Sharon is a very longtime reader:
Dear Atlantic, magazine of my youth and age:
I believe that one’s life should be as long as one can make a contribution in some way. For me, personally, I wish to live only as long as I can be useful. At 72, and a few years before, I made the decision that when I felt I could no longer contribute in a tangible way, I will end my life.
I was greatly miffed by an article by a know-all person of the psychiatric persuasion, who said that anyone who wished to end his or her life was depressed. In my opinion, that’s balderdash. My firm belief is that we should live only as long as we can help to decrease our particular footprint on the planet by benefitting others. My desire is to have 15 years of retirement, but if I can't meet my personal hook, I’ll discard that goal.
I think it is immoral to artificially prolong the physical existence of an individual who is in no more than a vegetative state. On the other hand, I believe that no one has the right to make that choice for another person.
Kent has some advice:
I think everyone should think about a long life, and when you’re about halfway there or within 30 years of being there, set yourself a goal of how old and how alert you want to be. It’s likely to affect your health and wealth by making you focus on more important things in life and your ability to experience them. The earth doesn’t owe anyone longevity, so it’s up to you to figure out what and where and when you’ll take charge of your existence and final stages of life.
Kent’s note reminds me of my stepfather, who’s approaching 70 and has a really wise approach to the remainder of his life: Instead of focusing on how long he’s going to live, he’s focused on how short he can make the window of time he’ll be infirm. By eating healthy, cycling dozens of miles per week, and generally keeping his stress low, he’s determined to shrink that final period as much as possible.
This next reader, Rachel, also looks to her parents:
I am compelled to write to you! That has never happened before.
In the last six years, I saw both my parents off this planet. Both were happy to go and did not overstay. My mother, always in good health, had hoped for some more years but fell ill. Once that happened, she did not want to linger. It was too physically painful.
My father simply grew lonely and disinterested, and he too welcomed the end. He actually asked me to hasten it for him, but I reminded him it was against the law (!)
Now I have my parents-in-law. He is a priest whose life revolved around being connected to others and doing pastoral work but who has recoiled into himself these last five years and today makes no contribution to anyone, anything, anywhere. This is so wrong. He could bring meaning to people but has closed those doors.
My mother-in-law, who has had to put him into a home because she cannot care for him, spends her days wracked with guilt for having done so. While he abhors the thought of death (I thought he would want to go to his maker??), she welcomes it—to be relieved of her guilt.
But neither is dying soon. What kind of life is this for them and their families, everybody’s pocketbook, and the earth’s resources?
I will soon be 58 and HAVE NO DESIRE to live long. My parents checked out at 87 and 89, and I would be happy to go sooner, while I am still making some contribution to the world and to my loved ones.
Emma contributes through teaching:
Life that includes giving, sharing, and caring for others is worth it. In contrast, life as a “parasite”—endlessly entertained by television and card games—is perhaps a more arrogant use of resources. Of course I can say this now, at age 75, the day I teach a Chinese émigré English, the day after I teach three little girls piano, and the day when I will soon perform music for fellow residents in our retirement community.
What I will say ten years from now, when all I hope for is to see my grandchildren safely through adolescence and I have no energy to spare for what I do now, I do not know.
“If we are being quite frank, there are a few exceptional people who may have something special to give to humanity, but the vast majority of people are simply useless.”
Are the majority of people useless? If we only consider people who have made contributions to the world through their inventions, philosophies, scientific or medical research, political leadership, military or business achievements, etc., then I would agree that the vast majority of people would seem to be useless.
However, every person who has ever lived on the face of the earth has influenced or impacted the lives of those around them in ways we know nothing about, unless their life touched us personally. And then only I can know how they impacted my own life experience.
Some of these people’s influences were/are positive and constructive; some negative and destructive. But they all contribute to the evolutionary process of the human consciousness and therefore each person’s experience, which in turn influences the lives of people of succeeding cultures and generations.
The greater question to me is why we are here at all. What is the reason or need for our actual existence? But this gets into a philosophical discussion that could go on and on.
The initial wave of reader response to our question “Is a long life really worth it?” was overwhelmingly “meh, not so much.” But since then, many sexagenarians, septuagenarians, octogenarians, and nonagenarians have emailed more enthusiastic outlooks on old age. Here’s Jim:
A thought-provoking discussion, but it really misses the key point. I turn 65 in a couple of months, but I don’t expect to “retire” at 65—or ever. I’m fit and healthy and having the greatest fun of my life at the head of a fast-growing business. In a quarter century, if still alive, I might have to slow down a bit, but there will still be something useful for me to do.
The founding pastor of our church has poor hearing and is almost blind, but a few weeks ago he preached a great sermon to celebrate his 100th birthday. He still contributes in other ways as well.
Not everyone can continue working, but there is a huge need for volunteers in areas that do not require physical agility. Unless totally senile—and that’s something that will never happen to most of us—we all have something to offer.
Maggie is a quarter century older than Jim but has a very similar view:
Life isn’t over because I’m no longer “useful.” I’m 90 and have spent the last decade trying to be okay with not always being the helping hand. Though my greatest joy has come from knowing I have touched another’s life by being helpful, I have to remember that I am still touching people’s lives as long as I am alive. I’m so pleasantly surprised that people want to be around me.
I was pretty grim when I had to stop driving because a slight accident damaged the car beyond repair. My health also gave way and I was briefly hospitalized. It was a big adjustment. But now I am walking, exercising at the gym once a week, taking part in demonstrations, and forgetting about how old I am. I don’t see any other options.
In three weeks I will have my 90th birthday. I am certainly glad I did not die at 75. Since then, I have seen four more grandchildren born, two grandchildren graduate from college, and two from high school. I sold my financial advisory firm to my partners and helped start a new Trust Company, now serving as Regional Director and on their Board. I have had some wonderful trips and been able to enjoy sailing, tennis, and horseback riding up until two years ago. I have recently bought a set of golf clubs and look forward to enjoying a new sport.
Carol frames aging this way:
Everyone has three ages: chronological, biological, and mental. (The most important, by far, is our mental age.) I’m chronologically 81, biologically 65, and mentally 60.
Tony adds some perspective:
Consider this: Well into his 80s, Verdi [the Italian composer] was still at it; ahead were two of his greatest operas, Otello and Falstaff. And Michelangelo was still there, chisel in hand, well into his 80s. Problem is, we think it’s all over—but life, and sometimes ourselves too, always has a surprise in store.
Maureen calls old age “my blessing”:
I will be 70 on my next birthday! I have finally begun to live my truth. I am fortunate in that I have an appreciation for life that never occurred to me in my younger years. I love every sunrise and sunset. I enjoy watching the bunnies, hummingbirds, lizards, and butterflies. My grandchildren enjoy my company. I am my husband’s best friend. I have a deep spiritual connection. I take nothing for granted.
Life for me is beautiful—not because it is perfect, but because it is lovely even in its imperfection. I have made peace with my past and have no fears for my future. I am grateful for every moment! I will stay here on this amazing planet as long as I can.
Another positive outlook comes from Charlie:
Aging is not a sickness or a disease. No one yet has died knowing all there is to know and enjoying everything there is to enjoy! So why not try to be that first? Optimism, positivism, aggressiveness, regardless of your age, is what it means to be human. Cells may die and energy may lessen. But whatever is left should be used to live and love as fully as possible. We are always and ever in the process of becoming!
Joyce has some tips for healthy living in your eighties:
I read Ezekiel Emanuel’s article [“Why I Hope to Die at 75”] and agree with much of it; I certainly don’t want to have lots of effort made to keep me alive if I should unfortunately end up in a hospital and have no intention of any surgeries.
However, I am 83 and not hoping to die any time soon. I am unusually healthy for my age and do many things to remain so: I take no prescription drugs; I exercise regularly, including weight lifting, walking, and Tai Chi; I eat well, including fresh vegetable juice every day or so; I have good regular connections with family and close friends; I experience good art forms, including playing the piano, singing, movies, novels (currently my husband, who is 85, and I are watching the fine BBC series Lark Rise to Candleford and reading aloud together Margaret Atwood’s novel The Blind Assassin).
Karen is 80 years old and wisely keeps her smartphone at bay:
I go for long walks every day it isn’t raining or unbearably cold. It is my job to keep myself as mobile and as healthy as possible. I don’t wear headphones or keep my phone on when I walk. I want to observe what wonders nature is revealing: sights, sounds, odors. I find the sound of the ocean is restful and restorative. As I near the end of my life, birds, otters, flowers, sunrises and sunsets take on extra meaning for I know I have a limited time in which to enjoy them.
And Nancy shares a great saying:
I am 79 and still teaching college courses—for another year at least, if lucky. Then for as long as I am able, I will continue to volunteer. As a good friend said, “You ought to be all spent up before you go.”
Here’s how an Atlantic author answered that question in September 1858:
Full of anticipations, full of simple, sweet delights, are these [childhood] years, the most valuable of [a] lifetime. Then wisdom and religion are intuitive. But the child hastens to leave its beautiful time and state, and watches its own growth with impatient eye. Soon he will seek to return. The expectation of the future has been disappointed. Manhood is not that free, powerful, and commanding state the imagination had delineated. And the world, too, disappoints his hope. He finds there things which none of his teachers ever hinted to him. He beholds a universal system of compromise and conformity, and in a fatal day he learns to compromise and conform.
But it wasn’t until the 20th century that scientists began to seriously study child development. In our July 1961 issue, Peter B. Neubauer heralded “The Century of the Child”:
Gone is the sentimental view that childhood is an era of innocence and the belief that an innate process of development continuously unfolds along more or less immutable lines. Freud suggested that, from birth on, the child’s development proceeds in a succession of well-defined stages, each with its own distinctive psychic organization, and that at each stage environmental factors can foster health and achievement or bring about lasting retardation and pathology. …
Freudian psychology does not, as some people apparently imagine, provide a set of ready-made prescriptions for the rearing of children. … The complexity of the interactions between mother and child cannot be reduced to rigid formulas. Love and understanding cannot be prescribed, and if they are not genuinely manifested, the most enlightened efforts to do what is best for the child may not be effective.
According to this view, children weren’t miniature adults, but they were preparing for adulthood. Growing up was a process that had to be managed by adults, which made the boundaries of childhood both more important and more nebulous.
A few years later, in our October 1968 issue, Richard Poirier described the backlash to a wave of campus protests as “The War Against the Young.” He implored older adults to take young people’s ideas seriously:
It is perhaps already irrelevant, for example, to discuss the so-called student revolt as if it were an expression of “youth.” The revolt might more properly be taken as a repudiation by the young of what adults call “youth.” It may be an attempt to cast aside the strangely exploitative and at once cloying, the protective and impotizing concept of “youth” which society foists on people who often want to consider themselves adults.
What’s more, Poirier argued, idealism shouldn’t just be the province of the young:
If young people are freeing themselves from a repressive myth of youth only to be absorbed into a repressive myth of adulthood, then youth in its best and truest form, of rebellion and hope, will have been lost to us, and we will have exhausted the best of our natural resources.
But how much redefinition could adulthood handle? In our February 1975 issue, Midge Decter addressed an anxious letter to that generation of student revolutionaries, who—though “no longer entitled to be called children”—had not yet fulfilled the necessary rites of passage for being “fully accredited adults”:
Why have you, the children, found it so hard to take your rightful place in the world? Just that. Why have your parents’ hopes for you come to seem so impossible of attainment?
Some of their expectations were, to be sure, exalted. … But … beneath these throbbing ambitions were all the ordinary—if you will, mundane—hopes that all parents harbor for their children: that you would grow up, come into your own, and with all due happiness and high spirit, carry forward the normal human business of mating, home-building, and reproducing—replacing us, in other words, in the eternal human cycle. And it is here that we find ourselves to be most uneasy, both for you and about you.
Decter blamed this state of affairs on overindulgent parenting: Adults, she argued, had failed their children by working too hard to protect them from unhappiness and by treating their “youthful rebellion” with too much deference.
The next decades’ developments in child psychology gave parents new advice. In our March 1987 issue, Bruno Bettelheim stressed the importance of letting kids guide their own play, without parents pushing them to obey rules they aren’t yet developmentally ready for. And in our February 1990 issue, Robert Karen outlined attachment theorists’ recommendations for how to “enable children to thrive emotionally and come to feel that the world of people is a positive place”—standards measured in part by a baby’s willingness to explore apart from its mother.
Were these parenting styles encouraging kids’ independence, or failing to push them hard enough? A generation after Decter, in Lori Gottlieb’s 2011 Atlantic piece “How to Land Your Kid in Therapy,” she also worried about parental indulgence:
The message we send kids with all the choices we give them is that they are entitled to a perfect life—that, as Dan Kindlon, the psychologist from Harvard, puts it, “if they ever feel a twinge of non-euphoria, there should be another option.” [Psychologist Wendy] Mogel puts it even more bluntly: what parents are creating with all this choice are anxious and entitled kids whom she describes as “handicapped royalty.” …
When I was my son’s age, I didn’t routinely get to choose my menu, or where to go on weekends—and the friends I asked say they didn’t, either. There was some negotiation, but not a lot, and we were content with that. We didn’t expect so much choice, so it didn’t bother us not to have it until we were older, when we were ready to handle the responsibility it requires. But today, [psychologist Jean] Twenge says, “we treat our kids like adults when they’re children, and we infantilize them when they’re 18 years old.”
In Hanna Rosin’s April 2014 article “The Overprotected Kid,” she lamented the loss of independence that once helped kids come of age:
One common concern of parents these days is that children grow up too fast. But sometimes it seems as if children don’t get the space to grow up at all; they just become adept at mimicking the habits of adulthood. As [geographer Roger] Hart’s research shows, children used to gradually take on responsibilities, year by year. They crossed the road, went to the store; eventually some of them got small neighborhood jobs. Their pride was wrapped up in competence and independence, which grew as they tried and mastered activities they hadn’t known how to do the previous year. But these days, middle-class children, at least, skip these milestones. They spend a lot of time in the company of adults, so they can talk and think like them, but they never build up the confidence to be truly independent and self-reliant.
Yet how exactly do you measure “true” independence and self-reliance? And what’s the final milestone that marks the transition to adulthood? Decter suggests it’s settling down with a stable career and a family. But in Julie Beck’s 2016 Atlantic piece, “When Are You Really an Adult?,” she places that rite of passage in historical context:
The economic boom that came after World War II made Leave It to Beaver adulthood more attainable than it had ever been. Even for very young adults. There were enough jobs available for young men, [historian Steven] Mintz writes, that they sometimes didn’t need a high-school diploma to get a job that could support a family. And social mores of the time strongly favored marriage over unmarried cohabitation. Hence: job, spouse, house, kids. But this was a historical anomaly. …
Many young people, [psychologist Jeffrey] Jensen Arnett says, still want these things—to establish careers, to get married, to have kids. (Or some combination thereof.) They just don’t see them as the defining traits of adulthood. Unfortunately, not all of society has caught up, and older generations may not recognize the young as adults without these markers. A big part of being an adult is people treating you like one, and taking on these roles can help you convince others—and yourself—that you’re responsible.
So, adults: What convinced you? Many readers have discussed the topic already, and we’d like to reopen the call for your stories—this time with an eye to the gaps between what it takes to feel like an adult and what it takes to be seen as one. Did you feel you’d become an adult long before you got treated like one? Or have you passed the markers of adulthood without quite feeling you’ve fully grown up? If you’re a parent, when did you feel your kids had grown up, or what will it take to make you certain? Please send your answers—and questions—to firstname.lastname@example.org.
One reader writes:

When I was 11, my mother died. My father had become blind a few years before, from a rare form of glaucoma. He had no choice but to allow me to do things that are normally done by an adult, such as budgeting and paying bills, cooking and cleaning, and other various things. He had to talk to me in an honest way, and make me understand things and rely on my judgment in lots of matters. Other adults did too. I was never a child again after my mother died, and my dad knew it.
Another reader’s mother also died at a pretty young age:
I became an adult when my mother died and my dad started dating four months later. I was 20 years old. Once he had a new woman in his life (whom he is still married to now) and essentially a new family, I was out. We had really started to be at odds the year before, when I had started to do things my way instead of his way. He had pretty much taken for granted that I could make it in this world without his advice or anything.
For this next reader, it was boarding school:
I’m not sure the end of childhood is the sort of thing that one can pinpoint; seems to me there were rather a number of distinct rites of passage. The first was when I went to boarding school, around age 10. When my parents dropped me off that first day, I knew I was on my own. Calling home to say they should come get you was not an option; my parents made this pretty clear, but it was not necessary. I knew.
Another reader had to go abroad to step out of childhood:
When I was an exchange student, my father came down to visit. There I was, living independently in a foreign country at 17. I could speak the language fluently and had to navigate us for him.
I was 18 and turned down an Ivy League school and, using my own money, moved to Italy to live with my 25-year-old girlfriend. I think my parents would say I became an adult when, at 26, I handed them a copy of my will and let them know I had been chosen to deploy in a war zone as a civilian alongside a joint counter-terrorism/counter-insurgency military unit. Strangely enough, they weren’t happy with either bit of news.
Rachael was 18 and had just started college:
I received an offer for health insurance in the mail. It went to my home address, and my mom was absolutely thrilled at the notion that I would be off her employer-supplied (but expensive) health insurance. I began paying for my own health insurance at that time, but I also let her know that I wouldn’t allow her to claim me as a dependent on her income taxes anymore. I paid my own expenses (insurance, college, etc.). I laugh now when I think of “kids” still being on their parents’ insurance until they’re 26.
This next reader is almost 26:
I became an adult in the past year or so, when my dad learned how much was in my savings account and I mentioned my credit score in the context of considering new car costs. I think my parents assumed up to that point that I was just scraping by and blowing my money irresponsibly, and they were impressed with the degree to which I was caring for myself.
I think calling my dad and asking for advice has helped him to see me differently as well. There’s something about discussing investments and trying to decide on insurance plans that I would assume makes it hard to keep seeing your kid as a kid.
Another reader also nods to financial independence: “I became an adult when I began to pick up the check for my parents by surreptitiously passing plastic to the maître d’ early in the meal.” This next reader entered adulthood in a brutal fashion:
I was 20 and began working at the same factory as my father did. He was in maintenance as an industrial electrician. There had been a summer program for employees’ children and I worked out well enough that I was hired at the end of the summer. He was proud that I carried my weight.
However, three years into it and just after my 23rd birthday, my hand got caught in a take-up roll for a large paper machine and I was flung around like a rag doll. With both femurs and my left ulna, left radius, and left humerus broken, I spent months recovering.
But I kept a good attitude, believing falsely that I would be back to my normal self. My dad told me that he could never have had such a good attitude having gone through what I did.
This next reader also defined his adulthood alongside his dad’s admiration:
I became an adult when I joined the ROTC program my freshman year of college to appease my dad. He got a glimmer of pride in his eyes, saw it as me taking initiative, and was proud that I seemed interested in serving my country. I wasn’t really; I only did it to get him off my back so I could do drugs and other hedonistic things in college. But it was good to have his approval for once and no stress.
The first moment of adulthood was rather mundane for this reader: “Probably the day Mom asked if she could come to my apartment 45 minutes away to use my washing machine because hers was broken.” For another reader, the moment was different for each of his parents:
For my dad, I’d say the writing was on the wall around the time I was 16 and it became apparent that I was physically stronger than he was. He maintained the power of the purse for a few years after, but, considering that it was at about the same time when he’d occasionally offer me a beer, I’m inclined to accept that I was an adult in his eyes.
Mom? Damn, who knows what she really thinks about anything, but I’m guessing she first fully acknowledged my adulthood not through any of the accomplishments or milestones, but when, in 2003, I bought her a car and she therefore had a tangible symbol that could be seen and acknowledged by others.
This next reader’s parents need grandkids before truly considering her an adult:
I’m a 31-year-old married attorney homeowner with no kids. I don’t think my parents look at me like an adult. I don’t think they will until I have kids. This is most obviously manifested in my parents’ constant amnesia of me being a lawyer. They will be having discussions about some legal consideration, I’ll weigh in, yet my opinion is given no weight whatsoever. Though maybe I’m just being whiny that they aren’t taking me seriously. Is that the same thing as not seeing me as an adult? They seem very proud of me, but that doesn’t seem akin to viewing me as an adult either.
Jason might never become an adult in his parents’ eyes:
In some ways, my sister and I will always be “kids.” My mom will randomly start doing things to my hair or bring over something like saucepans that we already have and don’t need. My dad critiques my yard and is always convinced something is wrong with my car (there isn’t, Dad, it’s fine!). I just laugh at this stuff, though; it doesn’t really bother me.
It bothers Doug, though; he finds the topic a “sore spot”:
I’m not convinced my parents believe I’m an adult. I’m 32, the youngest of three, the second most-educated (sister has multiple grad degrees), and the highest earner. I’m constantly getting asked if I’m saving enough or if I need help with anything. Meanwhile, I never ask for help, but both of my older sisters constantly need help. I’m the one who made sure my dad got power of attorney for my grandmother before she totally lost it, and who opened college accounts for my nieces and nephews.
Henry David Thoreau is something of a poster child for solitude. In his essay “Walking,” published just after his death in our June 1862 issue, Thoreau made the case “for absolute freedom and wildness … to regard man as an inhabitant, or a part and parcel of Nature, rather than a member of society”:
We should go forth on the shortest walk, perchance, in the spirit of undying adventure, never to return, prepared to send back our embalmed hearts only as relics to our desolate kingdoms. If you are ready to leave father and mother, and brother and sister, and wife and child and friends, and never see them again—if you have paid your debts, and made your will, and settled all your affairs, and are a free man—then you are ready for a walk.
Thoreau himself was “a genuine American weirdo,” as Jedediah Purdy recently put it, and solitude suited him: His relentless individualism irritated his friends, including Atlantic co-founder Ralph Waldo Emerson, who described Thoreau’s habit of contradicting every point in pursuit of his own ideals as “a little chilling to the social affections.” Emerson may have had Thoreau in mind when, in our December 1857 issue, he mused that “many fine geniuses” felt the need to separate themselves from the world, to keep it from intruding on their thoughts. Yet he questioned whether such withdrawal was good for a person, not to mention for society as a whole:
This banishment to the rocks and echoes no metaphysics can make right or tolerable. This result is so against nature, such a half-view, that it must be corrected by a common sense and experience. “A man is born by the side of his father, and there he remains.” A man must be clothed with society, or we shall feel a certain bareness and poverty, as of a displaced and unfurnished member. He is to be dressed in arts and institutions, as well as body-garments. Now and then a man exquisitely made can live alone, and must; but coop up most men, and you undo them. …
When a young barrister said to the late Mr. Mason, “I keep my chamber to read law,”—“Read law!” replied the veteran, “’tis in the courtroom you must read law.” Nor is the rule otherwise for literature. If you would learn to write, ’tis in the street you must learn it. Both for the vehicle and for the aims of fine arts, you must frequent the public square. … Society cannot do without cultivated men.
Emerson concluded that the key to effective, creative thought was to maintain a balance between solitary reflection and social interaction: “The conditions are met, if we keep our independence, yet do not lose our sympathy.”
Four decades later, in our November 1901 issue, Paul Elmer More identified a radical sympathy in the work of Nathaniel Hawthorne, which stemmed, he argued, from Hawthorne’s own “imperial loneliness of soul”:
His words have at last expressed what has long slumbered in human consciousness. … Not with impunity had the human race for ages dwelt on the eternal welfare of the soul; for from such meditation the sense of personal importance had become exacerbated to an extraordinary degree. … And when the alluring faith attendant on this form of introspection paled, as it did during the so-called transcendental movement into which Hawthorne was born, there resulted necessarily a feeling of anguish and bereavement more tragic than any previous moral stage through which the world had passed. The loneliness of the individual, which had been vaguely felt and lamented by poets and philosophers of the past, took on a poignancy altogether unexampled. It needed but an artist with the vision of Hawthorne to represent this feeling as the one tragic calamity of mortal life, as the great primeval curse of sin … the universal protest of the human heart.
Fast-forward a century, and what More described as “the solitude that invests the modern world” had only deepened, while “the sense of personal importance” gained new narcissistic vehicles in the form of social-media tools that let us “connect” online while keeping our real, messy selves as private as we choose. Which is not necessarily a bad thing: In some ways, the internet looks like the perfect way to achieve Emerson’s ideal balance between independent thought and social engagement.
But as Stephen Marche argued in his May 2012 cover story, “Is Facebook Making Us Lonely?,” that balance may be an illusion:

A considerable part of Facebook’s appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy. Our online communities become engines of self-image, and self-image becomes the engine of community. The real danger with Facebook is not that it allows us to isolate ourselves, but that by mixing our appetite for isolation with our vanity, it threatens to alter the very nature of solitude.
The new isolation is not of the kind that Americans once idealized, the lonesomeness of the proudly nonconformist, independent-minded, solitary stoic, or that of the astronaut who blasts into new worlds. Facebook’s isolation is a grind. What’s truly staggering about Facebook usage is not its volume—750 million photographs uploaded over a single weekend—but the constancy of the performance it demands. More than half its users—and one of every 13 people on Earth is a Facebook user—log on every day. Among 18-to-34-year-olds, nearly half check Facebook minutes after waking up, and 28 percent do so before getting out of bed. The relentlessness is what is so new, so potentially transformative. Facebook never takes a break. We never take a break.
The same year, Brian Patrick Eha also noted the changing nature of solitude—particularly the kind of solitude achieved by wearing headphones in public. “We are each of us cocooned in noise,” he wrote, “and can escape from one another’s only when immersed in our own.” For both Marche and Eha, the problem with technology is not its tendency to isolate people so much as the way it works to prevent us—through a sense of connection or simply through distraction—from fully experiencing that isolation and all it entails.
And as the author Dorthe Nors explained in 2014 for our By Heart series of writer interviews, a full experience of isolation has serious benefits:
The artistic process unfolds in the lonely hours. That’s when the work happens. You have to control the creative energy that you’ve got. You have to discipline yourself to fulfill it. And that work only happens alone.
Solitude, I think, heightens artistic receptivity in a way that can be challenging and painful. When you sit there, alone and working, you get thrown back on yourself. Your life and your emotions, what you think and what you feel, are constantly being thrown back on you. And then the “too much humanity” feeling is even stronger: you can’t run away from yourself. You can’t run away from your emotions and your memory and the material you’re working on. Artistic solitude is a decision to turn and face these feelings, to sit with them for long periods of time.
For Nors, as for Hawthorne, solitude not only enables personal reflection, but also grants access to some deeper, more universal strain of human feeling. That’s the same lesson that Nathaniel Rich, writing in our latest issue, took from the story of Christopher Knight, who spent 27 years living utterly alone in the woods of Maine:
Since his arrest in April 2013, Knight has agreed to be interviewed by a single journalist. Michael Finkel published an article about him in GQ in 2014 and has now written a book, The Stranger in the Woods, that combines an account of Knight’s story with an absorbing exploration of solitude and man’s eroding relationship with the natural world. Though the “stranger” in the title is Knight, one closes the book with the sense that Knight, like all seers, is the only sane person in a world gone insane—that modern civilization has made us strangers to ourselves.
Yet a total withdrawal from civilization can’t be the answer—nor, at a political moment when empathy and understanding seem ever-more-urgently needed, can walling yourself off from other people’s ideas be wise. In February, Emma Green offered this critique of a new book by Rod Dreher, a conservative Christian thinker who calls for like-minded members of his faith to withdraw from public life into communities of their own:
Dreher wrote The Benedict Option for people like him—those who share his faith, convictions, and feelings of cultural alienation. But even those who might wish to join Dreher’s radical critique of American culture, people who also feel pushed out and marginalized by the shallowness of modern life, may feel unable to do so. Many people, including some Christians, feel that knowing, befriending, playing with, and learning alongside people who are different from them adds to their faith, not that it threatens it. For all their power and appeal, Dreher’s monastery walls may be too high, and his mountain pass too narrow.
So, tell us about your experience: How do you incorporate solitary reflection into a 21st-century lifestyle? Can you see communitarian benefits in spending more time on your own—or, on the other hand, point to what society loses when more people spend more time alone? Please send your answers (and your questions) to email@example.com.
Coverage of the president’s pressure on Ukraine suggests the media learned nothing from 2016.
If you’ve paid any attention to press retrospectives on the 2016 election, you’ve seen the term false equivalence. It refers to the mismatch between a long-standing procedural instinct of the press and the current realities of the Era of Trump.
Under normal circumstances, the press’s strong preference is for procedural balance. The program’s supporters say this, its critics say that, so we’ll quote both sides of it and leave it to you, the public, to decide who is right.
This approach has the obvious virtue of seeming fair, as a judge is fair in letting the prosecution and defense each make its case. It has a less obvious but very important advantage for news organizations, that of sparing reporters the burden of having to say, “Actually, we think this particular side is right.” By definition, most reporters most of the time are covering subjects in which we’re not expert. Is the latest interest-rate move by the Fed a good idea? Or a bad one? I personally couldn’t tell you. So if I am covering the story, especially on a deadline, I’ll want to give you quotes from people “on both sides,” and leave it there.
The president reportedly sought the help of a foreign government against Joe Biden.
The president of the United States reportedly sought the help of a foreign government against an American citizen who might challenge him for his office. This is the single most important revelation in a scoop by The Wall Street Journal, and if it is true, then President Donald Trump should be impeached and removed from office immediately.
Until now, there was room for reasonable disagreement over impeachment as both a matter of politics and a matter of tactics. The Mueller report revealed despicably unpatriotic behavior by Trump and his minions, but it did not trigger a political judgment with a majority of Americans that it warranted impeachment. The Democrats, for their part, remained unwilling to risk their new majority in Congress on a move destined to fail in a Republican-controlled Senate.
Astronomers have found radio-emitting structures jutting out from our galaxy’s black hole.
Farhad Yusef-Zadeh was observing the center of the Milky Way galaxy in radio waves, looking for the presence of faint stars, when he saw it: a spindly structure giving off its own radio emissions. The filament-like feature was probably a glitch in the telescope, or something clouding the field of view, he decided. It shouldn’t be here, he thought, and stripped it out of his data.
But the mystery filament kept showing up, and soon Yusef-Zadeh found others. What the astronomer had mistaken for an imperfection turned out to be an entire population of cosmic structures at the heart of the galaxy.
More than 100 filaments have been detected since Yusef-Zadeh’s first encounter in the early 1980s. Astronomers can’t completely explain them, but they have given them familiar labels, naming them after the earthly things they resemble: the pelican, the mouse, the snake. The menagerie of filaments is clustered around the supermassive black hole at the center of our galaxy. “They haven’t been found elsewhere,” says Yusef-Zadeh, a physics and astronomy professor at Northwestern University.
A term that once described a vital tradition within the Christian faith now means something else entirely.
Once a month or so Tommy Kidd and I get together for lunch at our favorite taco joint. Over the carnitas and barbacoa and guacamole we catch up on how our writing projects are going, and perhaps gossip a bit about what’s happening at Baylor University, where we both work. And more often than not, we end up talking about our complicated relationship with American evangelical Christianity. Because the future of that movement, which is our movement, matters to us—and, we think, matters to America.
Tommy is a Southern Baptist; I’m an Episcopalian, in the Anglican tradition descending from the Church of England. Very different things, one might think, and in some ways one would be right. Where Tommy’s church has a praise band, mine has organ music; the central event on Sunday morning at his church is the sermon, while at mine it’s the Eucharist. And yet both of our traditions are closely connected, if in different ways, to evangelicalism.
A lot rides on how parents present the activity to their kids.
They can be identified by their independent-bookstore tote bags, their “Book Lover” mugs, or—most reliably—by the bound, printed stacks of paper they flip through on their lap. They are, for lack of a more specific term, readers.
Joining their tribe seems simple enough: Get a book, read it, and voilà! You’re a reader—no tote bag necessary. But behind that simple process is a question of motivation—of why some people grow up to derive great pleasure from reading, while others don’t. That why is consequential—leisure reading has been linked to a range of good academic and professional outcomes—as well as difficult to fully explain. But a chief factor seems to be the household one is born into, and the culture of reading that parents create within it.
Caught between a brutal meritocracy and a radical new progressivism, a parent tries to do right by his children while navigating New York City’s schools.
To be a parent is to be compromised. You pledge allegiance to justice for all, you swear that private attachments can rhyme with the public good, but when the choice comes down to your child or an abstraction—even the well-being of children you don’t know—you’ll betray your principles to the fierce unfairness of love. Then life takes revenge on the conceit that your child’s fate lies in your hands at all. The organized pathologies of adults, including yours—sometimes known as politics—find a way to infect the world of children. Only they can save themselves.
Our son underwent his first school interview soon after turning 2. He’d been using words for about a year. An admissions officer at a private school with brand-new, beautifully and sustainably constructed art and dance studios gave him a piece of paper and crayons. While she questioned my wife and me about our work, our son drew a yellow circle over a green squiggle.
“Dave, stop. Stop, will you? Stop, Dave. Will you stop, Dave?” So the supercomputer HAL pleads with the implacable astronaut Dave Bowman in a famous and weirdly poignant scene toward the end of Stanley Kubrick’s 2001: A Space Odyssey. Bowman, having nearly been sent to a deep-space death by the malfunctioning machine, is calmly, coldly disconnecting the memory circuits that control its artificial “brain.” “Dave, my mind is going,” HAL says, forlornly. “I can feel it. I can feel it.”
I can feel it, too. Over the past few years I’ve had an uncomfortable sense that someone, or something, has been tinkering with my brain, remapping the neural circuitry, reprogramming the memory. My mind isn’t going—so far as I can tell—but it’s changing. I’m not thinking the way I used to think. I can feel it most strongly when I’m reading. Immersing myself in a book or a lengthy article used to be easy. My mind would get caught up in the narrative or the turns of the argument, and I’d spend hours strolling through long stretches of prose. That’s rarely the case anymore. Now my concentration often starts to drift after two or three pages. I get fidgety, lose the thread, begin looking for something else to do. I feel as if I’m always dragging my wayward brain back to the text. The deep reading that used to come naturally has become a struggle.
The country is offering citizenship to Jews whose families it expelled in the 15th century.
The clock is ticking down on one of the world’s most unusual immigration proposals—Spain’s offer of citizenship to Jews whose families it expelled more than 500 years ago.
In 1492, the year Christopher Columbus set sail, Spain’s Edict of Expulsion gave Jews a stark choice: Convert, depart, or die. At the time, Spain’s Jewish community was one of the largest in the world, though their numbers had diminished due to a series of massacres and mass conversions 100 years earlier. Jews had lived on the Iberian Peninsula for more than 1,700 years, producing philosophers, poets, diplomats, physicians, scholars, translators, and merchants.
Historians still debate the number of Jews expelled; some estimate 40,000, others 100,000 or more. Those who fled sought exile in places that would have them—Italy, North Africa, the Netherlands, and eventually the Ottoman Empire. Many continued to speak Ladino, a variant of 15th-century Spanish, and treasure elements of Spanish culture. Tens of thousands stayed, but converted, and remained vulnerable to the perils of the Inquisition. How many Jews were killed remains unclear, but a widely accepted estimate is 2,000 people during the first two decades of the Inquisition, with thousands more tortured and killed throughout its full course.
Accepting the reality about the president’s disordered personality is important—even essential.
During the 2016 campaign, I received a phone call from an influential political journalist and author, who was soliciting my thoughts on Donald Trump. Trump’s rise in the Republican Party was still something of a shock, and he wanted to know the things I felt he should keep in mind as he went about the task of covering Trump.
At the top of my list: Talk to psychologists and psychiatrists about the state of Trump’s mental health, since I considered that to be the most important thing when it came to understanding him. It was Trump’s Rosetta stone.
I wasn’t shy about making the same case publicly. During a July 14, 2016, appearance on C-SPAN’s Washington Journal, for example, I responded to a pro-Trump caller who was upset that I opposed Trump despite my having been a Republican for my entire adult life and having served in the Reagan and George H. W. Bush administrations and the George W. Bush White House.
More comfortable online than out partying, post-Millennials are safer, physically, than adolescents have ever been. But they’re on the brink of a mental-health crisis.
One day last summer, around noon, I called Athena, a 13-year-old who lives in Houston, Texas. She answered her phone—she’s had an iPhone since she was 11—sounding as if she’d just woken up. We chatted about her favorite songs and TV shows, and I asked her what she likes to do with her friends. “We go to the mall,” she said. “Do your parents drop you off?,” I asked, recalling my own middle-school days, in the 1980s, when I’d enjoy a few parent-free hours shopping with my friends. “No—I go with my family,” she replied. “We’ll go with my mom and brothers and walk a little behind them. I just have to tell my mom where we’re going. I have to check in every hour or every 30 minutes.”
Those mall trips are infrequent—about once a month. More often, Athena and her friends spend time together on their phones, unchaperoned. Unlike the teens of my generation, who might have spent an evening tying up the family landline with gossip, they talk on Snapchat, the smartphone app that allows users to send pictures and videos that quickly disappear. They make sure to keep up their Snapstreaks, which show how many days in a row they have Snapchatted with each other. Sometimes they save screenshots of particularly ridiculous pictures of friends. “It’s good blackmail,” Athena said. (Because she’s a minor, I’m not using her real name.) She told me she’d spent most of the summer hanging out alone in her room with her phone. That’s just the way her generation is, she said. “We didn’t have a choice to know any life without iPads or iPhones. I think we like our phones more than we like actual people.”