Reporter's Notebook

Question Your Answers

Since 1857, The Atlantic has been challenging established answers with tough questions. In this video, actor Michael K. Williams—best known as Omar Little from The Wire—wrestles with a question of his own: Is he being typecast?

What are you asking yourself about the world and its conventional wisdom? We want to hear your questions—and your thoughts on where to start finding the answers: hello@theatlantic.com. Each week, we’ll update this thread with a new question and your responses.


When a Long Life Is Too Much to Bear

Living a long life seems the obvious goal for most people, and many of them, like Dylan Thomas, raged against the dying of the light. Others—like the transhumanists that Olga featured recently—want to transcend death entirely.

But the vast majority of the readers who responded to our note asking “Is a Long Life Really Worth It?” answered “nope, not really.” Genie is in the “maybe” camp:

Well, like most things, the answer is not a simple yes or no; it depends—on so many factors, some of which we can control (e.g. not smoking) and can’t control (e.g. our genetic make-up). If you’re in good health physically and have all your faculties and some purposeful work or hobby, or just something you really enjoyed doing, then maybe it might be a good idea to live a long life. But those are a lot of ifs.

Another reader, John, looks to human connections:

Health is essential to making survival good, but it also helps to have a caring partner, for companionship and support. I am biased, because at 81, I have my health and a good wife. I’d like to live past 100 if these conditions remain. But if I become disabled, chronically ill, or alone, life is unlikely to be worth it.

Rita has a bleaker outlook:

Looking at my genetics, I’m starting to think I may live a long time. I’m not yet 70, but I can probably expect to go until 95 at least.

This doesn’t fill me with joy. Who’s going to look after me when my eyesight starts to crap out and I get weaker? Where’s the money going to come from to continue to pay my bills? These are not minor questions. Their answers, as far as I can see, are “nobody” and “nowhere.”

And anyway, it’s not as if I can look forward to hiking in the desert or exploring foreign cities in my extreme old age. Nor will many of us be directing films or conducting research in our nineties. What most of us can anticipate is day after day staring at a TV set, wondering if anyone is coming for a visit.

The initial wave of reader response to our question “Is a Long Life Really Worth It?” was overwhelmingly “meh, not so much.” But since then, many sexagenarians, septuagenarians, octogenarians, and nonagenarians have emailed more enthusiastic outlooks on old age. Here’s Jim:

A thought-provoking discussion, but it really misses the key point. I turn 65 in a couple of months, but I don’t expect to “retire” at 65—or ever. I’m fit and healthy and having the greatest fun of my life at the head of a fast-growing business. In a quarter century, if still alive, I might have to slow down a bit, but there will still be something useful for me to do.

The founding pastor of our church has poor hearing and is almost blind, but a few weeks ago he preached a great sermon to celebrate his 100th birthday. He still contributes in other ways as well.

Not everyone can continue working, but there is a huge need for volunteers in areas that do not require physical agility. Unless totally senile—and that’s something that will never happen to most of us—we all have something to offer.

Maggie is a quarter century older than Jim but has a very similar view:

Life isn’t over because I’m no longer “useful.” I’m 90 and have spent the last decade trying to be okay with not always being the helping hand. Though my greatest joy has come from knowing I have touched another’s life by being helpful, I have to remember that I am still touching people’s lives as long as I am alive. I’m so pleasantly surprised that people want to be around me.

I was pretty grim when I had to stop driving because a slight accident damaged the car beyond repair. My health also gave way and I was briefly hospitalized. It was a big adjustment. But now I am walking, exercising at the gym once a week, taking part in demonstrations, and forgetting about how old I am. I don’t see any other options.


Here’s how an Atlantic author answered the question of when childhood ends, back in September 1858:

Full of anticipations, full of simple, sweet delights, are these [childhood] years, the most valuable of [a] lifetime. Then wisdom and religion are intuitive. But the child hastens to leave its beautiful time and state, and watches its own growth with impatient eye. Soon he will seek to return. The expectation of the future has been disappointed. Manhood is not that free, powerful, and commanding state the imagination had delineated. And the world, too, disappoints his hope. He finds there things which none of his teachers ever hinted to him. He beholds a universal system of compromise and conformity, and in a fatal day he learns to compromise and conform.

But it wasn’t until the 20th century that scientists began to seriously study child development. In our July 1961 issue, Peter B. Neubauer heralded “The Century of the Child”:

Gone is the sentimental view that childhood is an era of innocence and the belief that an innate process of development continuously unfolds along more or less immutable lines. Freud suggested that, from birth on, the child’s development proceeds in a succession of well-defined stages, each with its own distinctive psychic organization, and that at each stage environmental factors can foster health and achievement or bring about lasting retardation and pathology. …

Freudian psychology does not, as some people apparently imagine, provide a set of ready-made prescriptions for the rearing of children. … The complexity of the interactions between mother and child cannot be reduced to rigid formulas. Love and understanding cannot be prescribed, and if they are not genuinely manifested, the most enlightened efforts to do what is best for the child may not be effective.

According to this view, children weren’t miniature adults, but they were preparing for adulthood. Growing up was a process that had to be managed by adults, which made the boundaries of childhood both more important and more nebulous.

A few years later, in our October 1968 issue, Richard Poirier described the backlash to a wave of campus protests as “The War Against the Young.” He implored older adults to take young people’s ideas seriously:

It is perhaps already irrelevant, for example, to discuss the so-called student revolt as if it were an expression of “youth.” The revolt might more properly be taken as a repudiation by the young of what adults call “youth.” It may be an attempt to cast aside the strangely exploitative and at once cloying, the protective and impotizing concept of “youth” which society foists on people who often want to consider themselves adults.

What’s more, Poirier argued, idealism shouldn’t just be the province of the young:

If young people are freeing themselves from a repressive myth of youth only to be absorbed into a repressive myth of adulthood, then youth in its best and truest form, of rebellion and hope, will have been lost to us, and we will have exhausted the best of our natural resources.

But how much redefinition could adulthood handle? In our February 1975 issue, Midge Decter addressed an anxious letter to that generation of student revolutionaries, who—though “no longer entitled to be called children”—had not yet fulfilled the necessary rites of passage for being “fully accredited adults”:

Why have you, the children, found it so hard to take your rightful place in the world? Just that. Why have your parents’ hopes for you come to seem so impossible of attainment?

Last year, Julie Beck wrote a popular piece centered on the question, “When Are You Really an Adult?” She went beyond the biological and legal answers to delve into the more subjective realms of culture and personal experience. The many markers of adulthood were then illustrated in the variety of stories we collected from readers—clustered around commonplace themes of financial independence, parenthood, and divorce, but also less common experiences such as losing a parent at a young age, rape, and dodging the wrath of a dictatorship.

This week, we posed a related question to readers: “When does childhood end?”—and, more interestingly, “When did you become an adult in your parents’ eyes?,” a version that adds a layer of subjectivity to an already subjective topic. Here’s a response from Terri:

When I was 11, my mother died. My father had become blind a few years before, from a rare form of glaucoma. He had no choice but to allow me to do things that are normally done by an adult, such as budgeting and paying bills, cooking and cleaning, and other various things. He had to talk to me in an honest way, and make me understand things and rely on my judgement in lots of matters. Other adults did too. I was never a child again after my mother died and my dad knew it.

Another reader also lost their mother at a fairly young age:

I became an adult when my mother died and my dad started dating four months later. I was 20 years old. Once he had a new woman in his life (whom he is still married to now) and essentially a new family, I was out. We had really started to be at odds the year before, when I had started to do things my way instead of his way. He had pretty much taken for granted that I could make it in this world without his advice or anything.

For this next reader, it was boarding school:

I’m not sure the end of childhood is the sort of thing that one can pinpoint; seems to me there were rather a number of distinct rites of passage. The first was when I went to boarding school, around age 10. When my parents dropped me off that first day, I knew I was on my own. Calling home to say they should come get you was not an option; my parents made this pretty clear, but it was not necessary. I knew.

Another reader had to go abroad to step out of childhood:

When I was an exchange student, my father came down to visit. There I was, living independently in a foreign country at 17. I could speak the language fluently and had to navigate us for him.

George also left the country to become an adult:


Henry David Thoreau is something of a poster child for solitude. In his essay “Walking,” published just after his death in our June 1862 issue, Thoreau made the case “for absolute freedom and wildness … to regard man as an inhabitant, or a part and parcel of Nature, rather than a member of society”:

We should go forth on the shortest walk, perchance, in the spirit of undying adventure, never to return, prepared to send back our embalmed hearts only as relics to our desolate kingdoms. If you are ready to leave father and mother, and brother and sister, and wife and child and friends, and never see them again—if you have paid your debts, and made your will, and settled all your affairs, and are a free man—then you are ready for a walk.

Thoreau himself was “a genuine American weirdo,” as Jedediah Purdy recently put it, and solitude suited him: His relentless individualism irritated his friends, including Atlantic co-founder Ralph Waldo Emerson, who described Thoreau’s habit of contradicting every point in pursuit of his own ideals as “a little chilling to the social affections.” Emerson may have had Thoreau in mind when, in our December 1857 issue, he mused that “many fine geniuses” felt the need to separate themselves from the world, to keep it from intruding on their thoughts. Yet he questioned whether such withdrawal was good for a person, not to mention for society as a whole:

Thoreau in his second and final photographic sitting, August 1861 (Wikimedia)

This banishment to the rocks and echoes no metaphysics can make right or tolerable. This result is so against nature, such a half-view, that it must be corrected by a common sense and experience. “A man is born by the side of his father, and there he remains.” A man must be clothed with society, or we shall feel a certain bareness and poverty, as of a displaced and unfurnished member. He is to be dressed in arts and institutions, as well as body-garments. Now and then a man exquisitely made can live alone, and must; but coop up most men, and you undo them. …

When a young barrister said to the late Mr. Mason, “I keep my chamber to read law,”—“Read law!” replied the veteran, “’tis in the courtroom you must read law.” Nor is the rule otherwise for literature. If you would learn to write, ’tis in the street you must learn it. Both for the vehicle and for the aims of fine arts, you must frequent the public square. … Society cannot do without cultivated men.

Emerson concluded that the key to effective, creative thought was to maintain a balance between solitary reflection and social interaction: “The conditions are met, if we keep our independence, yet do not lose our sympathy.”

Four decades later, in our November 1901 issue, Paul Elmer More identified a radical sympathy in the work of Nathaniel Hawthorne, which stemmed, he argued, from Hawthorne’s own “imperial loneliness of soul”:

Hester Prynne, the lonely protagonist of Hawthorne’s The Scarlet Letter (Wikimedia)

His words have at last expressed what has long slumbered in human consciousness. … Not with impunity had the human race for ages dwelt on the eternal welfare of the soul; for from such meditation the sense of personal importance had become exacerbated to an extraordinary degree. … And when the alluring faith attendant on this form of introspection paled, as it did during the so-called transcendental movement into which Hawthorne was born, there resulted necessarily a feeling of anguish and bereavement more tragic than any previous moral stage through which the world had passed. The loneliness of the individual, which had been vaguely felt and lamented by poets and philosophers of the past, took on a poignancy altogether unexampled. It needed but an artist with the vision of Hawthorne to represent this feeling as the one tragic calamity of mortal life, as the great primeval curse of sin … the universal protest of the human heart.

Fast-forward a century, and what More described as “the solitude that invests the modern world” had only deepened—while “the sense of personal importance” gained new narcissistic vehicles in the form of social-media tools that let us “connect” online while keeping our real, messy selves as private as we choose. Which is not a bad thing: In some ways, the internet looks like the perfect way to achieve Emerson’s ideal balance between independent thought and social engagement.

In our May 2012 issue, however, Stephen Marche wondered whether the rise of social media is making us lonely:

A considerable part of Facebook’s appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy. Our online communities become engines of self-image, and self-image becomes the engine of community. The real danger with Facebook is not that it allows us to isolate ourselves, but that by mixing our appetite for isolation with our vanity, it threatens to alter the very nature of solitude.

The new isolation is not of the kind that Americans once idealized, the lonesomeness of the proudly nonconformist, independent-minded, solitary stoic, or that of the astronaut who blasts into new worlds. Facebook’s isolation is a grind. What’s truly staggering about Facebook usage is not its volume—750 million photographs uploaded over a single weekend—but the constancy of the performance it demands. More than half its users—and one of every 13 people on Earth is a Facebook user—log on every day. Among 18-to-34-year-olds, nearly half check Facebook minutes after waking up, and 28 percent do so before getting out of bed. The relentlessness is what is so new, so potentially transformative. Facebook never takes a break. We never take a break.