Notes

First Drafts, Conversations, Stories in Progress

Question Your Answers

Since 1857, The Atlantic has been challenging established answers with tough questions. In this video, actor Michael K. Williams—best known as Omar Little from The Wire—wrestles with a question of his own: Is he being typecast?

What are you asking yourself about the world and its conventional wisdom? We want to hear your questions—and your thoughts on where to start finding the answers: hello@theatlantic.com. Each week, we’ll update this thread with a new question and your responses.


The Ever-Shifting End of Childhood


Here’s how an Atlantic author answered that question in September 1858:

Full of anticipations, full of simple, sweet delights, are these [childhood] years, the most valuable of [a] lifetime. Then wisdom and religion are intuitive. But the child hastens to leave its beautiful time and state, and watches its own growth with impatient eye. Soon he will seek to return. The expectation of the future has been disappointed. Manhood is not that free, powerful, and commanding state the imagination had delineated. And the world, too, disappoints his hope. He finds there things which none of his teachers ever hinted to him. He beholds a universal system of compromise and conformity, and in a fatal day he learns to compromise and conform.

But it wasn’t until the 20th century that scientists began to seriously study child development. In our July 1961 issue, Peter B. Neubauer heralded “The Century of the Child”:

Gone is the sentimental view that childhood is an era of innocence and the belief that an innate process of development continuously unfolds along more or less immutable lines. Freud suggested that, from birth on, the child’s development proceeds in a succession of well-defined stages, each with its own distinctive psychic organization, and that at each stage environmental factors can foster health and achievement or bring about lasting retardation and pathology. …

Freudian psychology does not, as some people apparently imagine, provide a set of ready-made prescriptions for the rearing of children. … The complexity of the interactions between mother and child cannot be reduced to rigid formulas. Love and understanding cannot be prescribed, and if they are not genuinely manifested, the most enlightened efforts to do what is best for the child may not be effective.

According to this view, children weren’t miniature adults, but they were preparing for adulthood. Growing up was a process that had to be managed by adults, which made the boundaries of childhood both more important and more nebulous.

A few years later, in our October 1968 issue, Richard Poirier described the backlash to a wave of campus protests as “The War Against the Young.” He implored older adults to take young people’s ideas seriously:

It is perhaps already irrelevant, for example, to discuss the so-called student revolt as if it were an expression of “youth.” The revolt might more properly be taken as a repudiation by the young of what adults call “youth.” It may be an attempt to cast aside the strangely exploitative and at once cloying, the protective and impotizing concept of “youth” which society foists on people who often want to consider themselves adults.

What’s more, Poirier argued, idealism shouldn’t just be the province of the young:

If young people are freeing themselves from a repressive myth of youth only to be absorbed into a repressive myth of adulthood, then youth in its best and truest form, of rebellion and hope, will have been lost to us, and we will have exhausted the best of our natural resources.

But how much redefinition could adulthood handle? In our February 1975 issue, Midge Decter addressed an anxious letter to that generation of student revolutionaries, who—though “no longer entitled to be called children”—had not yet fulfilled the necessary rites of passage for being “fully accredited adults”:

Why have you, the children, found it so hard to take your rightful place in the world? Just that. Why have your parents’ hopes for you come to seem so impossible of attainment?

The initial wave of reader response to our question “Is a long life really worth it?” was overwhelmingly “meh, not so much.” But since then, many sexagenarians, septuagenarians, octogenarians, and nonagenarians have emailed more enthusiastic outlooks on old age. Here’s Jim:

A thought-provoking discussion, but it really misses the key point. I turn 65 in a couple of months, but I don’t expect to “retire” at 65—or ever. I’m fit and healthy and having the greatest fun of my life at the head of a fast-growing business. In a quarter century, if still alive, I might have to slow down a bit, but there will still be something useful for me to do.

The founding pastor of our church has poor hearing and is almost blind, but a few weeks ago he preached a great sermon to celebrate his 100th birthday. He still contributes in other ways as well.

Not everyone can continue working, but there is a huge need for volunteers in areas that do not require physical agility. Unless totally senile—and that’s something that will never happen to most of us—we all have something to offer.

Maggie is a quarter century older than Jim but has a very similar view:

Life isn’t over because I’m no longer “useful.” I’m 90 and have spent the last decade trying to be okay with not always being the helping hand. Though my greatest joy has come from knowing I have touched another’s life by being helpful, I have to remember that I am still touching people’s lives as long as I am alive. I’m so pleasantly surprised that people want to be around me.

I was pretty grim when I had to stop driving because a slight accident damaged the car beyond repair. My health also gave way and I was briefly hospitalized. It was a big adjustment. But now I am walking, exercising at the gym once a week, taking part in demonstrations, and forgetting about how old I am. I don’t see any other options.

Living a long life seems the obvious goal for most people, and many of them, like Dylan Thomas, raged against the dying of the light. Others—like the transhumanists that Olga featured recently—want to transcend death entirely.

But the vast majority of the readers who responded to our note asking “Is a Long Life Really Worth It?” answered “nope, not really.” Genie is in the “maybe” camp:

Well, like most things, the answer is not a simple yes or no; it depends—on so many factors, some of which we can control (e.g., not smoking) and some we can’t (e.g., our genetic make-up). If you’re in good health physically and have all your faculties and some purposeful work or hobby, or just something you really enjoy doing, then maybe it might be a good idea to live a long life. But those are a lot of ifs.

Another reader, John, looks to human connections:

Health is essential to making survival good, but it also helps to have a caring partner, for companionship and support. I am biased, because at 81, I have my health and a good wife. I’d like to live past 100 if these conditions remain. But if I become disabled, chronically ill, or alone, life is unlikely to be worth it.

Rita has a bleaker outlook:

Looking at my genetics, I’m starting to think I may live a long time. I’m not yet 70, but I can probably expect to go until 95 at least.

This doesn’t fill me with joy. Who’s going to look after me when my eyesight starts to crap out and I get weaker? Where’s the money going to come from to continue to pay my bills? These are not minor questions. Their answers, as far as I can see, are “nobody” and “nowhere.”

And anyway, it’s not as if I can look forward to hiking in the desert or exploring foreign cities in my extreme old age. Nor will many of us be directing films or conducting research in our nineties. What most of us can anticipate is day after day staring at a TV set, wondering if anyone is coming for a visit.

That’s the question that reader John Harris has been asking himself lately. He’s not alone: In 1862, one of The Atlantic’s founders, Ralph Waldo Emerson, wondered the same thing about aging. Acknowledging that “the creed of the street is, Old Age is not disgraceful, but immensely disadvantageous,” Emerson set out to explain the upsides of senescence. A common theme is the sense of serenity that comes with age and experience:

Youth suffers not only from ungratified desires, but from powers untried, and from a picture in his mind of a career which has, as yet, no outward reality. He is tormented with the want of correspondence between things and thoughts. … Every faculty new to each man thus goads him and drives him out into doleful deserts, until it finds proper vent. … One by one, day after day, he learns to coin his wishes into facts. He has his calling, homestead, social connection, and personal power, and thus, at the end of fifty years, his soul is appeased by seeing some sort of correspondence between his wish and his possession. This makes the value of age, the satisfaction it slowly offers to every craving. He is serene who does not feel himself pinched and wronged, but whose condition, in particular and in general, allows the utterance of his mind.

By 1928, advances in medicine had made it more possible to take a long lifespan for granted. In an Atlantic article titled “The Secret of Longevity” (unavailable online), Cary T. Grayson noted that “probably at no other time in the history of the human race has so much attention been paid to the problem of prolonging the span of life.” He offered a word of warning:

Any programme which has for its object the prolongation of life must also have, accompanying this increased span of life, the ability of the individual to engage actively and with some degree of effectiveness in the affairs of life. Merely to live offers little to the individual if he has lost the ability to think, to grieve, or to hope. There is perhaps no more depressing picture than that of the person who remains on the stage after his act is over.

On the other hand, as Cullen Murphy contended in our January 1993 issue, an eternity spent with no decrease in faculties wouldn’t necessarily be desirable either:

There are a lot of characters in literature who have been endowed with immortality and who do manage to keep their youth. Unfortunately, in many cases nobody else does. Spouses and friends grow old and die. Societies change utterly. The immortals, their only constant companion a pervading loneliness, go on and on. This is the pathetic core of legends like those of the Flying Dutchman and the Wandering Jew. In Natalie Babbitt’s Tuck Everlasting, a fine and haunting novel for children, the Tuck family has inadvertently achieved immortality by drinking the waters of a magic spring. As the years pass, they are burdened emotionally by an unbridgeable remoteness from a world they are in but not of.

Since antiquity, Murphy wrote, literature has had a fairly united stance on immortality: “Tamper with the rhythms of nature and something inevitably goes wrong.” After all, people die to make room for more people, and pushing lifespans beyond their ordinary limits risks straining resources as well as reshaping families.

Charles C. Mann examined some of those potential consequences in his May 2005 Atlantic piece “The Coming Death Shortage,” predicting a social order increasingly stratified between “the very old and very rich on top … a mass of the ordinary old … and the diminishingly influential young.” Presciently, a few years before the collapse of the real-estate bubble that wiped out millions of Americans’ retirement savings, Mann outlined the effects of an increased proportion of older people in the workforce.


Is the internet helpful or hurtful to human creativity? I posed that question to the reader discussion group known as TAD, and the consensus seems to be: It’s both. It’s complicated. And naturally, it depends a lot on what form of creativity you’re talking about. Here’s how one reader sums it up:

Because of the Internet I write more and receive feedback from people I know (on Facebook) and online strangers (on TAD and other platforms that use Disqus). I use it as a jumping-off place and resource for planning lessons for my high-school students in science.

However, I don’t practice music as often as I used to.

On a similar note, another reader confesses, “I draw less because I’m always on TAD”:

As a sketch artist, I appreciate my ability to Google things I want to draw for a reference point, but that doesn’t make me more creative. I already had the image in my head and the ability to draw. I honed my skills drawing people the old-fashioned way, looking at pictures in books or live subjects and practicing till my fingers were going to fall off.

In my opinion, the internet also encourages people to copy the work of others that goes “viral” rather than creating something truly original. The fact that you can monetize that viral quality also makes it more likely that people will try to copy rather than create.

That’s the same reason a third reader worries that “the internet has become stifling for creativity”:

Maybe I am not looking in the right place, but most platforms seem to be more about reblogging/retweeting/reposting other people’s creations. Then there is the issue of having work stolen and credits removed.

As another reader notes, “This is the central conflict of fan fiction”:

It’s obviously creative. On the other hand, it is all based on blatant copying of another writer’s work. How much is this a huge expansion of a creative outlet, and how much is this actually people choosing to limit their own creativity by colonizing somebody else’s world rather than creating a new one?

The fanfic debate is fascinating, and more readers expand on it here.

For my part, I tend to think the internet has encouraged and elevated some amazing new forms of creativity based on reaction and re-creation, collaboration and synthesis.

Those creative forms are a big part of my job too: When I go to work, I’m either distilling my colleagues’ articles for our Daily newsletter or piecing together reader emails for Notes, and those curatorial tasks have been exciting and challenging in ways that I never expected. But I’ve also missed writing fiction and poetry and literary criticism, and I worry sometimes that I’m letting those creative muscles atrophy. If you’re a fanfic reader or writer (or videographer, or meme-creator, or content-aggregator) and would like to share your experience, please let us know: hello@theatlantic.com.

This next reader speaks up for creativity as “the product of synthesis”:

It’s not so much a quest for pure “originality,” as it is a quest for original perspectives or original articulations. I’d say that my creativity has been fueled by letting myself fall into occasional rabbit holes. Whether that’s plodding through artists I don’t know well on Spotify or following hyperlinks in a Wiki piece until I have forgotten about what it was that I initially wondered, that access to knowledge in a semi-random form triggers the old noggin like little else.

On the other hand: So much knowledge! So many rabbit holes! Another reader, Jim, finds the sheer abundance paralyzing.

What the internet does to the mind is something of an eternal question. Here at The Atlantic, in fact, we pondered that question before the internet even existed. Back in 1945, in his prophetic essay “As We May Think,” Vannevar Bush outlined how technology that mimics human logic and memory could transform “the ways in which man produces, stores, and consults the record of the race”:

Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

Bush didn’t think machines could ever replace human creativity, but he did hope they could make the process of having ideas more efficient. “Whenever logical processes of thought are employed,” he wrote, “there is opportunity for the machine.”

Fast-forward six decades, and search engines had claimed that opportunity, acting as a stand-in for memory and even for association. In his October 2006 piece “Artificial Intelligentsia,” James Fallows confronted the new reality:

If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains?