Notes

First Drafts, Conversations, Stories in Progress

Question Your Answers

Since 1857, The Atlantic has been challenging established answers with tough questions. In this video, actor Michael K. Williams—best known as Omar Little from The Wire—wrestles with a question of his own: Is he being typecast?

What are you asking yourself about the world and its conventional wisdom? We want to hear your questions—and your thoughts on where to start finding the answers: hello@theatlantic.com. Each week, we’ll update this thread with a new question and your responses.


The Case for Solitude

Zak Bickel / The Atlantic

Henry David Thoreau is something of a poster child for solitude. In his essay “Walking,” published just after his death in our June 1862 issue, Thoreau made the case “for absolute freedom and wildness … to regard man as an inhabitant, or a part and parcel of Nature, rather than a member of society”:

We should go forth on the shortest walk, perchance, in the spirit of undying adventure, never to return, prepared to send back our embalmed hearts only as relics to our desolate kingdoms. If you are ready to leave father and mother, and brother and sister, and wife and child and friends, and never see them again—if you have paid your debts, and made your will, and settled all your affairs, and are a free man—then you are ready for a walk.

Thoreau himself was “a genuine American weirdo,” as Jedediah Purdy recently put it, and solitude suited him: His relentless individualism irritated his friends, including Atlantic co-founder Ralph Waldo Emerson, who described Thoreau’s habit of contradicting every point in pursuit of his own ideals as “a little chilling to the social affections.” Emerson may have had Thoreau in mind when, in our December 1857 issue, he mused that “many fine geniuses” felt the need to separate themselves from the world, to keep it from intruding on their thoughts. Yet he questioned whether such withdrawal was good for a person, not to mention for society as a whole:

Thoreau in his second and final photographic sitting, August 1861 (Wikimedia)

This banishment to the rocks and echoes no metaphysics can make right or tolerable. This result is so against nature, such a half-view, that it must be corrected by a common sense and experience. “A man is born by the side of his father, and there he remains.” A man must be clothed with society, or we shall feel a certain bareness and poverty, as of a displaced and unfurnished member. He is to be dressed in arts and institutions, as well as body-garments. Now and then a man exquisitely made can live alone, and must; but coop up most men, and you undo them. …

When a young barrister said to the late Mr. Mason, “I keep my chamber to read law,”—“Read law!” replied the veteran, “’tis in the courtroom you must read law.” Nor is the rule otherwise for literature. If you would learn to write, ’tis in the street you must learn it. Both for the vehicle and for the aims of fine arts, you must frequent the public square. … Society cannot do without cultivated men.

Emerson concluded that the key to effective, creative thought was to maintain a balance between solitary reflection and social interaction: “The conditions are met, if we keep our independence, yet do not lose our sympathy.”

Four decades later, in our November 1901 issue, Paul Elmer More identified a radical sympathy in the work of Nathaniel Hawthorne, which stemmed, he argued, from Hawthorne’s own “imperial loneliness of soul”:

Hester Prynne, the lonely protagonist of Hawthorne’s The Scarlet Letter (Wikimedia)

His words have at last expressed what has long slumbered in human consciousness. … Not with impunity had the human race for ages dwelt on the eternal welfare of the soul; for from such meditation the sense of personal importance had become exacerbated to an extraordinary degree. … And when the alluring faith attendant on this form of introspection paled, as it did during the so-called transcendental movement into which Hawthorne was born, there resulted necessarily a feeling of anguish and bereavement more tragic than any previous moral stage through which the world had passed. The loneliness of the individual, which had been vaguely felt and lamented by poets and philosophers of the past, took on a poignancy altogether unexampled. It needed but an artist with the vision of Hawthorne to represent this feeling as the one tragic calamity of mortal life, as the great primeval curse of sin … the universal protest of the human heart.

Fast-forward a century, and what More described as “the solitude that invests the modern world” had only deepened—while “the sense of personal importance” gained new narcissistic vehicles in the form of social-media tools that let us “connect” online while keeping our real, messy selves as private as we choose. Which is not necessarily a bad thing: In some ways, the internet looks like the perfect way to achieve Emerson’s ideal balance between independent thought and social engagement.

In our May 2012 issue, however, Stephen Marche wondered whether the rise of social media was making us lonely:

A considerable part of Facebook’s appeal stems from its miraculous fusion of distance with intimacy, or the illusion of distance with the illusion of intimacy. Our online communities become engines of self-image, and self-image becomes the engine of community. The real danger with Facebook is not that it allows us to isolate ourselves, but that by mixing our appetite for isolation with our vanity, it threatens to alter the very nature of solitude.

The new isolation is not of the kind that Americans once idealized, the lonesomeness of the proudly nonconformist, independent-minded, solitary stoic, or that of the astronaut who blasts into new worlds. Facebook’s isolation is a grind. What’s truly staggering about Facebook usage is not its volume—750 million photographs uploaded over a single weekend—but the constancy of the performance it demands. More than half its users—and one of every 13 people on Earth is a Facebook user—log on every day. Among 18-to-34-year-olds, nearly half check Facebook minutes after waking up, and 28 percent do so before getting out of bed. The relentlessness is what is so new, so potentially transformative. Facebook never takes a break. We never take a break.

Last year, Julie Beck wrote a popular piece centered on the question, “When Are You Really an Adult?” She went beyond the biological and legal answers to delve into the more subjective realms of culture and personal experience. The many markers of adulthood were then illustrated in the variety of stories we collected from readers—clustered around commonplace themes of financial independence, parenthood, and divorce, but also around less common experiences such as losing a parent at a young age, rape, and dodging the wrath of a dictatorship.

This week, we posed a related question to readers: “When does childhood end?”—and, more interestingly, “When did you become an adult in your parents’ eyes?,” a version that adds a layer of subjectivity to an already subjective topic. Here’s a response from Terri:

When I was 11, my mother died. My father had become blind a few years before, from a rare form of glaucoma. He had no choice but to allow me to do things that are normally done by an adult, such as budgeting and paying bills, cooking and cleaning, and other various things. He had to talk to me in an honest way, and make me understand things and rely on my judgement in lots of matters. Other adults did too. I was never a child again after my mother died and my dad knew it.

Another reader’s mother also died at a pretty young age:

I became an adult when my mother died and my dad started dating four months later. I was 20 years old. Once he had a new woman in his life (whom he is still married to now) and essentially a new family, I was out. We had really started to be at odds the year before, when I had started to do things my way instead of his way. He had pretty much taken for granted that I could make it in this world without his advice or anything.

For this next reader, it was boarding school:

I’m not sure the end of childhood is the sort of thing that one can pinpoint; seems to me there were rather a number of distinct rites of passage. The first was when I went to boarding school, around age 10. When my parents dropped me off that first day, I knew I was on my own. Calling home to say they should come get you was not an option; my parents made this pretty clear, but it was not necessary. I knew.

Another reader had to go abroad to step out of childhood:

When I was an exchange student, my father came down to visit. There I was, living independently in a foreign country at 17. I could speak the language fluently and had to navigate us for him.

George also left the country to become an adult.

        Zak Bickel / The Atlantic

        Here’s how an Atlantic author answered the question of when childhood ends, back in September 1858:

        Full of anticipations, full of simple, sweet delights, are these [childhood] years, the most valuable of [a] lifetime. Then wisdom and religion are intuitive. But the child hastens to leave its beautiful time and state, and watches its own growth with impatient eye. Soon he will seek to return. The expectation of the future has been disappointed. Manhood is not that free, powerful, and commanding state the imagination had delineated. And the world, too, disappoints his hope. He finds there things which none of his teachers ever hinted to him. He beholds a universal system of compromise and conformity, and in a fatal day he learns to compromise and conform.

        But it wasn’t until the 20th century that scientists began to seriously study child development. In our July 1961 issue, Peter B. Neubauer heralded “The Century of the Child”:

        Gone is the sentimental view that childhood is an era of innocence and the belief that an innate process of development continuously unfolds along more or less immutable lines. Freud suggested that, from birth on, the child’s development proceeds in a succession of well-defined stages, each with its own distinctive psychic organization, and that at each stage environmental factors can foster health and achievement or bring about lasting retardation and pathology. …

        Freudian psychology does not, as some people apparently imagine, provide a set of ready-made prescriptions for the rearing of children. … The complexity of the interactions between mother and child cannot be reduced to rigid formulas. Love and understanding cannot be prescribed, and if they are not genuinely manifested, the most enlightened efforts to do what is best for the child may not be effective.

        According to this view, children weren’t miniature adults, but they were preparing for adulthood. Growing up was a process that had to be managed by adults, which made the boundaries of childhood both more important and more nebulous.

        A few years later, in our October 1968 issue, Richard Poirier described the backlash to a wave of campus protests as “The War Against the Young.” He implored older adults to take young people’s ideas seriously:

        It is perhaps already irrelevant, for example, to discuss the so-called student revolt as if it were an expression of “youth.” The revolt might more properly be taken as a repudiation by the young of what adults call “youth.” It may be an attempt to cast aside the strangely exploitative and at once cloying, the protective and impotizing concept of “youth” which society foists on people who often want to consider themselves adults.

        What’s more, Poirier argued, idealism shouldn’t just be the province of the young:

        If young people are freeing themselves from a repressive myth of youth only to be absorbed into a repressive myth of adulthood, then youth in its best and truest form, of rebellion and hope, will have been lost to us, and we will have exhausted the best of our natural resources.

        But how much redefinition could adulthood handle? In our February 1975 issue, Midge Decter addressed an anxious letter to that generation of student revolutionaries, who—though “no longer entitled to be called children”—had not yet fulfilled the necessary rites of passage for being “fully accredited adults”:

        Why have you, the children, found it so hard to take your rightful place in the world? Just that. Why have your parents’ hopes for you come to seem so impossible of attainment?

        The initial wave of reader response to our question “Is a long life really worth it?” was an overwhelming “meh, not so much.” But since then, many sexagenarians, septuagenarians, octogenarians, and nonagenarians have emailed more enthusiastic outlooks on old age. Here’s Jim:

        A thought-provoking discussion, but it really misses the key point. I turn 65 in a couple of months, but I don’t expect to “retire” at 65—or ever. I’m fit and healthy and having the greatest fun of my life at the head of a fast-growing business. In a quarter century, if still alive, I might have to slow down a bit, but there will still be something useful for me to do.

        The founding pastor of our church has poor hearing and is almost blind, but a few weeks ago he preached a great sermon to celebrate his 100th birthday. He still contributes in other ways as well.

        Not everyone can continue working, but there is a huge need for volunteers in areas that do not require physical agility. Unless totally senile—and that’s something that will never happen to most of us—we all have something to offer.

        Maggie is a quarter century older than Jim but has a very similar view:

        Life isn’t over because I’m no longer “useful.” I’m 90 and have spent the last decade trying to be okay with not always being the helping hand. Though my greatest joy has come from knowing I have touched another’s life by being helpful, I have to remember that I am still touching people’s lives as long as I am alive. I’m so pleasantly surprised that people want to be around me.

        I was pretty grim when I had to stop driving because a slight accident damaged the car beyond repair. My health also gave way and I was briefly hospitalized. It was a big adjustment. But now I am walking, exercising at the gym once a week, taking part in demonstrations, and forgetting about how old I am. I don’t see any other options.

        Living a long life seems the obvious goal for most people, and many of them, like Dylan Thomas, raged against the dying of the light. Others—like the transhumanists that Olga featured recently—want to transcend death entirely.

        But the vast majority of the readers who responded to our note asking “Is a Long Life Really Worth It?” answered “nope, not really.” Genie is in the “maybe” camp:

        Well, like most things, the answer is not a simple yes or no; it depends—on so many factors, some of which we can control (e.g., not smoking) and some we can’t (e.g., our genetic make-up). If you’re in good health physically and have all your faculties and some purposeful work or hobby, or just something you really enjoy doing, then it might be a good idea to live a long life. But those are a lot of ifs.

        Another reader, John, looks to human connections:

        Health is essential to making survival good, but it also helps to have a caring partner, for companionship and support. I am biased, because at 81, I have my health and a good wife. I’d like to live past 100 if these conditions remain. But if I become disabled, chronically ill, or alone, life is unlikely to be worth it.

        Rita has a bleaker outlook:

        Looking at my genetics, I’m starting to think I may live a long time. I’m not yet 70, but I can probably expect to go until 95 at least.

        This doesn’t fill me with joy. Who’s going to look after me when my eyesight starts to crap out and I get weaker? Where’s the money going to come from to continue to pay my bills? These are not minor questions. Their answers, as far as I can see, are “nobody” and “nowhere.”

        And anyway, it’s not as if I can look forward to hiking in the desert or exploring foreign cities in my extreme old age. Nor will many of us be directing films or conducting research in our nineties. What most of us can anticipate is day after day staring at a TV set, wondering if anyone is coming for a visit.

        Is a long life really worth it? That’s the question that reader John Harris has been asking himself lately. He’s not alone: In 1862, one of The Atlantic’s founders, Ralph Waldo Emerson, wondered the same thing about aging. Acknowledging that “the creed of the street is, Old Age is not disgraceful, but immensely disadvantageous,” Emerson set out to explain the upsides of senescence. A common theme is the sense of serenity that comes with age and experience:

        Youth suffers not only from ungratified desires, but from powers untried, and from a picture in his mind of a career which has, as yet, no outward reality. He is tormented with the want of correspondence between things and thoughts. … Every faculty new to each man thus goads him and drives him out into doleful deserts, until it finds proper vent. … One by one, day after day, he learns to coin his wishes into facts. He has his calling, homestead, social connection, and personal power, and thus, at the end of fifty years, his soul is appeased by seeing some sort of correspondence between his wish and his possession. This makes the value of age, the satisfaction it slowly offers to every craving. He is serene who does not feel himself pinched and wronged, but whose condition, in particular and in general, allows the utterance of his mind.

        By 1928, advances in medicine had made it more possible to take a long lifespan for granted. In an Atlantic article titled “The Secret of Longevity” (unavailable online), Cary T. Grayson noted that “probably at no other time in the history of the human race has so much attention been paid to the problem of prolonging the span of life.” He offered a word of warning:

        Any programme which has for its object the prolongation of life must also have, accompanying this increased span of life, the ability of the individual to engage actively and with some degree of effectiveness in the affairs of life. Merely to live offers little to the individual if he has lost the ability to think, to grieve, or to hope. There is perhaps no more depressing picture than that of the person who remains on the stage after his act is over.

        On the other hand, as Cullen Murphy contended in our January 1993 issue, an eternity spent with no decrease in faculties wouldn’t necessarily be desirable either:

        There are a lot of characters in literature who have been endowed with immortality and who do manage to keep their youth. Unfortunately, in many cases nobody else does. Spouses and friends grow old and die. Societies change utterly. The immortals, their only constant companion a pervading loneliness, go on and on. This is the pathetic core of legends like those of the Flying Dutchman and the Wandering Jew. In Natalie Babbitt’s Tuck Everlasting, a fine and haunting novel for children, the Tuck family has inadvertently achieved immortality by drinking the waters of a magic spring. As the years pass, they are burdened emotionally by an unbridgeable remoteness from a world they are in but not of.

        Since antiquity, Murphy wrote, literature has had a fairly united stance on immortality: “Tamper with the rhythms of nature and something inevitably goes wrong.” After all, people die to make room for more people, and pushing lifespans beyond their ordinary limits risks straining resources as well as reshaping families.

        Charles C. Mann examined some of those potential consequences in his May 2005 Atlantic piece “The Coming Death Shortage,” predicting a social order increasingly stratified between “the very old and very rich on top … a mass of the ordinary old … and the diminishingly influential young.” Presciently, a few years before the collapse of the real-estate bubble that wiped out millions of Americans’ retirement savings, Mann outlined the effects of an increased proportion of older people in the workforce.

        Women work together at an internet cafe in Kabul, Afghanistan, on March 8, 2012. Mohammad Ismail / Reuters

        Is the internet helpful or hurtful to human creativity? I posed that question to the reader discussion group known as TAD, and the consensus seems to be: It’s both. It’s complicated. And naturally, it depends a lot on what form of creativity you’re talking about. Here’s how one reader sums it up:

        Because of the Internet I write more and receive feedback from people I know (on Facebook) and online strangers (on TAD and other platforms that use Disqus). I use it as a jumping-off place and resource for planning lessons for my high-school students in science.

        However, I don’t practice music as often as I used to.

        On a similar note, another reader confesses, “I draw less because I’m always on TAD”:

        As a sketch artist, I appreciate my ability to Google things I want to draw for a reference point, but that doesn’t make me more creative. I already had the image in my head and the ability to draw. I honed my skills drawing people the old-fashioned way, looking at pictures in books or live subjects and practicing till my fingers were going to fall off.

        In my opinion, the internet also encourages people to copy the work of others that goes “viral” rather than create something truly original. The fact that you can monetize that viral quality also makes it more likely that people will try to copy rather than create.

        That’s the same reason a third reader worries that “the internet has become stifling for creativity”:

        Maybe I am not looking in the right place, but most platforms seem to be more about reblogging/retweeting/reposting other people’s creations. Then there is the issue of having work stolen and credits removed.

        As another reader notes, “This is the central conflict of fan fiction”:

        It’s obviously creative. On the other hand, it is all based on blatant copying of another writer’s work. How much is this a huge expansion of a creative outlet, and how much is this actually people choosing to limit their own creativity by colonizing somebody else’s world rather than creating a new one?

        The fanfic debate is fascinating, and more readers expand on it here.

        For my part, I tend to think the internet has encouraged and elevated some amazing new forms of creativity based on reaction and re-creation, collaboration and synthesis.

        Those creative forms are a big part of my job too: When I go to work, I’m either distilling my colleagues’ articles for our Daily newsletter or piecing together reader emails for Notes, and those curatorial tasks have been exciting and challenging in ways that I never expected. But I’ve also missed writing fiction and poetry and literary criticism, and I worry sometimes that I’m letting those creative muscles atrophy. If you’re a fanfic reader or writer (or videographer, or meme-creator, or content-aggregator) and would like to share your experience, please let us know: hello@theatlantic.com.

        This next reader speaks up for creativity as “the product of synthesis”:

        It’s not so much a quest for pure “originality,” as it is a quest for original perspectives or original articulations. I’d say that my creativity has been fueled by letting myself fall into occasional rabbit holes. Whether that’s plodding through artists I don’t know well on Spotify or following hyperlinks in a Wiki piece until I have forgotten about what it was that I initially wondered, that access to knowledge in a semi-random form triggers the old noggin like little else.

        On the other hand: So much knowledge! So many rabbit holes! Another reader, Jim, finds it all paralyzing.

        What the internet does to the mind is something of an eternal question. Here at The Atlantic, in fact, we pondered that question before the internet even existed. Back in 1945, in his prophetic essay “As We May Think,” Vannevar Bush outlined how technology that mimics human logic and memory could transform “the ways in which man produces, stores, and consults the record of the race”:

        Presumably man’s spirit should be elevated if he can better review his shady past and analyze more completely and objectively his present problems. He has built a civilization so complex that he needs to mechanize his records more fully if he is to push his experiment to its logical conclusion and not merely become bogged down part way there by overtaxing his limited memory. His excursions may be more enjoyable if he can reacquire the privilege of forgetting the manifold things he does not need to have immediately at hand, with some assurance that he can find them again if they prove important.

        Bush didn’t think machines could ever replace human creativity, but he did hope they could make the process of having ideas more efficient. “Whenever logical processes of thought are employed,” he wrote, “there is opportunity for the machine.”

        Fast-forward six decades, and search engines had claimed that opportunity, acting as a stand-in for memory and even for association. In his October 2006 piece “Artificial Intelligentsia,” James Fallows confronted the new reality:

        If omnipresent retrieval of spot data means there’s less we have to remember, and if categorization systems do some of the first-stage thinking for us, what will happen to our brains?