These days you can listen to music everywhere all day long—while you’re reading, shopping for groceries, feeding the cat, or working in an office cubicle or open space, as 70 percent of us do today. Wherever we are during the 14-hour day that the average American spends on digital devices, we can get every piece of the world-historical repertoire, from Bach to rap, streamed to us through headphones and earbuds.

That means, of course, that we’re listening alone, too often distracted, turning something wonderful into white noise. With every song in history on our personal jukeboxes, our listening behavior turns a little jumpy. Almost half of us skip a song before it finishes, a quarter in the first five seconds. The universe of options can also lead us into a rut. As another study suggests, we’re becoming less and less likely to seek out new genres and artists.


Audio Family Tree

In the last 150 years, music has changed radically and often. This timeline marks how we've recorded it, what hardware we've developed to hold it, and what's been happening in the world while we listen. The “Fidelity Index” number represents the bits of audio data per second each medium can deliver to the listener, divided by 100,000.
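The scaling behind that index is simple enough to check yourself. The sketch below computes it for two familiar media; the bit rates are standard figures (CD audio is 44,100 samples per second, 16 bits, 2 channels), not values taken from the timeline itself.

```python
# Illustrative calculation of the article's "Fidelity Index":
# audio bits per second, divided by 100,000.

def fidelity_index(bits_per_second: float) -> float:
    """Data rate a medium delivers, scaled per the article's definition."""
    return bits_per_second / 100_000

cd_bps = 44_100 * 16 * 2   # Red Book CD audio: 1,411,200 bit/s
mp3_bps = 320_000          # a common high-quality MP3 bit rate

print(f"CD:  {fidelity_index(cd_bps):.1f}")   # 14.1
print(f"MP3: {fidelity_index(mp3_bps):.1f}")  # 3.2
```

By this yardstick, an uncompressed CD carries more than four times the audio data of even a high-bitrate MP3.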


The resurgent interest in vinyl records, the late lamented LP, is just one of the signs that something important got lost in the exchange for choice and mobility. All our most robustly joyous and most intimately poignant responses—breaking into dance or bursting into tears—happen when we let the music in, not when we use it to shut the world out. To understand that, and to find what was lost, requires going to the places where musical meaning is produced—in the recording studio and in the human brain—and to the heart of the audience, where it resides.

THE FALL & RISE OF SOCIAL LISTENING

Where Did All the People Go?

Forget the tinfoil cylinder or the symphony: Music was born when we stopped tearing each other apart. Seven studies converge on the finding that music is a product of our social evolution—our ability to make bonds of trust, friendship, and love. As the kill-or-be-killed instinct lost its primacy, we started gathering—in twos and threes at first, then in families, friendships, and tribes, then in larger societies—until, at some moment long lost to history, the human race sang its first song. From that moment, music was a social event, live and in person, even as it migrated over time from the campfire to the church to the concert hall.

According to Katherine Dacey, a professor at the Berklee College of Music with expertise in American history and musicology, the first time sound technology could get close to that real-world, communal experience was with the introduction of the vinyl LP, nearly 80 years after the first phonograph.

“This was one of the first times you had a mainstream, consumer audio format that reasonably approximated the experience of being in the concert hall,” she says. We naturally embraced the experience of hearing music at home, especially as quality increased.

The divide between music and togetherness came in the 20th century. With car radios in the 1930s and transistor radios in the 1950s, listeners gained more musical options, and listening required less commitment. At the same time, advances in sound equipment kept raising the standard of quality. The ’50s brought “hi-fi” equipment, inspiring the word “audiophile,” and sales soared: U.S. spending on audio equipment was fifty percent higher in 1960 than in 2000.

That was roughly the digital dawn, when we no longer had to seek out and sit down for music but could start carrying it along with us.

These days, thanks to streaming services, “we’re kind of awash in music,” says Dacey, and that has resulted in what she calls a “less invested…piecemeal” approach to listening. Nielsen’s Music 360 report for 2015 found that digital singles were selling at four times the rate of albums—the musical equivalent of, “Yes, we’re dating, but it’s not serious.”

That infidelity may explain the wish to settle down with music as we once did, in the company of others. In 2015, live events accounted for more than half of our music spending, online music communities have exploded, and the “vinyl revival” has lifted sales of physical albums to their highest level since 1988, making the turntable a gathering place once more. “When people come together and listen together,” says Dacey, it creates “some sense almost of ceremony,” a return to music’s deepest social roots.

A study conducted for Sonos, titled Music Makes it Home, found that households where music is played out loud spent, on average, more than three additional hours together each week than those who listened alone or not at all. Those inclined to listen out loud also turned out to be almost twice as likely to have invited people over in the last week. Half of all participants said they played music out loud precisely because it would mean spending more time with others.

Dacey is confident that we can circle back to listening together and enjoying the act of sharing music, because it’s human nature. In fact, it turns out, we’re biologically hardwired for it.


A Musical Map Of The Brain

Music activates every part of the brain, but different types provoke stronger reactions from some areas than others. See how three musical “moods” might affect both your brain and body.

Happy / Upbeat

Sad / Slow

Angry / Aggressive / Fast


Mesolimbic Pathway

Dopamine, the brain’s “happy chemical,” is released through this pathway when our brain picks out something pleasing in a piece of music.

Motor Cortex

With the right auditory elements, a piece of music can sync our brains to our bodies, prompting us to actually move.


Broca’s Area

This area is in charge of processing language, including making sense and emotional meaning of musical lyrics.

Amygdala

The center of the limbic system, which processes emotional stimuli, the amygdala will use a piece of music to signal that we should feel exhilarated, heartbroken, or anything in between.

Auditory Cortex

This is music’s first stop in the brain, processing basic components like melody, pitch, and tempo—all of which go on to affect our bodily rhythms.

Prefrontal Cortex

Linked to personality expression and short-term memory, this part of the brain can help determine whether or not we like a song we hear.

THE NEUROPHYSIOLOGY OF GOOSE BUMPS

…And How Does That Make You Feel?

Music is one of the few processes that activates every part of our brains, bringing our engagement with it down to the molecular level. Dr. Daniel Levitin, author of This is Your Brain on Music and a professor of cognitive psychology and music at McGill University, describes his discipline as “trying to understand where goose bumps come from.”

The short answer is that those “chills”—among other reactions music can inspire—result from a series of neurons firing in hundredths of a second. The long answer is not entirely clear, but several studies link music to the brain’s mesolimbic pathway and the release of dopamine, which regulates a powerful combination of motivation, addiction, and reward.

Listening is also associated with the release of oxytocin, a hormone related to the sense of trust, comfort, and belonging. On the flip side, too much time spent listening to music alone has been linked to a higher risk of depression.

“This idea of somebody sitting alone and listening by themselves is really foreign to our history, to our biology,” says Dr. Levitin, which is why athletic events and political rallies start with music: “It’s not just pleasant. It’s actually binding together, in common chemical experience, the people who are there.”

Even though music brings us together chemically, our human responses to music are still far from universal. “One man’s Mozart might be another man’s Madonna,” as Dr. Levitin puts it, meaning that one person’s heart-stirring anthem might strike someone else as cloying manipulation or just noise.

That difference is in part the result of memory, which records the music’s tempo, pitch, and content for future playback (like those dreaded “earworms,” carriers of the annoying jingles that arise unwanted and seemingly at random). Those memories create a filter that can retrieve the associations and experiences that the song evoked before and lead to—wait for it—goose bumps.

Technology can do that too.


From Studio To Speaker

There are numerous steps between pressing the record button and playing a piece of music back, and all of them can affect a listener’s experience. Here, choose your path through the recording process, and see what comes out.

Choose a Recording Method

Studio Mic

Phone Mic

Consumer Mic

Choose a Medium

MP3

Compact Disc

Compact Cassette

33 RPM LP

Choose a Speaker

HIFI Speaker

Bluetooth Speaker

Earbuds


SECRETS OF THE STUDIO

The Technology of Listening

“Sound is a special form of touch,” says Susan Rogers, a professor of music cognition and Dacey’s colleague at Berklee. “The actual acoustic waveform itself is contributing to the feeling of listening to music, and engineers know that.”

Rogers herself was a recording engineer, so she knows about sound as both an aesthetic and a physical force. “When we’re making a record,” she says, “we’re making a musical object and a sonic object at the same time,” both of which have concrete effects on listeners. A bass drop or drum line can be aesthetically pleasing, she says, when it stimulates the beta rhythm—the electrical frequency of wakeful alertness in our brains—prompting, for example, the impulse to dance. “When [beta] hears drums, it hears something it likes,” she says. “It prepares us for action by synchronizing our movements to a pulse.”

At the same time, engineers can push us and our music in opposite directions. During the so-called “Loudness Wars” of the vinyl and CD eras, producers trying to get radio DJs to play their records over a competitor’s had engineers crank the volume while compressing songs’ dynamic range, obliterating nuance until a biting guitar riff could barely be distinguished from a cymbal crash. Volume kept us listening, but we heard less.
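For readers curious what that studio move actually does to a signal, here is a minimal sketch of downward dynamic-range compression. The threshold and ratio values are illustrative, not taken from any particular record or compressor.

```python
# A minimal sketch of downward dynamic-range compression:
# levels above a threshold are scaled down, so quiet and loud
# passages end up closer together in volume.

def compress(sample: float, threshold: float = 0.5, ratio: float = 4.0) -> float:
    """Compress one audio sample (normalized to the range -1.0..1.0).

    Below the threshold the signal passes through unchanged; above
    it, each unit of excess level is reduced to 1/ratio of itself.
    """
    level = abs(sample)
    if level <= threshold:
        return sample
    compressed = threshold + (level - threshold) / ratio
    return compressed if sample >= 0 else -compressed

print(compress(0.3))           # quiet passage: unchanged, 0.3
print(round(compress(0.9), 3)) # loud peak: pulled down to 0.6
```

After compression, the loudest and quietest moments sit only a few tenths apart instead of spanning the full range—the engineer can then raise the overall volume, which is exactly the trade-off the “Loudness Wars” made: louder on average, but with far less contrast.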

“Compression serves to manipulate the listener’s attention,” explains Rogers. It’s also a required feature of our otherwise “smart” digital devices and their most favored outputs: Computer speakers first, followed by headphones and earbuds.

These devices “actually allow us to listen longer but not deeper,” says Rogers—not only because they can cancel out those beta-friendly drum riffs but also because they introduce a level of distortion into the experience of listening in isolation.

Hilmar Lehnert, the audio engineering leader at Sonos, thinks that’s exactly the opposite of what good music technology should do. “The art of reproduction,” he says, “is to accommodate real-world constraints of real-world people—and still get as close as possible to the original experience.”

That “original” experience, he says, should feel not passive but participatory, and the job of the engineer is to enable “the conversation between the artist and the listener so that the music actually touches us.”

COMING FULL CIRCLE

Around the Virtual Campfire

“If you invest time in listening to a musical piece,” says Dr. Levitin, “and if you feel the same way at the end as you did at the beginning, I would say that that music has failed you.” In the world of headphones and earbuds, that failure is endemic.

Dacey recalls another time and place, which she describes as “four or five friends around a turntable and listening to a record that one person had just bought. We miss those kinds of rituals.” Like our ancestors gathered around a fire at night, we feel music best when we hear it out loud and together. A complex alchemy of neurochemistry, engineering, and human evolution, the magic that music conjures has no single, fixed location, but when conditions are right, as Rogers puts it, “you close your eyes and you’re right there.”

Tapping into that shared wellspring of sensation, memory, and emotion, we hear things we cannot quite explain and perhaps don’t want to: a tug at the heartstrings, an intense bloom of feeling seemingly out of nowhere. At worst, it can be a world of pain. At best, it can be everything we’re looking for.