Tourists visiting La Gomera and El Hierro in the Canary Islands can often hear locals communicating over long distances by whistling—not a tune, but the Spanish language. “Good whistlers can understand all the messages,” says David Díaz Reyes, an independent ethnomusicologist who researches and teaches whistled languages and lives in the islands. “We can say, ‘And now I am making an interview with a Canadian guy.’”
The locals are communicating in Silbo, one of the last vestiges of a much more widespread use of whistled languages. In at least 80 cultures worldwide, people have developed whistled versions of the local language when the circumstances call for it. To linguists, such adaptations are more than just a curiosity: By studying whistled languages, they hope to learn more about how our brains extract meaning from the complex sound patterns of speech. Whistling may even provide a glimpse of one of the most dramatic leaps forward in human evolution: the origin of language itself.
Whistled languages are almost always developed by traditional cultures that live in rugged, mountainous terrain or in dense forest. That’s because whistled speech carries much farther than ordinary speech or shouting, says Julien Meyer, a linguist and bioacoustician at CNRS, the French national research center, who explores the topic of whistled languages in the 2021 Annual Review of Linguistics. Skilled whistlers can reach 120 decibels—louder than a car horn—and their whistles pack most of this power into a frequency range of 1 to 4 kilohertz, which is above the pitch of most ambient noise.
As a result, whistled speech can be understood up to 10 times as far away as ordinary shouting can, Meyer and others have found. That lets people communicate even when they cannot easily get close enough to shout. On La Gomera, for example, a few traditional shepherds still whistle to one another across mountain valleys that could take hours to cross.
Whistled languages work because many of the key elements of speech can be mimicked in a whistle, says Meyer. We distinguish one speech sound, or phoneme, from another by subtle differences in their sound-frequency patterns. A vowel such as a long e, for example, is formed higher in the mouth than a long o, giving it a higher sound. “It’s not pitch, exactly,” says Meyer. Instead, it’s a more complex change in sound quality, or timbre, which is easily conveyed in a whistle.
Consonants, too, can be whistled. A t, for example, is richer in high frequencies than a k, which gives the two sounds a different timbre, and there are also subtle differences that arise from movements of the tongue. Whistlers can capture all of these distinctions by varying the pitch and articulation of their whistle, says Meyer. And the skill can be adapted to any language, even those that have no tradition of whistling. To demonstrate, Meyer whistles English phrases such as “Nice to meet you,” and “Do you understand the whistle?”
Learning to whistle a language you already speak is relatively straightforward. Díaz Reyes’s Spanish-language whistling students spend the first two or three months of the course learning to make a loud whistle with different pitches. “In the fourth or fifth month, they can make some words,” he says. “After eight months, they can speak it properly and understand every message.”
This articulation of speech within a whistle only works for nontonal languages, where the pitch of speech sounds isn’t crucial to the meaning of the word. (English, Spanish, and most other European languages are nontonal.) For tonal languages, in contrast, the meaning of a sound depends on its pitch relative to the rest of the sentence. In Mandarin Chinese, for example, the syllable ma said with a steady high pitch means “mother,” but said with a pitch that dips and rises again, it means “horse.”
In ordinary tonal speech, the vocal cords make the pitch modulations that form the tones while the front of the mouth forms much of the vowel and consonant sounds. But not so for whistling, which doesn’t use the vocal cords. Whistlers of tonal languages thus face a dilemma: Should they whistle the tones, or the vowels and consonants? “In whistling, you can produce only one of the two. They have to choose,” says Meyer.
In practice, almost every whistled tonal language chooses to use pitch to encode the tones. For languages with a complex set of tones—such as Chinantec, a language in southern Mexico with seven tones (high, mid, low, falling high-low, falling mid-low, rising low-mid, and rising mid-high), or the equally complex Hmong language—pitch still gives enough information to carry meaning. But for simpler tonal languages—such as Gavião, an Amazonian language Meyer has studied, which has just two tones, low and high—whistlers must confine their conversations to a few stereotyped sentences that are easily recognized.
Even for nontonal languages, the whistled version of speech doesn’t contain as much frequency information as ordinary spoken language, but it does carry enough to recognize words. When researchers tested people’s comprehension of whistled Turkish, they found that experienced listeners correctly identified isolated words about 70 percent of the time; for words in common whistled sentences, the context helped to resolve ambiguities and the accuracy rose to approximately 80 to 90 percent.
In essence, people listening to whistled speech are piecing together its meaning from fragments of the full speech signal, just as all of us do when listening to someone at a crowded cocktail party. “Regular speech is so complex—there is so much redundant information,” says Fanny Meunier, a psycholinguist at CNRS who studies speech in noisy environments. “If we have noise, then we can choose different types of information that are present in different places in the signal.”
Linguists know surprisingly few details about how the brain does this. “We still don’t know what parts of the signal are useful to understand the message,” Meunier says. Most researchers who study this topic do so by deliberately degrading normal speech to see when listeners can no longer understand. But Meunier feels that whistling offers a less artificial approach. “With whistling, it was more like, let’s see what people did naturally to simplify the signal. What did they keep?” she says. The information crucial for understanding speech, she assumes, must lie somewhere within that whistled signal.
Meunier and her colleagues are just beginning this work, so she has few results to share yet. So far, they have shown that even people who have never heard whistled speech before can recognize both vowels and consonants with an accuracy well better than chance. Moreover, trained musicians do better than nonmusicians at recognizing consonants, with flute players better than pianists or violinists, Anaïs Tran Ngoc, a linguistics graduate student at Côte d’Azur University, in France, has found. Tran Ngoc, herself a musician, speculates that this is because flutists are trained to use sounds like t and k to help articulate notes crisply. “So there’s this link with language that might not be present for other instruments,” she says.
Whistled languages excite linguists for another reason, too: They share many features with what linguists think the first protolanguages must have been like, when speech and language first began to emerge during the dawn of modern humans. One of the big challenges of language is the need to control the vocal cords to make the full range of speech sounds. None of our closest relatives, the great apes, has developed such control—but whistling may be an easier first step. Indeed, a few orangutans in zoos have been observed to imitate zoo employees whistling as they work. When scientists tested one ape under controlled conditions, the animal was indeed able to mimic sequences of several whistles.
The context of whistled-language use also matches the likely context of protolanguage. Today’s whistled languages are used for long-distance communication, often during hunting, Meyer notes. And the formulaic sentences used by whistlers of simple tonal languages are a close parallel to the way our ancestors may have used protolanguage to communicate a few simple ideas to their hunting partners—“Go that way,” for example, or “The antelope is over here.”
That doesn’t mean that modern whistled speech is a vestigial remnant of those protolanguages, Meyer cautions. If whistling preceded voiced speech, those earliest whistles wouldn’t have needed to encode sounds produced by the vocal cords. But today’s whistled languages do, which means they arose later, as add-ons to conventional languages, not forerunners of them, Meyer says.
Despite their interest to both linguists and casual observers, whistled languages are disappearing rapidly all over the world, and some—such as the whistled form of the Tepehua language in Mexico—have already vanished. Modernization is largely to blame, says Meyer, who points to roads as the biggest factor. “That’s why you still find whistled speech only in places that are very, very remote, that have had less contact with modernity, less access to roads,” he says.
Among the Gavião of Brazil, for example, Meyer has observed that encroaching deforestation has largely eliminated whistling among those living close to the frontier, because they no longer hunt for subsistence. But in an undisturbed village near the center of their traditional territory, whistling still thrives.
Fortunately, there are a few glimmers of hope. UNESCO, the UN cultural organization, has designated two whistled languages—Silbo in the Canary Islands, and a whistled Turkish used among mountain shepherds—as elements of the world’s intangible cultural heritage. Such attention can lead to conservation efforts. In the Canary Islands, for example, a strong preservation movement has sprung up, and Silbo is now taught in schools and demonstrated at tourist hotels. “If people don’t make that effort, probably Silbo would have vanished,” says Díaz Reyes. There, at least, the future of whistled language looks bright.
This post appears courtesy of Knowable Magazine.