In the world of Dr. Seuss’s The Lorax, to hear the tale of the Lorax you have to pay the “Once-ler,” who then, whispering to you through a “snergelly” hose, paints you a word-picture of “truffula” trees, frolicking “Bar-ba-loots,” and the horrible garments called “thneeds” that set off a path of environmental destruction.

Seuss is known for liberally peppering his stories with such nonsense words, which gave them their trademark silliness, and more opportunities for rhymes.

Chris Westbury, a professor of psychology at the University of Alberta, discovered a fairly Seuss-ian word himself when he and his colleagues were administering a lexical decision task (a test to see how quickly people can identify strings of letters as either words or non-words). They noticed that people always laughed when they saw the non-word “snunkoople.”

That got them wondering—was there something in particular about nonsense words that made them funny? If so, could it be measured?

Turns out there is and it can, according to a new study by Westbury and other researchers from the University of Alberta and the University of Tübingen in Germany.

“I was originally going to call the paper ‘The Snunkoople Effect,’” Westbury says. Instead, it’s called “Telling the world’s least funny jokes: On the quantification of humor as entropy.” (Whether nonsense words are indeed the least funny jokes possible is debatable.)

The entropy in question here is informational entropy, or “Shannon entropy,” which is “a way of measuring uncertainty in a signal,” Westbury says. “We treat the words as a signal.” A word’s entropy depends on how common its letters are in English: the more uncommon letters a nonsense word contains, the lower its entropy.
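That letter-frequency idea can be sketched in a few lines of Python. This is a minimal illustration, not the study’s exact implementation: it assumes a standard table of approximate English unigram letter frequencies, sums the Shannon term −p·log₂(p) for each letter, and averages over word length so words of different lengths are comparable (the averaging and the specific frequency table are assumptions).

```python
import math

# Approximate relative frequencies of letters in English text (percent).
FREQ = {
    'e': 12.70, 't': 9.06, 'a': 8.17, 'o': 7.51, 'i': 6.97, 'n': 6.75,
    's': 6.33, 'h': 6.09, 'r': 5.99, 'd': 4.25, 'l': 4.03, 'c': 2.78,
    'u': 2.76, 'm': 2.41, 'w': 2.36, 'f': 2.23, 'g': 2.02, 'y': 1.97,
    'p': 1.93, 'b': 1.49, 'v': 0.98, 'k': 0.77, 'j': 0.15, 'x': 0.15,
    'q': 0.10, 'z': 0.07,
}
PROB = {c: f / 100.0 for c, f in FREQ.items()}

def letter_entropy(word: str) -> float:
    """Average Shannon entropy term -p*log2(p) per letter of the word.

    Rare letters (small p) contribute terms near zero, so words built
    from uncommon letters score lower -- matching the article's point
    that nonsense words with uncommon letters are lower in entropy.
    """
    terms = [-PROB[c] * math.log2(PROB[c]) for c in word.lower() if c in PROB]
    return sum(terms) / len(terms)

# A nonsense word full of uncommon letters scores lower than a common word.
print(letter_entropy("subvick") < letter_entropy("eaten"))  # True under these frequencies
```

Under this toy measure, a word like “subvick” (built from b, v, k, and other low-frequency letters) comes out well below an ordinary English word, which is the direction of the effect the researchers measured.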

After first giving people a survey to make sure that they did, generally, find nonsense words funny, the researchers had them look at pairs of nonsense words, created by a computer program. The subjects had to say which word was funnier, and they had to rate them on a one-to-seven scale of not-funny to funny.

What they found is that the lower the words were in entropy—the weirder they were—the funnier the subjects found them. And while it’s probably not surprising that people found weird words funny, the interesting thing is that the relationship between how unusual a word was and how humorous it was rated was nearly linear, and based on that, the researchers were able to predict how funny people would find different words.

With one small snag—some of the nonsense words the computer spat out were slang words, or closely related to already-existing dirty words. For example, five of the words people rated as the funniest were “whong,” “dongl,” “shart,” “focky,” and “clunt.” But the effect persisted even after the researchers dealt with that issue. (Some of the funniest, lowest-entropy words that didn’t accidentally evoke any body parts or bodily fluids were “quingel,” “probble,” “finglam,” and “subvick.”)

This concept is something that the king of nonsense words—Dr. Seuss—seemed to intuitively understand. The researchers took 65 made-up words from Dr. Seuss’s books—like “wumbus” and “yuzz-a-ma-tuzz”—and ran them through the entropy formula. They found that Dr. Seuss’s made-up words were reliably lower in entropy than regular English words.

The results of this study fit in with a prominent scientific understanding of humor—that things are funny when they violate our conceptual expectations. But nonsense words, being a fairly simple unit of humor, offer the opportunity to measure that idea mathematically. Most humor in the real world is too complicated, and depends on too many factors, for that kind of measurement. You could probably do the same thing with funny phrases, Westbury says—the example the study gives is that “existential llama” is funnier than “angry llama” because the words in the first phrase are paired together less frequently.

But “if you start a joke with, ‘A priest, a rabbi, and a monk walk into the bar,’ we know that’s weird, but how weird it is, we don’t know,” Westbury says. “Most jokes depend upon real-world probabilities that are almost impossible to calculate.”