The years spent devising the study, crafting questions, creating and describing voter types, analyzing the implications, and writing up the findings (not to mention presenting them all over the country, including at party nominating conventions in 1988) were exciting, challenging, satisfying, and more fun for me than anything else I have ever done professionally. Our group has gotten together periodically to reminisce and celebrate our friendship and our successes.
I like to think I played a part in it all. But the frank reality is that Andy Kohut was the driver here. Andy brought imagination, discipline, integrity, and indefatigability to the project. Don Kellermann and Times Mirror got the study and the Center going, and provided incredibly generous funding and backing—enabling us to do thousands of face-to-face interviews, unheard of in this day and age. But the typology and design of the survey came from Andy, and the execution, making it a model survey among surveys, was a tribute to him.
The Times Mirror Center became a casualty of Times Mirror’s struggles in the new age of newspapers and media. It was not clear it would survive, until Rebecca Rimel and the Pew Charitable Trusts picked it up and transformed it in 1996 into the Pew Research Center on the People and the Press, now simply the Pew Research Center, with Andy at its helm. Along the way, Andy positioned the Center at the leading edge of research on global public opinion, as well as that of the United States.
It should come as no surprise to anyone sentient who follows American politics that polling itself is in crisis mode. Any fly-by-night hustler, or any ambitious consultant or institution, can hang out a shingle that says “Acme Polling,” release results, and have a good chance of having them picked up by hungry news organizations that don’t care whether Acme Polling is any better than the Acme Company that outfitted Wile E. Coyote, much less equivalent to reputable surveys. A lot of surveys are done by partisans who—surprise—release results that are stunningly favorable to their candidates or to the companies or industries that pay them.

For telephone surveys, response rates continue to drop (they are now below 9 percent). Most surveys can’t afford full and robust samples of cell phones to complement randomly dialed landlines. The huge swaths of the population that no longer use landlines, the widespread revulsion against unsolicited phone calls, and the difficulty of reaching people at set hours have combined to create major problems for pollsters trying to conduct a survey overnight, or over a couple of days. As costs have gone up, many pollsters have turned to automated poll calls, adding to the unreliability and instability of the results they produce.

Internet surveys may be the wave of the future, but they have challenges of their own. Other “pollsters” dispense with surveys altogether and run “focus groups,” as if twenty or thirty people in a room were enough to divine public opinion. Then there is another reality, pointed out decades ago in a brilliant paper by my late mentor Phil Converse: we often measure “non-opinions,” assuming that there is some knowledge base when respondents answer questions.