Many national pollsters and research organizations are bound to be thrown under the Biden Bus or hit by the Trump Train – and some for good reason. There’s still much we don’t know, so it’s too soon for a deep dive on better and worse methodologies. Worth mentioning, though: predictions made in California seem to have fared better, thanks in particular to the availability of high-quality voter file data. It’s reasonable to assume that some pollsters repeated mistakes made previously, while others overcorrected and inadvertently created new errors. It’s also likely that newly embraced techniques and innovations invited new biases of their own.

Rather than veer off into academic discourse that might interest some but will bore most, it’s more valuable to consider that polling is both a science and an art.

Fundamentally, polling is conducted either TO SHOW or TO KNOW. 

Publicly released research most commonly fits in the category of polling “to show.” Any conversation underway on ‘how did the poll/pollsters get it so wrong (again!)’ is primarily focused on the polls “to show.” 

Saying a poll is “to show” does not mean it is fake or inaccurate. The underlying research might be methodologically sound or it could be total garbage. That being said, it is useful to consider who sponsored the research. This is especially true with campaign polling – it’s reasonable to assume the campaign wants the findings released to aid their efforts in some way. 

People don’t want complexity – they want simplicity. The voting public, the media, and even seasoned campaign strategists find comfort in singular answers. For that reason, polling “to show” is often distilled to the simplest level such as “the horserace: who is winning.” 

This very well may be the source of the agita – and the agitprop – that stemmed from publicly released polls of 2020. 

By contrast, polling “to know” embraces complexity and uses it to simulate a range of realities.  

Research that focuses on “know” rather than “show” is the best way to achieve results – which happens to be why my firm, Core Decision Analytics (CODA), chose “Know Why… Know How” as our slogan. 

When polling “to know,” one of the most common levers toggled is “turnout”: what will the electorate look like? In California, we consider many factors available from high-quality voter file data from sources such as PDI (Political Data Inc.), including traditional demographics such as gender, age, and ethnicity, but also variables such as past vote history and – now more than ever – the mechanics of how voters cast their ballots. I would argue that how ballots are cast will be the growing and critical storyline of the 2020 election.

Turnout is really a question about WHO gets polled. While this challenge also exists in polls “to show,” it’s often overlooked in the quest to share a simple, clear answer. The aspect that I think deserves more attention is WHAT pollsters ask.

Take, for example, a standard California legislative contest. Imagine we want to know the “state of the race.” There’s no shortage of options for WHAT to ask. Assuming a standard two-party contest, we might ask a generic ballot question (“would you vote for the Democratic or Republican candidate?”). Or perhaps we ask using just the candidate names, such as “Bob Smith” or “Sally Jones.” But why not “Democrat Bob Smith” or “Republican Sally Jones”? And shouldn’t we also layer in the “designations” that voters see on the ballot (for example, qualifying that the hypothetical Bob Smith is a businessman whereas Sally Jones is an educator)?

Each of these different – and valid – approaches is likely to yield a slightly different answer to the same general question. When we try to understand and predict elections that often come down to razor-thin margins, these nuances may make the difference. Layer in the true complexity, and now you can see the power and value of polling “to know.”

Polling “to know” allows us to understand these differences – multiplied by the richness of scenarios of WHO might vote. 
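To make the idea concrete, here is a minimal sketch – with entirely hypothetical numbers, not drawn from any real poll – of how the same raw survey responses can produce different toplines depending on which turnout scenario is assumed:

```python
# Toy illustration: identical group-level support, different assumed electorates.
# All figures below are invented for demonstration purposes.

# Hypothetical support for "Candidate A" by age group
support_a = {"18-34": 0.62, "35-64": 0.50, "65+": 0.41}

# Two hypothetical turnout scenarios: each group's share of the electorate
scenarios = {
    "high_youth_turnout": {"18-34": 0.30, "35-64": 0.45, "65+": 0.25},
    "older_electorate":   {"18-34": 0.15, "35-64": 0.45, "65+": 0.40},
}

def topline(support, weights):
    """Weighted average of group-level support under a turnout scenario."""
    return sum(support[group] * weights[group] for group in weights)

for name, weights in scenarios.items():
    print(f"{name}: Candidate A at {topline(support_a, weights):.1%}")
```

Same respondents, same answers – yet the topline shifts by several points purely because of WHO is assumed to show up. Multiply this by the question-wording variations described above and the range of plausible outcomes widens further.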

All this extra effort is often skipped in the snapshot of polling “to show.” But when properly leveraged in a poll “to know,” it unlocks a kaleidoscope with a range of possibilities – including the reality that emerges when the final votes are tallied.

Polling “to know” allows us to see the situation as it exists, but also as it could be. 

So, looking to the future, what does all this mean? Public-facing polls “to show” might indeed be flawed and in need of serious repair. But don’t dismiss or discredit the value of polls “to know,” which strategists can continue to leverage to chart a course to victory.

In other words, polling is not dead. 

 

Adam Rosenblatt is President of Core Decision Analytics (CODA), the opinion insights and data analytics firm within the Core Strategic Group, a premier California-based public affairs holding company. Adam provides actionable and trusted research-based guidance to political and corporate clients in California and nationwide. Learn more at coredecision.com