JMC Analytics and Polling’s “A-B-Cs” of polling

In a political or issue campaign, substantial resources are spent on TV, direct mail, Internet advertising, social media, and the like to communicate the campaign’s message to voters. But how can campaigns evaluate whether these communications were effective? That’s where polling comes in: a properly constructed poll is the most effective (and objective) way to measure current opinion, or shifts in it. But while polling has its role in a campaign, it is also a tool that is often misunderstood. This article will attempt to “demystify” polling.

The Basics

First, it’s important to understand what a poll is – and isn’t. Quite simply, a poll is a representative (and random) sample of voters taken at (or over) a given point in time. There are two common means of gathering that sample: (1) having live operators call people (also known as “live operator polls”), or (2) using a pre-recorded script to ask questions, with respondents making their answer selections on the phone keypad (also known as “automated polling” or “robo polling”). There is technically a third method as well – gathering information over the Internet – although that is not yet a commonly used surveying method. Where the confusion about polling comes into play (in the opinion of the author) is in the component parts of the polling definition.

Part 1: Representative Sample

An accurate poll needs to represent the population (state, parish/county, city, district) being surveyed, and the sample of that population has to be drawn randomly – in other words, deliberately sampling only Republicans, or only residents of the city of New Orleans, would improperly skew the results.
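
To make “random” concrete, here is a minimal sketch in Python (the voter records and sample size are hypothetical, invented purely for illustration):

```python
import random

# Hypothetical voter file: every voter in the jurisdiction being surveyed.
voter_file = [
    {"id": 1, "party": "R", "city": "Baton Rouge"},
    {"id": 2, "party": "D", "city": "New Orleans"},
    {"id": 3, "party": "I", "city": "Shreveport"},
    # ...in practice, thousands or millions of records
]

# A random sample gives every voter an equal chance of being selected.
sample = random.sample(voter_file, k=2)

# By contrast, deliberately restricting the pool skews the results:
skewed = [v for v in voter_file if v["party"] == "R"]  # not representative
```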

While the sample must be random, however, the way the sample universe is defined (in other words, who should and should not be called?) is an area where pollsters’ philosophies differ considerably.

The simplest way to sample a population (like all voters in a state, or voters living in a particular district) is to randomly dial a sample of phone numbers within that geographical area. However, this method has several flaws: (1) the respondent may not even be registered to vote (a recent Census estimate noted that there were about 3.5 million Louisianians of voting age, while the actual registered voter count is 2.9 million), and (2) even if the respondent is a registered voter, he/she may not be eligible to vote in that jurisdiction in an upcoming election.
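
The first flaw is larger than it might appear; a quick check using the figures just cited (a simple sketch, nothing more):

```python
voting_age_population = 3_500_000  # Census estimate cited above
registered_voters = 2_900_000      # actual registered voter count

unregistered = voting_age_population - registered_voters
share = unregistered / voting_age_population
print(f"{unregistered:,} adults ({share:.0%}) reachable by random dialing are unregistered")
# -> 600,000 adults (17%) reachable by random dialing are unregistered
```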

A slightly more restrictive (but more accurate) method of getting a random sample is calling registered voters. This is a more commonly used method, although it also has drawbacks: (1) the voter list needs to be current if we want accurate information – especially in an area with considerable voter population turnover, like a fast-growing suburb or state; and (2) since not everyone votes, getting the opinion of those who are extremely unlikely to show up for the election in question is (from the author’s standpoint) a waste of time. Granted, pollsters typically ask respondents at the beginning how likely they are to vote in order to screen out “unlikely” voters, but in the experience of JMC Analytics and Polling, those screens only filter out about 10% of respondents – far too few to be an effective voter screen, since a 10% screen implicitly assumes a 90% turnout, and voter turnout is almost never 90%.
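
Why is a 10% screen too loose? The implied arithmetic (a back-of-the-envelope sketch using round numbers) makes it clear:

```python
registered = 100            # a notional pool of 100 registered respondents
screened_out = 10           # the ~10% that self-reported screens remove
actual_turnout_rate = 0.68  # e.g., the 2012 Louisiana figure discussed below

screened_in = registered - screened_out           # 90 "likely" voters remain
actual_voters = registered * actual_turnout_rate  # but only 68 will vote

# Even if every actual voter passed the screen, at least 90 - 68 = 22
# of the screened-in respondents will not cast a ballot.
nonvoters = screened_in - actual_voters
print(f"At least {nonvoters:.0f} of {screened_in} ({nonvoters / screened_in:.0%}) will not vote")
# -> At least 22 of 90 (24%) will not vote
```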

The third method (which JMC Analytics and Polling prefers) is even more restrictive: only calling “likely voters,” and using a person’s actual voter history (which governmental jurisdictions typically compile) to make that determination, rather than relying on a poll question about voter likelihood.
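
A voter-history screen is straightforward to express (a hypothetical sketch; the record layout and the “two of the last four elections” threshold shown here are illustrative, and actual thresholds vary by pollster and race):

```python
# Hypothetical voter-file records: each shows which of the last four
# elections the person actually cast a ballot in (True = voted).
voter_file = [
    {"id": 101, "history": [True, True, False, True]},    # frequent voter
    {"id": 102, "history": [False, False, False, False]}, # never voted
    {"id": 103, "history": [True, False, False, False]},  # sporadic voter
]

# Treat anyone who voted in at least 2 of the last 4 elections as "likely."
likely_voters = [v for v in voter_file if sum(v["history"]) >= 2]

print([v["id"] for v in likely_voters])  # -> [101]
```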

There is another reason the author prefers likely (as opposed to registered) voters when building a poll sample universe: from time to time, voters are lectured by political “experts” about their low participation rates in an election. A closer examination of voter file data, however, shows that the mantra of “low voter participation” falsely assumes a clean voter file. In the 2012 Presidential election, 68% of registered voters in Louisiana voted. Does that mean that 32% of voters stayed home? A close examination of a recent voter file shows that 16% (about 476K) of voters have never voted, and another 11% (about 312K) last voted before the 2012 Presidential election. In other words, it can be argued that there are only 2.14 million demonstrably legitimate voters in Louisiana (2.93 million registered voters minus those who have never voted or who last voted before the 2012 Presidential election), and as such, that 2.14 million figure (instead of 2.93 million) should be the denominator when calculating voter turnout. A 2.14 million voter “denominator” would show a revised 2012 turnout of 94% (about 2.01 million Louisiana voters cast a ballot in the 2012 Presidential race). Therefore, polling those who have never voted, or who last voted before the 2012 Presidential election, does not make logical sense.
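
The arithmetic above can be verified directly (a sketch using the approximate figures cited in this paragraph):

```python
registered = 2_930_000    # registered voters in Louisiana
never_voted = 476_000     # the ~16% who have never cast a ballot
lapsed = 312_000          # the ~11% whose last vote predates the 2012 race
ballots_cast = 2_010_000  # ballots in the 2012 Presidential race

official = ballots_cast / registered
print(f"Official turnout: {official:.0%}")  # -> 69% (68% with unrounded counts)

demonstrable = registered - never_voted - lapsed  # 2,142,000 "legitimate" voters
revised = ballots_cast / demonstrable
print(f"Revised turnout: {revised:.0%}")    # -> 94%
```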

Part 2: Given Point in Time

Another component of polling that can affect its relevance and accuracy is the time period over which the survey was conducted. In a perfect world, a poll conducted on Election Day would mirror the final election results. However, polls are not conducted in isolation, and results can be (and frequently are) influenced by external events. Similarly, communications or gaffes from a candidate, the candidate’s opponent, or a special interest group can affect poll results as well. The least ideal situation would be a poll that took two weeks to complete and was not officially released until weeks later – the quality of any information from such a poll release would be highly suspect (or, more likely, irrelevant).

Part 3: Evaluating a Poll

Thus far, we have discussed the two components of the polling definition. Given that poll releases can drive the narrative for or against a campaign, it’s important to be able to evaluate a release objectively, and below are the criteria JMC Analytics and Polling uses:

(1) Who released the poll: In other words, was it a candidate or interest group (business, labor, etc.) releasing favorable numbers, or was the poll conducted on behalf of a theoretically more objective party like a newspaper, TV station, or magazine? Furthermore, does the polling organization releasing the poll historically have a reputation for quality and accuracy?

(2) When the survey was conducted: A poll released publicly right after the fieldwork was done is the “freshest” assessment of voter opinion, while a poll whose survey work was conducted several months earlier but only recently released would be suspect for containing “stale” information;

(3) The type of voter polled: Were registered or likely voters polled? Registered voters are a much broader (and, in the author’s opinion, less accurate) universe than “likely” voters;

(4) Trends are important: If a pollster has polled the same race multiple times, any movement toward or against a candidate is as important as (if not more important than) the numbers from the poll release itself. And if this movement is confirmed by other pollsters, it’s fairly safe to say that the race is genuinely moving.

Conclusion

Polling has a lot of power to shape the narrative of a campaign, since voters tend to support the (perceived) winner of a race. As such, it is important to understand how polls work, so that political observers and elected officials can be educated “consumers” of polling information.