In a political campaign, significant resources are spent on TV, direct mail, Internet, social media, and the like to communicate with voters. But how do campaigns know whether these communications have been effective? The most effective (and objective) way to measure this is through periodic polling, which has the added benefit of revealing whether corrective measures need to be taken if a campaign is losing (or not gaining enough) traction.
While polling has its role in a campaign, it is also a tool that is often misunderstood. This article will attempt to explain the basics of polling.
First, it’s important to understand what a poll is – and isn’t. Quite simply, a poll is a representative (and random) sample of voters taken at a given point in time. In getting that sample, a pollster can either have live operators call people (also known as “live operator polls”) or use a prerecorded script to ask a series of questions (known as “automated polling” or “robo polling”). This is all that a poll is. It is the component parts, however, that create confusion and misunderstanding.
Part 1: Representative Sample
When contacting voters, it’s important that the sample represent the population as a whole. This means that when constructing a “representative” sample, respondents must be selected randomly from the population. However, what constitutes “representative” is quite subjective, because there are several different ways to draw a sample.
The simplest way to sample a population (like all voters in Louisiana, or those living in a particular district) is to randomly dial a sample of phone numbers within that geographical area. However, this method has several flaws: (1) the respondent may not even be registered to vote (a recent Census estimate noted that there were about 3.5 million Louisianians of voting age, while the number of registered voters at that time was 2.8 million), and (2) even if the respondent is a registered voter, he/she may not be eligible to vote in that jurisdiction for an upcoming election.
A slightly more restrictive method of getting a random sample is calling those actually registered to vote. This is a more commonly used method, although it too has its drawbacks: (1) the list needs to be current if we want accurate information – especially in an area where there is considerable turnover in the voter population, and (2) since not everyone votes, getting the opinion of those who are extremely unlikely to show up for the election in question is (from an electoral standpoint) a waste of time. Granted, pollsters typically ask respondents at the beginning of a survey how likely they are to vote in order to screen out unlikely voters, although in the experience of JMC Analytics and Polling, those screens only filter out about 10-20% of respondents – and voter turnout is almost never in the 80-90% range.
Instead, JMC Analytics and Polling prefers to employ an even more restrictive method: only calling “likely voters.” However, instead of asking a voter how likely he/she is to vote, JMC determines voter likelihood from an independently verifiable source – that voter’s historical participation, which most voter files compile electronically.
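The idea of screening by historical participation can be illustrated with a short sketch. The record format, election years, and the "voted in at least 2 of the last 4 elections" threshold below are illustrative assumptions for the example, not JMC Analytics and Polling's actual methodology:

```python
# Sketch: screen a voter file for "likely voters" using historical
# participation rather than self-reported likelihood.
# NOTE: the threshold and record layout are hypothetical examples.
voters = [
    {"name": "A", "history": [2020, 2018, 2016, 2014]},  # votes every cycle
    {"name": "B", "history": [2016]},                    # occasional voter
    {"name": "C", "history": []},                        # never voted
]

RECENT_ELECTIONS = {2014, 2016, 2018, 2020}
THRESHOLD = 2  # assumed cutoff: voted in at least 2 of the last 4 elections

likely = [
    v for v in voters
    if len(RECENT_ELECTIONS & set(v["history"])) >= THRESHOLD
]

print([v["name"] for v in likely])  # only voter "A" passes the screen
```

The key point is that the screen relies on verifiable behavior recorded in the voter file, not on what a respondent says about his/her own likelihood of voting.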
There is another reason the author prefers to use only likely voters: from time to time, voters are lectured by political “experts” about their low participation rates in an election. A closer examination of voter file data, however, calls into question whether all of the voters in the file are in fact “legitimate” voters. To illustrate: in the 2012 Presidential election, 68% of registered voters in Louisiana voted. Does that mean that 32% of the electorate stayed home? Not necessarily. A close examination of a recent voter file shows that 15% of voters (about 425K) have never voted, and another 12% (about 350K) last voted before the 2012 Presidential election. In other words, it can be argued that there are only about 2.1 million demonstrably legitimate voters in Louisiana (2.9 million registered, minus those who have never voted or who last voted before the 2012 Presidential election). If that 2.1 million figure (instead of 2.9 million) is used as the denominator when calculating voter turnout, the revised 2012 turnout is about 95% (since about 2 million Louisiana voters cast a ballot in the 2012 Presidential race). Therefore, polling those who have never voted, or who last voted before the 2012 Presidential election, does not make logical sense.
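The revised-turnout arithmetic above can be checked with a few lines of calculation, using the approximate Louisiana figures cited in the text:

```python
# Worked example of the revised-turnout calculation, using the
# approximate figures from the article.
registered = 2_900_000         # registered voters on the file
never_voted = 425_000          # ~15% of the file: no voting history
lapsed = 350_000               # ~12%: last voted before the 2012 race
ballots_cast_2012 = 2_000_000  # approximate 2012 Presidential ballots cast

demonstrable_voters = registered - never_voted - lapsed
official_turnout = ballots_cast_2012 / registered
revised_turnout = ballots_cast_2012 / demonstrable_voters

print(demonstrable_voters)         # 2125000 "demonstrably legitimate" voters
print(round(official_turnout, 2))  # 0.69 -- close to the official 68%
print(round(revised_turnout, 2))   # 0.94 -- close to the article's ~95%
```

The exact percentages depend on the rounding of the inputs, but the point stands: shrinking the denominator to demonstrable voters moves turnout from roughly two-thirds to well over 90%.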
Part 2: Given point in time
Another component of polling that can affect its relevance/accuracy is the fact that a poll measures voter opinion at a particular point in time. A poll is only truly accurate if conducted right before Election Day, because several things can cause voter opinion to change: (1) massive movement (i.e., an electoral landslide) towards a candidate right before an election, (2) TV ads or mailers that move voters towards or against a candidate after a poll was conducted, and (3) partisan voters coming home in heavily Democratic/Republican areas. The practical reality, however, is that campaigns can’t wait until right before Election Day to conduct a poll, so most polls carry that inherent risk (of the numbers changing, that is).
Part 3: Evaluating a poll
Thus far, we have discussed the components of a poll. Given that poll releases are commonplace (especially for major races), it’s also worth being able to properly evaluate an individual poll when released:
(1) Who released the poll: In other words, was it a candidate/interest group (business, labor, etc.) that released numbers favorable to its campaign, or was it a poll conducted on behalf of a theoretically more objective party (like a newspaper, TV station, or magazine)? And if a media outlet paid for the poll, which polling firm conducted it, and what reputation does that firm have in the community?
(2) When the poll was conducted: A poll released right after the fieldwork was completed is a good assessment of voter opinion. However, a poll conducted several months earlier but only recently released may not be as relevant if the campaign is in full swing, because (a) the data has gone “stale,” and (b) the further along the campaign season is, the faster the tempo picks up – and voters can and do change their minds;
(3) The type of voter polled: Were registered or likely voters polled? Registered voters are a much broader universe than “likely” voters, and including those in the sample who are more occasional voters could skew the results one way or another;
(4) Trends are important: If the same pollster is polling the same race multiple times, any movement towards or against a candidate is worth noting, and tells more of a story than the results from one isolated poll – particularly if this movement is echoed by other pollsters in the field polling the same race.
Polling has a triple purpose: (1) evaluating where a candidate currently stands with the voters, (2) giving campaigns an objective tool to measure whether corrective measures and/or particular strategies need to be deployed, and (3) (if a candidate is doing well or has momentum) generating buzz for a campaign to create peer pressure for undecided voters to get behind that candidate. This article was written to explain the basics so that campaigns can be wise consumers of polling information.