Decision 2012 – The “ABCs” of Polling

2012 promises to be a busy election year, with federal, state, and local offices on the ballot. Part of that activity includes yard signs, political commercials, bumper stickers, and public opinion polls conducted for races big and small.

As annoying as polls are perceived to be, they do provide a relatively neutral narrative of how a race is progressing. Additionally, politics is like sports in that people want to know who’s up and who’s down at a given point in time. It’s therefore important to understand what polls are and how they work, because polls can shape the trajectory of a race.

Polling – the basics

A poll is, quite simply, a survey of a representative sample of voters at a given point in time. Different polls produce different results largely because assembling that “representative sample” is a subjective process.

In fact, it can be argued that the content of that “representative sample” is a significant driver of the ultimate results. The easiest way to sample (and the least accurate) is to call from a list of phone numbers. However, this method immediately brings data quality issues to the forefront. Even assuming the person at the other end of the line is a registered voter, can we be certain that the voter is eligible to vote in the election being polled? Officials who are not elected statewide, for instance, represent geographical areas that vary in size from a handful of precincts to entire regions of the state. Without understanding the boundaries of the applicable election district, there is a risk that voters will be asked to participate in a poll that is essentially irrelevant to them. Political campaigns, likewise, have been embarrassed when they’ve sent mail to voters who don’t live in their district.

Another way of sampling voters is to use a voter file and call ONLY those voters living in the district or state. While this method eliminates nonvoters (and irrelevant voters) from the equation, it has disadvantages as well: (1) if the voter list hasn’t been updated, voters who recently registered (or who moved into a fast-growing area) will not be sampled, and (2) depending on the type of election (Presidential, statewide, special election, tax election), a pollster may be calling people who have no intention of voting in that particular election.

Finally, pollsters can limit their sample to “likely” voters. Even though “likely” is a subjective term (some people vote in every election, while others turn out only for Presidential contests), the idea is to evaluate a voter’s history before deciding whether that voter belongs in the sample. Alternatively, pollsters can use a “prescreening” question asking respondents how likely they are to vote in the upcoming election; those who are not sufficiently interested are excluded from the results.
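To make the screening idea concrete, here is a minimal sketch (in Python) of a likely-voter screen run against a voter file. The field names, the district label, and the “voted in at least two of the last four elections” threshold are illustrative assumptions, not any particular pollster’s method:

    # Keep only respondents registered in the relevant district who have
    # voted in at least two of the last four elections. All names and
    # thresholds here are hypothetical.
    RECENT_ELECTIONS = ["2010_general", "2010_primary", "2008_general", "2008_primary"]

    def is_likely_voter(voter, district, min_votes=2):
        """Return True if the voter belongs in the poll sample."""
        if voter["district"] != district:
            return False  # outside the district: the poll is irrelevant to them
        votes_cast = sum(1 for e in RECENT_ELECTIONS if voter["history"].get(e, False))
        return votes_cast >= min_votes

    voter_file = [
        {"id": 1, "district": "LA-06", "history": {"2010_general": True, "2008_general": True}},
        {"id": 2, "district": "LA-06", "history": {}},                      # never voted
        {"id": 3, "district": "LA-03", "history": {"2010_general": True}},  # wrong district
    ]

    sample = [v for v in voter_file if is_likely_voter(v, "LA-06")]
    print([v["id"] for v in sample])  # -> [1]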

Therefore, when evaluating a poll release, it’s important to determine whether the poll sampled registered or likely voters, because those who are merely “registered” are less likely to show up at the polls and vote. In fact, in the state of Louisiana, 17% of those on the voter rolls have never voted, while another 11% last voted before the 2008 Presidential election. From the author’s standpoint, it would not make sense to poll these voters.

It’s also worth mentioning that regardless of the method used to sample voters, there are data quality issues on the “back end” as well: not all demographics participate in a poll equally, and this reality can and does skew the results. To illustrate: the Louisiana electorate as of May 1, 2012 was approximately 30% black. If the respondents in a poll sample were only 20% black, the result, depending on the election, may be improperly weighted towards a Republican or conservative candidate, and the poll should not be considered credible.
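Using the figures above, here is a back-of-the-envelope sketch of how a pollster might correct such a skew, weighting each respondent by (population share ÷ sample share) for his or her group. The candidate-support numbers are hypothetical and chosen only to show the direction of the correction:

    # Post-stratification weighting: the electorate is ~30% black, but the
    # raw sample is only 20% black. Each respondent is weighted by
    # (population share / sample share) for his or her group.
    population = {"black": 0.30, "other": 0.70}   # electorate composition
    sample     = {"black": 0.20, "other": 0.80}   # raw poll respondents

    weights = {group: population[group] / sample[group] for group in population}
    print(weights)  # {'black': 1.5, 'other': 0.875}

    # Applying the weights to hypothetical support for "Candidate A" by group:
    support  = {"black": 0.10, "other": 0.55}
    raw      = sum(sample[g] * support[g] for g in sample)               # 0.460
    weighted = sum(sample[g] * weights[g] * support[g] for g in sample)  # 0.415
    print(f"raw: {raw:.3f}, weighted: {weighted:.3f}")

In this hypothetical, under-sampling a group that gives Candidate A little support inflates the raw number by about four and a half points; the weighting pulls it back toward what a properly proportioned sample would have shown.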

Ways to evaluate a poll

We mentioned above that the type of sample a pollster employs should be considered when evaluating a poll. There are additional factors to consider when critiquing a poll:

(1)    Who released the poll: In other words, was it a candidate or interest group (business, labor, etc.) releasing numbers favorable to its cause, or was it a (presumably) more objective media outlet (newspaper, TV station, magazine) paying for the poll? And if a media outlet paid for the poll, which polling firm conducted it?

(2)    When the poll was conducted: A poll conducted two days ago is a very good assessment of a candidate’s strength or weakness. A poll conducted back in January, on the other hand, is pretty much useless, because (a) the data are stale, and (b) the tempo of a campaign picks up as the season progresses, and voters can and do change their minds;

(3)    Trends are important: If the same pollster polls the race multiple times and there is clear movement towards a candidate, that trend is important to note. Similarly, if multiple pollsters polling at the same time show movement towards a candidate, that “collective movement” is also worth noting.

Our method of analyzing polls

We believe that a collection of polls on a given race tells a more complete story than any individual poll. Therefore, as the election season progresses, we periodically review the polling taken on Presidential and Congressional races and load this data into a database. We then average all polls for a race over a period of time. Right now, we examine the last 28 days of polling, but closer to Election Day, that “window” shrinks to 14 or even 7 days, since public opinion becomes more fluid as the race enters its final stretch.
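For illustration, here is a minimal sketch of that rolling-average approach. The poll numbers are made up, and the exact points at which the window shrinks from 28 to 14 to 7 days are assumptions for the example, not our production rules:

    # Average all polls whose dates fall within the last N days, shrinking
    # N as Election Day nears. Poll data and thresholds are illustrative.
    from datetime import date, timedelta

    polls = [
        {"date": date(2012, 9, 1),  "candidate_a": 48.0},
        {"date": date(2012, 9, 20), "candidate_a": 47.0},
        {"date": date(2012, 9, 25), "candidate_a": 50.0},
    ]

    def window_days(today, election_day):
        """Shrink the averaging window from 28 to 14 to 7 days near the election."""
        days_out = (election_day - today).days
        if days_out <= 14:
            return 7
        if days_out <= 30:
            return 14
        return 28

    def rolling_average(polls, today, election_day):
        cutoff = today - timedelta(days=window_days(today, election_day))
        recent = [p["candidate_a"] for p in polls if p["date"] >= cutoff]
        return sum(recent) / len(recent) if recent else None

    # 39 days out, so the 28-day window applies and all three polls count:
    print(rolling_average(polls, date(2012, 9, 28), date(2012, 11, 6)))  # 48.33...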

Upcoming

In the next installment of this article, we will discuss the performance of pollsters in several selected races in the 2010 election cycle, and we will begin reporting on polling that is already underway for the 2012 election cycle.