Decision 2012 – The “ABCs” of Polling (Part 2 of 3)

In the first installment, we discussed the basics of polling and how to evaluate a poll. In this installment, we will “grade” the pollsters, using the 2010 midterm elections as our sample data.

How did they do?

As the election season begins to heat up, numerous polls will be released. Since some polls are more accurate than others, it’s worth asking how the pollsters did in 2010. To answer that question, here is how we analyzed the data:

(1)   In the 2010 election cycle, we accumulated a significant amount of polling data, but we are only interested in polling conducted during the last week before Election Day. This means we looked at all polls released between Monday, October 25 and Monday, November 1, 2010 (Election Day was Tuesday, November 2);

(2)   We only included polling conducted on statewide races for the U.S. Senate or Governor;

(3)   Finally, we excluded pollsters that released fewer than five polls during that final week, since we are only interested in the “majors.”

These criteria produced 10 pollsters, although two of them (Morning Call, which polled the Pennsylvania races, and the Sunshine State Poll, which polled only the Florida races) are arguably more limited in scope and are not included in our rankings of “national” pollsters.

Since these pollsters typically conducted multiple polls for the same race during that week, we averaged their numbers for each race they polled. With this refined data, we then ranked/rated the pollsters in two ways: (1) the percentage of races they accurately polled (i.e., did they correctly predict the winner?), and (2) for the races a pollster called correctly, how accurate were its numbers?
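For readers curious about the mechanics, here is a minimal sketch (in Python) of how this filtering and averaging could be done. The record layout and field names are hypothetical, and the tie-breaking rule for a pollster’s “call” is our own simplification; treat it as an illustration rather than the exact procedure we followed.

from collections import defaultdict
from datetime import date

# The final-week window and the "majors" cutoff described above.
WINDOW = (date(2010, 10, 25), date(2010, 11, 1))
MIN_POLLS = 5

def prepare(polls):
    """Filter the raw polls and average each pollster's numbers per race.

    Each poll is assumed to be a dict with hypothetical keys:
    "pollster", "race" (e.g. "Senate Nevada"), "released" (a date),
    "leader" (the candidate ahead), and "leader_pct" (that candidate's %).
    Step 2, restricting to Senate/Governor races, is assumed already applied.
    """
    # Step 1: keep only polls released during the final week.
    final_week = [p for p in polls
                  if WINDOW[0] <= p["released"] <= WINDOW[1]]

    # Step 3: keep pollsters that released at least five polls that week.
    counts = defaultdict(int)
    for p in final_week:
        counts[p["pollster"]] += 1
    majors = [p for p in final_week if counts[p["pollster"]] >= MIN_POLLS]

    # Average each pollster's numbers for every race it polled.
    grouped = defaultdict(list)
    for p in majors:
        grouped[(p["pollster"], p["race"])].append(p)
    averaged = {}
    for key, entries in grouped.items():
        entries.sort(key=lambda e: e["released"])
        averaged[key] = {
            "winner_pct": sum(e["leader_pct"] for e in entries) / len(entries),
            # Simplification: treat the most recent poll's leader as the call.
            "predicted_winner": entries[-1]["leader"],
        }
    return averaged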

Test One: Win/Loss Ratio

In this test, we counted all the races a “major” pollster was involved in and tallied the number it called correctly. Using this method, Survey USA correctly called all 16 races it polled. Quinnipiac was accurate in 91% of its races, while Fox was the least accurate: only 67% of the 15 races it polled in the last week before the election were called correctly.
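Building on the prepare() output sketched above (again, an illustration with hypothetical structures), the tally itself is straightforward:

def win_loss_ratio(averaged, actual_winners):
    """Percentage of races each pollster called correctly.

    `averaged` maps (pollster, race) -> {"predicted_winner": ...};
    `actual_winners` maps race -> the candidate who actually won.
    """
    correct, total = {}, {}
    for (pollster, race), result in averaged.items():
        total[pollster] = total.get(pollster, 0) + 1
        if result["predicted_winner"] == actual_winners[race]:
            correct[pollster] = correct.get(pollster, 0) + 1
    # e.g. {"Survey USA": 100, "FOX": 67, ...}
    return {p: round(100 * correct.get(p, 0) / total[p]) for p in total}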

Pollster                  Win/Loss Ratio
Survey USA                100%
Quinnipiac                 91%
Rasmussen                  90%
Mason-Dixon                88%
McClatchy/Marist           86%
Public Policy Polling      81%
CNN/Time                   78%
FOX                        67%

As a side note, it’s interesting to see which races the pollsters called correctly. In this analysis, we looked at the races where at least five of the “major” pollsters were running polls that final week:

Race                        % of Pollsters Predicting Winner
Governor, California        100%
Governor, Colorado          100%
Governor, Ohio              100%
Governor, Pennsylvania      100%
Senate, California          100%
Senate, Florida             100%
Senate, Kentucky            100%
Senate, Ohio                100%
Senate, Pennsylvania        100%
Governor, Florida            40%
Senate, Washington           40%
Senate, Colorado             20%
Senate, Nevada                0%

One thing that should be immediately apparent is that pollsters are not infallible: there were four races where most of the pollsters’ final-week polling was collectively wrong. In each case, those races were upsets or extremely close. It could also be argued that because each of the four races (Florida Governor, Washington Senate, Colorado Senate, and Nevada Senate) had substantial early voting, the pollsters did not have this demographic built into their polling models.

Test Two: Accuracy

There are two expectations a reader has of a poll: (1) that it correctly predicts the winner, and (2) that its numbers are reasonably accurate. We have already discussed the win/loss records of the major pollsters; now we would like to evaluate their numerical accuracy. For this analysis, we looked at how close each “major” pollster came to the winning candidate’s actual percentage, averaged over the races where it correctly called the winner; the result is the “Avg. Error” column below, expressed in percentage points.
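As before, here is a minimal sketch of this calculation on the same hypothetical data structures:

from collections import defaultdict

def average_error(averaged, actual_results):
    """Average error, in percentage points, on the winner's share,
    counting only the races the pollster called correctly.

    `averaged` maps (pollster, race) -> {"predicted_winner", "winner_pct"};
    `actual_results` maps race -> (winner, winner's actual percentage).
    """
    errors = defaultdict(list)
    for (pollster, race), result in averaged.items():
        winner, winning_pct = actual_results[race]
        if result["predicted_winner"] == winner:
            errors[pollster].append(abs(result["winner_pct"] - winning_pct))
    return {p: round(sum(e) / len(e), 1) for p, e in errors.items()}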

Pollster                  Avg. Error (points)    Win/Loss Ratio
Public Policy Polling     1.3                     81%
CNN/Time                  1.4                     78%
McClatchy/Marist          1.7                     86%
Rasmussen                 2.0                     90%
FOX                       3.3                     67%
Quinnipiac                3.4                     91%
Survey USA                3.7                    100%
Mason-Dixon               6.4                     88%

Curiously, the pollsters that most often predicted the winners correctly weren’t necessarily the ones that predicted the winning candidate’s percentage most accurately. In fact, Rasmussen was the only pollster in the “Final Four” for both win/loss ratio and accuracy, while Fox was in the “Bottom Four” on both metrics. Each of the other pollsters (Survey USA, Quinnipiac, Mason-Dixon, McClatchy/Marist, Public Policy Polling, and CNN/Time) ranked in the top four on one metric but the bottom four on the other.

Upcoming

In the next installment of this article, we will begin our analysis of the 2012 elections from a polling perspective. Both sides will spin poll results as best they can for their candidates, so we believe it’s important to get the entire story from a multitude of polls rather than letting a single poll tell the story of how a race is progressing.