The 2020 Presidential Election was historic for many reasons. Not only was the election one of the most polarizing in modern politics, but it was also conducted during a pandemic that required a high proportion of early and mail-in voting. The election was also significant for an industry-wide failure in polling. To understand why so much of the polling was askew, I joined other members of the polling industry, academia, the non-profit sector, and the media in a task force charged with finding out what went wrong.
In late July, after a great deal of time and analysis, our report from the American Association of Public Opinion Research was released. Overall, we found that the average state-level presidential poll overstated Biden’s support relative to Trump by 4.3 percentage points. In other words, the average poll with a 4-point Trump lead resulted in slightly more than an 8-point Trump win when the vote was certified, and the average poll with a 4-point Biden lead corresponded to an essentially tied outcome.
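The arithmetic behind those two examples can be made explicit. This is a minimal sketch, not anything from the report itself: it simply shifts a poll's reported Biden-minus-Trump margin by the 4.3-point average overstatement the report documented.

```python
# Illustrative arithmetic only: apply the report's 4.3-point average
# overstatement of Biden's margin to two hypothetical poll results.
AVERAGE_BIAS = 4.3  # percentage points by which polls overstated Biden vs. Trump

def adjusted_margin(poll_margin_biden):
    """Shift a poll's Biden-minus-Trump margin by the average 2020 bias."""
    return poll_margin_biden - AVERAGE_BIAS

# A 4-point Trump lead in the poll (-4.0) implies roughly an 8.3-point Trump win.
print(adjusted_margin(-4.0))
# A 4-point Biden lead in the poll (+4.0) implies a near-tie.
print(adjusted_margin(4.0))
```

The sign convention (positive = Biden lead) and the function name are mine; the 4.3-point figure is the report's.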
The fact that this was the largest error since pollsters started tracking state-level polling error caused many to rightfully worry about the accuracy of pre-election polls and how they are often used to drive stories and coverage. While such concerns may be justified, the results also highlight the importance of placing pre-election poll results in a larger context and empowering ourselves to be better consumers of the polling numbers we often encounter when following politics.
Pollsters who do pre-election polls have a very difficult task. This was especially true in 2020, when a global pandemic changed how many voters were able to vote. Not only do pollsters have to determine what the electorate is going to look like – something we still do not know eight months after Election Day – they also need to figure out which of the people they are able to interview will actually vote.
As the United States has become more polarized in terms of where we live, what we think, and how much we trust various institutions, reaching voters has become increasingly challenging. There is no way to know in advance whether pollsters' assumptions about the electorate are correct – for example, whether their samples include too few younger voters or too many rural voters. Furthermore, we often care most about close races, where small errors can have large effects. In many ways, it is remarkable that the polling error in 2020 was as small as it was.
So, what does this mean for how we should think about pre-election poll results? First and foremost, pollsters must make a lot of adjustments and assumptions to produce the numbers – assumptions that can have large effects and that often go unstated. As a result, we should resist reading too much into the precise numbers being reported. Polls can tell us which races are likely to be close, but the actual margin can be off by enough to make a difference in close races. Moreover, if recent patterns hold, the polling error is likely to understate Republican support – meaning that polls showing slight Republican leads may be more likely to be correct than polls showing slight Democratic leads.
Secondly, because politics and pollsters change, it can be hard to use past performance to judge future results. We simply do not know how accurate pre-election polls in 2022 will be and whether they will continue to understate Republican support. If the polling error was caused by supporters of President Trump who were voting because of his candidacy, then it is possible that the polls will “self-correct” when Trump is not on the ballot. This is what we saw in 2018 after the polling error of 2016.
Alternatively, perhaps pollsters over-corrected in response to 2020 and ended up with results that overstate Republican support in 2022; however, it is impossible to know for sure.
Thirdly, more polls showing similar results should not necessarily increase our confidence. If the polls are all using the same mistaken assumptions about the electorate – or if they are all affected by similar difficulties in reaching some voters – more polls will not lead to more accurate results. Nearly every Michigan presidential poll conducted in the final month showed at least a 7-point Biden lead, but the final margin was under three percentage points. The few polls that showed a smaller Biden lead mistakenly had Trump winning the state.
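The point about shared assumptions can be illustrated with a toy simulation. The numbers below are hypothetical, not drawn from any real polls: if every poll carries the same built-in bias, averaging more polls shrinks the random sampling noise but leaves the shared bias completely untouched.

```python
import random

random.seed(0)
true_margin = 3.0   # assumed "true" Biden-minus-Trump margin (hypothetical)
shared_bias = 4.0   # error common to every poll (same flawed assumptions)
noise_sd = 2.0      # poll-to-poll sampling noise, in percentage points

def poll_average(n_polls):
    """Average n simulated polls that all share the same systematic bias."""
    polls = [true_margin + shared_bias + random.gauss(0, noise_sd)
             for _ in range(n_polls)]
    return sum(polls) / n_polls

for n in (1, 10, 100):
    print(n, round(poll_average(n), 1))
# The averages settle near 7.0 (true margin plus bias), never near 3.0:
# more polls cancel the noise, not the shared error.
```

This is why a stack of polls agreeing with each other, as in the Michigan example, can still miss together.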
So where does this leave us as voters and citizens trying to follow election coverage?
The only poll that really counts is the poll that takes place on Election Day. Never let a poll result discourage you from voting or participating; the assumptions of pollsters should never affect whether you choose to make your voice heard.
Think of polls as telling us where races are likely to be close, and resist the temptation to read too much into poll results or small changes in them. For polls where the candidate's lead is less than twice the so-called “margin of error,” think of the results as being suggestive rather than predictive. They can be useful for helping us understand what voters say they are thinking, but only in an approximate way.
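That rule of thumb can be written down directly. This is a sketch of my own framing, not a formal statistical standard: compare the absolute lead to twice the reported margin of error.

```python
# Rough rule of thumb from the discussion above (not a formal standard):
# a lead smaller than twice the margin of error is only "suggestive".

def poll_reading(lead, margin_of_error):
    """Classify a poll's lead relative to its margin of error."""
    if abs(lead) < 2 * margin_of_error:
        return "suggestive"
    return "predictive"

# With a typical 3.5-point margin of error, the threshold is a 7-point lead.
print(poll_reading(lead=5.0, margin_of_error=3.5))  # suggestive
print(poll_reading(lead=9.0, margin_of_error=3.5))  # predictive
```

The 3.5-point margin of error in the usage example is a hypothetical but common value for state-level polls.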
So why bother with polls? As long as we realize that polling results are suggestive rather than definitive, political polling has a very important role in politics, because it is one of the few ways we have of understanding what citizens or voters are thinking and why. Elections can tell us who wins but not why. It matters whether voters cast ballots in favor of the winning candidate or simply against the losing one, because the former could credibly claim a “mandate” for action while the latter could not.
Even though politics is increasingly being waged on social media, it is well known that people active on social media have stronger, and more extreme, political views than ordinary citizens and voters. At the end of the day, polls certainly face challenges – and it is important to put the results in a larger context because of those challenges – but political polling remains one of the best ways that we have of trying to understand what the public is thinking and why.