Inaccurate Political Polling in the USA

BlueMoonAcrossThePond

Well-Known Member
Joined
27 Oct 2020
Messages
5,221
Team supported
Manchester City
Political polling in the USA seems to have been much less accurate over the past decade or so than it was in the more distant past.

In 2016, nearly all polls indicated that Hillary Clinton would win the election over Trump, and easily so. In the wake of the 2016 polling debacle, the American Association for Public Opinion Research (AAPOR), the nation’s leading organization of survey researchers, released a report attempting to identify why pollsters got the election outcome so wrong.

This article reports on an interview with Courtney Kennedy, the Pew Research Center's Director of Survey Research, conducted shortly after the 2016 election. It's interesting reading.

Among the points that the interview brought up:

* While the national polls generally came pretty close to the actual nationwide popular vote (which Clinton won by 2.1 percentage points over Trump), the performance of polls at the state level – where presidential elections actually are decided – was a lot spottier.

* When pressed to explain why the polls underestimated Trump's performance in 2016, Kennedy replied: "Another interesting finding had to do with poll respondents’ level of education. A number of studies have shown that in general, people with higher levels of formal education are more likely to take surveys – it’s a very robust finding."

* Another point Kennedy made: "There’s lots of evidence to show that the resources that news organizations have for polling seem to be declining over time, and that does two things, I think: There are fewer news organizations doing polling, and those that do – particularly local news organizations – are using very low-cost methodology".

Fast forward to 2024, and polls are now even less accurate than they were in 2016. But why?

Some cite the fact that polling is still done over the phone - contacting participants by cell phone or landline. In a video I can't seem to find (I think it was a Meidas Touch episode), one of the Lincoln Project founders said he had no confidence in polls for this reason - namely, that poll participants would have to be people who do not screen their calls and are willing to talk to a complete stranger, answering questions about politics for 30 minutes or more.

Fair enough - but this explanation seems pat. Why would it favor one candidate over another? One might well argue that older individuals are more likely to answer cold calls, and that this would skew results towards Republicans - but why can't pollsters calibrate their data to account for whatever bias this introduces? Indeed, Kennedy's finding from 2016 points in the opposite direction: "Another interesting finding had to do with poll respondents’ level of education. A number of studies have shown that in general, people with higher levels of formal education are more likely to take surveys – it’s a very robust finding."
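For what it's worth, pollsters do attempt exactly that calibration, usually via weighting. Below is a toy Python sketch of post-stratification weighting by education - every number in it is invented for illustration (the 40/60 education split, the response rates, the support levels), not taken from any real poll:

```python
import random

random.seed(0)

# Hypothetical electorate: 40% college-educated, 60% non-college.
# Invented support levels for Candidate A: 60% and 45% respectively,
# so true support = 0.4*0.60 + 0.6*0.45 = 0.51.
POP = {"college": (0.40, 0.60), "non_college": (0.60, 0.45)}
true_support = sum(share * support for share, support in POP.values())

# Simulate a survey where college graduates answer at twice the rate,
# so the raw sample over-represents them.
sample = []
for group, (share, support) in POP.items():
    response_rate = 0.6 if group == "college" else 0.3
    n = int(10_000 * share * response_rate)
    sample += [(group, 1 if random.random() < support else 0) for _ in range(n)]

raw_estimate = sum(vote for _, vote in sample) / len(sample)

# Post-stratification: weight each respondent so the weighted sample
# matches the known population shares by education.
counts = {g: sum(1 for grp, _ in sample if grp == g) for g in POP}
weights = {g: POP[g][0] * len(sample) / counts[g] for g in POP}
weighted_estimate = sum(weights[g] * vote for g, vote in sample) / len(sample)

print(f"true support:      {true_support:.3f}")
print(f"raw estimate:      {raw_estimate:.3f}")   # pulled toward college voters
print(f"weighted estimate: {weighted_estimate:.3f}")
```

The catch - and one of the AAPOR report's findings - is that weighting only removes bias along the dimensions you actually weight on, and in 2016 many state polls reportedly did not weight by education at all.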

I haven't found any plausible explanation of why polls are consistently inaccurate - and, moreover, why they consistently underestimate results for the Democratic Party, at least in local or statewide polls.

If anyone knows about recent, credible articles/studies on this subject, I'd love to read them.
 

Can't believe that I missed this being posted, as I've probably spent more than half my time on here talking about polling.

I am not a political scientist or polling expert, but I have a fair amount of experience with statistics... so in general you should take my thoughts with a massive pinch of salt.

The reason it's really hard to figure out what is going on with opinion polling is that there are multiple sources of error that all overlap and confound one another. The first you've mentioned above, but the others are more subtle. Broadly:

1. Selection bias
Two big challenges here, both of which you highlight above. First, the medium (phone/internet). Second, 'adverse selection': essentially, you are more likely to end up with responses from people who are vocal about their political opinions. The problem is not so much that these errors exist; it's that they both exist at the same time, so it's hard to isolate and correct for how much each factor is distorting your poll results.

This is an interesting set of charts, also from Pew, grouping the US electorate into typologies (quite long though):

Basically, it shows that the most vocal conservatives (what they call Faith and Flag Conservatives and Populist Conservatives) form a group three times bigger than the vocal Progressive Left. People on the left are more likely to be a less vocal "Establishment Liberal" or "Democratic Mainstay". This means that, ceteris paribus, you're something like three times more likely to encounter a vocal MAGA supporter on the phone than a vocal leftie. It's hard to know if the medium is exacerbating this further.

As for why these problems seem worse now? Because the level of polarisation and the collapse of phone usage have both developed faster than the pollsters could adapt. They are flying blind in trying to disentangle these two things.

2. Time variance
This is pertinent in the contexts of both the UK and US elections. As time goes by, even though pollsters ask the same questions, how those questions are interpreted changes as the election approaches. The question morphs from "what do you think of the incumbent government?" to "what do you make of the alternatives?". This has led to some crazy polling swings in the weeks running up to elections, some of which are covered in this article:


This is why you see incumbents generally (though not always) gaining ground in the last few weeks before an election. The scrutiny shifts to the alternative. This probably works in Biden's favour, and against Labour's, though how much depends on what they're up against. I predict that in the months leading up to the US election, the Trump vote will start to waver significantly as people go from "not Biden" to "on second thoughts, not Trump".

3. Herding
Basically, no pollster wants to be the odd one out - especially if they make most of their money doing other types of research, since a bad general election prediction creates unnecessary PR risk. The problem is that if the tide begins to turn and there is movement in the opinion polls, this desire not to become an outlier acts as a resistive force. If movements happen quickly, it takes time to drag the crowd of pollsters across to the new position. Think Hillary and the FBI investigation.
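Here's a toy simulation of that outlier-avoidance effect in Python - the parameters (true support of 48%, samples of 1,000, a 50/50 blend with the running average) are all made up purely for illustration:

```python
import random
import statistics

random.seed(1)

TRUE_SHARE = 0.48  # hypothetical true support for a candidate
N = 1000           # respondents per poll

def honest_poll():
    """An unbiased poll: pure sampling noise around the true share."""
    return sum(random.random() < TRUE_SHARE for _ in range(N)) / N

def herded_poll(published):
    """A pollster that shades its raw number toward the running average
    of previously published polls ('don't be the outlier')."""
    raw = honest_poll()
    if not published:
        return raw
    return 0.5 * raw + 0.5 * statistics.mean(published)

honest = [honest_poll() for _ in range(200)]

herded = []
for _ in range(200):
    herded.append(herded_poll(herded))

print(f"spread of honest polls: {statistics.stdev(honest):.4f}")
print(f"spread of herded polls: {statistics.stdev(herded):.4f}")  # tighter
```

The herded polls cluster more tightly than pure sampling error allows - and an unnaturally small spread across pollsters is exactly the statistical signature analysts look for when testing real polling averages for herding.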

This is particularly problematic because momentum is a big factor in public perception:


It's not just about your vote share, but also your direction of travel. People have higher expectations of parties if they have momentum, even if the vote share is the same. This in itself can have an impact on the results of the election - because here is one final underestimated factor in elections - people want to be on the winning team.


Bit of a ramble through the various sources of error. So the question is, can this be resolved? Here is what I think it needs, from my immensely layman perspective:

1. They need a new way of polling. This is the elephant in the room. I think some form of ID-validated pool of people who are carefully recruited to be representative. Maybe they answer polls through some kind of mobile application, but this will only work when the last vestiges of older non-phone users become a statistically insignificant group (which they aren't quite yet).
2. There are a few ways they could address herding as an industry, if the will was there (which it's not). Things like delaying the release of polls so they can't check their homework based on what others have just released, or independent monitoring.
 


I think you’re mostly spot on but have overlooked one thing which plays a big role, but mainly because it’s not about the actual polling.

It’s incredibly easy to make a poll or survey that leads to a desired outcome and what we have seen post-2016 is major news companies are relying on poll aggregators to (in theory) remove outliers.

However, the RNC - and I'm sure the DNC too - realised that this allows them to directly influence the overall picture of "the polling" by commissioning biased polls that predict Republican victories. These are numerous enough to affect the aggregate, yet because each is just one of many, they don't get the same level of scrutiny.
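A quick numerical illustration of that flooding mechanism, with entirely invented poll numbers - note how a median resists the shaded polls better than a plain mean, which is one reason aggregators use medians, trimming, or house-effect adjustments:

```python
import statistics

# Invented numbers: eight independent polls for a candidate,
# clustered around 47-48%.
independent_polls = [47.0, 48.5, 46.8, 48.0, 47.5, 47.2, 48.3, 47.8]

# Four partisan polls, each shaded a few points against the candidate.
partisan_polls = [44.5, 44.0, 45.0, 44.2]

combined = independent_polls + partisan_polls

print(f"mean of clean polls only: {statistics.mean(independent_polls):.1f}")
print(f"mean with flooding:       {statistics.mean(combined):.1f}")
print(f"median with flooding:     {statistics.median(combined):.1f}")
```

With a third of the inputs shaded, even the median drifts a little - no aggregation rule fully protects against coordinated flooding, which is arguably why banning repeat offenders matters as much as clever statistics.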

Fortunately, proper news and polling companies have cottoned on to this, and after banning some repeat offenders, polling is actually historically accurate right now.

 
