What do polls actually tell us? | The Excerpt

On a special episode (first released on September 11, 2024) of The Excerpt podcast: In the wake of the Trump-Harris debate, there's a firehose's worth of new polls out, each purporting to show exactly how the event impacted voters. But what do they really mean? David Paleologos, Political Research Center Director at Suffolk University, a USA TODAY partner, joins The Excerpt to talk about the science and strategy of polling.

Hit play on the player below to hear the podcast and follow along with the transcript beneath it. This transcript was automatically generated, and then edited for clarity in its current form. There may be some differences between the audio and the text.


Dana Taylor:

Hello and welcome to The Excerpt. I'm Dana Taylor. Today is Wednesday, September 11th, 2024, and this is a special episode of The Excerpt.

Following last night's debate, you can be sure there'll be a flurry of polls released, each claiming to define what impact the night had on voters. What are they really telling us, and which ones can we trust? USA TODAY partners with Suffolk University on polling. I'm joined now by David Paleologos, director of the Political Research Center at Suffolk University, to dive into these questions and to teach us more about polling in America.

Thanks for being on The Excerpt, David.

David Paleologos:

Thanks for having me.

Dana Taylor:

There are just over 50 days to the election now and it feels like there is a fire hose of polling data coming out every day. A lot of people think about polls as predictions or forecasts. Is that how we should think about them or are they something else?

David Paleologos:

No, they're snapshots in time. We're limited to the time window in which we reached out to voters, and they should not be treated as predictive. The only time they really can be somewhat predictive is if a survey is taken right before the election, within the last week. But if we're 50-plus days out, all we can really measure is what the current temperature is today, not what it will be in 50 days.

Dana Taylor:

When it comes to the subject and specificity of polls, we have national polls versus statewide polls versus local polls. We have polls that name candidates and polls about ideas. How do the subject and specificity of a poll impact the results?

David Paleologos:

The polls that are taken early on in the year, in the winter and in the spring, are more issues-oriented, where candidates are less important; those are called benchmark polls. And then, as you get closer to the election, picture a cone. You have a very wide end of the cone where you're interviewing all residents or all inhabitants of a particular area, but then that cone narrows as you get closer to the election, and you're only interviewing registered voters. Then you get a little bit closer to the election and you're only measuring likely voters. So it's kind of a moving scale, but each poll, whether it's an issue poll, a benchmark, or a poll with or without names, has its own value and gives us a little bit more information than we had prior to taking the poll.

Dana Taylor:

David, what about the trustworthiness factor? Are all polls created equal here?

David Paleologos:

All polls are conducted differently. I guess that's the best way to put it. One good thing about this science, and I think pollsters are probably the second most-hated group of people after the candidates, but there is a silver lining to that cloud, and that is that there's actually an endpoint. It's not just this nebulous research that never ends; the endpoint is election day, and the election results are finality for us. So we're all judged against that.

Fivethirtyeight.com has a database of over 500 pollsters in the country. RealClearPolitics does the same rating. And there are polling aggregators all across the country and the world who rate us, so we really can't justify having a bad poll or a bad year. You can't really talk your way out of it. You're either right or you're wrong. You're either within the margin of error or not, and that's exciting for us. On our website, at suffolk.edu/SUPRC, you can look at some of the top pollsters according to fivethirtyeight.com and realclearpolitics.com. Any one of those listed is worth taking a look at and giving its due weight when considering which polls are legit and which are not.

Dana Taylor:

As you mentioned, polls often come with a margin of error, and some state that the spread, that's the difference between two outcomes, is within that margin. What does that really mean?

David Paleologos:

Margin of error is plus or minus. A lot of times people don't understand how many polls are within the margin of error because they view the difference as being the key component, the key statistic. For example, if you have a candidate winning 45 to 40 and the margin of error is 4%, four percentage points, you would immediately say, well, that's outside the margin of error: 45 minus 40 is five, and five is bigger than the margin of error, which is four. So it's outside the margin of error. But that's not how it's calculated. What 45 represents is a range of points, plus or minus the error rate, 4%. So the candidate getting 45 could be anywhere from 41 on the low end all the way up to 49. It's a sliding scale. The midpoint is what we're reporting, which is 45.

Similarly, someone getting 40% could be anywhere from 36 on the low end to 44 on the high end. When you look at both numbers, there's some overlap, and that's why a poll that has a margin of error of four can be within the margin of error, even though the spread, the difference, was five points.
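To make that arithmetic concrete, here is a minimal sketch of the overlap check in Python, using only the hypothetical 45-40 example and the four-point margin from the conversation; it is an illustration of the logic, not any pollster's actual methodology.

```python
# A minimal sketch of the overlap logic described above.
# The candidate numbers and the 4-point margin of error are the
# hypothetical figures from the example, not real poll results.

def within_margin(candidate_a: float, candidate_b: float, margin: float) -> bool:
    """Return True if the two candidates' plausible ranges overlap,
    i.e. the race is within the margin of error."""
    a_low, a_high = candidate_a - margin, candidate_a + margin
    b_low, b_high = candidate_b - margin, candidate_b + margin
    return a_low <= b_high and b_low <= a_high

# 45% vs 40% with a +/-4 point margin: the ranges are 41-49 and 36-44.
# They overlap (41 through 44), so a 5-point spread is still within the margin.
print(within_margin(45, 40, 4))  # True
```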

Dana Taylor:

Let's do a barometer check and make this easy on me. What constitutes a good poll and what would you say is a bad poll?

David Paleologos:

A good poll adequately represents the participants in an election in the right proportions. So if you are polling a state or the country, the key isn't necessarily having 10,000 respondents. It helps to have more respondents, but it's not the be-all and end-all. The key is having the right proportions. Do you have the right share of white voters, Black voters, young voters, older voters, voters from a particular region, geography, income levels, education levels? Those proportions have to be right. If they are, it's a good poll. If they're not, if you're interviewing too many men or too many of one demographic, then it's not.

Think about baking a cake. You have a recipe, and that recipe says you have to have a certain amount of sugar, two tablespoons of sugar, a tablespoon of salt, a cup of margarine. Whatever those ingredients are, they have to be in the right proportion for that cake to taste good when it comes out of the oven. If those proportions are way off, if you have five teaspoons of sugar or only a little sliver of margarine or butter, then it's not going to taste right. And so it's the same thing with a poll. We can tell when we look at a poll whether it tastes right statistically, if the proportions of demographics are correctly aligned.

Dana Taylor:

I can tell you that that's not going to be a very good cake with so little sugar no matter how you do it. Can you talk to us about how polls are conducted overall? How is the size of the pool decided? How are people chosen to be respondents for a poll? And then finally, how can we know if these people selected truly represent the electorate?

David Paleologos:

Well, on the second part of your question: we know that they're truly representative of the electorate if they say they're likely to vote, number one. If they're not, they shouldn't be included this close to the election. And then, do they have the demographic makeup that represents the target universe population? So those are the things that we look at.

In terms of methodology, what's different now from 40, 50 years ago is that we all do it differently. 50 years ago, it was Gallup and it was Pew, and everybody was called on a landline. Today, some people do landline polling, but some people do live call or cell phone only. Some people do mostly cell phone and some landline; that's what we do. Some people do online panels, internet only. Some pollsters choose to do the IVR methodology, which is robo-calling: press one for Harris, press two for Trump, press three for undecided. So everyone has different methodologies. The key is that however you're accruing this data, the proportions of the demographics need to be in the proper weights. They have to have the proper respective percentages such that all likely voters are represented in the correct proportion. And we don't believe that that's the best motivation to take a survey, and it can skew what their response levels are.
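As a rough illustration of what "proper weights" can mean in practice, here is a hypothetical post-stratification sketch in Python: each respondent group is weighted so its share of the sample matches its share of the target electorate. The groups and percentages are invented for the example and are not drawn from any actual Suffolk poll.

```python
# A minimal, hypothetical sketch of demographic weighting:
# each group's weight scales its sample share up or down to match
# the share it should hold in the target electorate.
# The groups and percentages below are invented for illustration.

sample_share = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}   # who actually responded
target_share = {"18-34": 0.25, "35-54": 0.35, "55+": 0.40}   # the electorate's assumed mix

weights = {group: target_share[group] / sample_share[group]
           for group in sample_share}

print(weights)
# {'18-34': 1.67, '35-54': 1.0, '55+': 0.8} (approximately)
# Underrepresented younger respondents count for more;
# overrepresented older respondents count for slightly less.
```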

Dana Taylor:

Are Republicans or Democrats more likely to respond to polls? Generally, how do poll respondents' political leanings impact results, and how is that communicated to the public?

David Paleologos:

Independents are actually tougher to reach than Democrats and Republicans because independents, by their nature, are not enrolled in a party, and we call that unenrolled, or no party affiliation, or independent affiliation. So they don't feel like they're qualified to answer a survey. They don't feel like they know enough about politicians, Democrat or Republican. Democrats and Republicans are pretty dug in. They know who they're voting for. They know what they're about. They know what party they're enrolled in. Independents are probably less willing because of the fact that they're detached from the political process. They don't like politics, pollsters or political campaigns. And so you may experience in certain areas, certain states and certain regions, a higher inability to reach those people and complete a survey.

Dana Taylor:

What's an example of when a person is spinning the results of a poll or reading too much into what a poll might suggest?

David Paleologos:

This happens a lot of times. There are journalists, there are people who read our polls, who have what we call a predisposed bias. They're looking at the polling through their own personal filter of who they want to win, and they'll find only the pieces of the poll that accelerate that dynamic. And so you can have columnists, even editorial boards, looking at a poll vastly differently from another newspaper's editorial staff and coming out with two different conclusions. Oftentimes, I see that when I just post the poll results on Twitter. I don't put any spin on it either way. I'll state the statistics and the statistical trends, and I can see the people commenting, taking the pieces of that tweet, retweeting it with their own spin about how that would benefit either a Democratic or Republican candidate. We can't control what others say and their interpretation. Our job is to get the best snapshot of the public's view on a particular issue, on a particular political race, as we possibly can. And once it's published, people will take that data and they'll use it for their own purposes.

Dana Taylor:

Finally, David, what's one takeaway about polls you want to leave our audience with here?

David Paleologos:

Generally, the polls have done a pretty good job. The pollsters have done a pretty good job. I have a little bit more optimistic view about polls than a lot of other people do. A lot of other people want to find a reason to make fun of us and get mad at us. But what I would say is that because there are so many pollsters doing their work on different platforms, if they agree on a particular outcome, there's a pretty good chance that that outcome will stand.

The other takeaway I would say for your audience is, if you see a poll that's vastly different than the others, don't discount it. Suffolk University has been on that end of it. In 2008, we had Hillary Clinton beating Barack Obama in New Hampshire, and every other pollster had Barack Obama winning by 15, 20 points, or double digits in many cases. We had two bellwether polls showing Hillary Clinton winning, and our statewide poll had it within the margin of error, and we were mocked and made fun of for every hour leading up to when the polls closed. But when the polls closed, we were right. There is a very small possibility, that one-in-a-million possibility, that a pollster can be way different than the rest of the polling community and be right. So take it easy on them if you want to make fun of them on social media or email them, because you may be eating your words when the final count has been taken.

Dana Taylor:

David, thank you so much for joining me on The Excerpt.

David Paleologos:

My pleasure. Thank you.

Dana Taylor:

Thanks to our senior producers, Shannon Rae Green and Kaely Monahan, for their production assistance. Our executive producer is Laura Beatty. Let us know what you think of this episode by sending a note to podcasts@usatoday.com. Thanks for listening. I'm Dana Taylor. Taylor Wilson will be back tomorrow morning with another episode of The Excerpt.
