The Problems With Polling

January 22, 2018

Last week I was reading a scholarly article on polling and the problems it creates for the democratic process.  In it, the authors note many of the problems with polling, and there are many.  I worked for a major national polling firm in Canada for a couple of years whilst in undergrad.  There, I learned just how dodgy supposedly ‘scientific’ polling can be.

My issues have less to do with methodology, in which randomly generated phone numbers are dialled by computer.  Rather, they have to do with both the wording of questions and the manner in which they are asked.  I should also note that the rise of cell phones complicates the ability to do random sampling.  Something like 48% of American adults have only cell phones (I have not had a landline since 2002, a decade before I emigrated to the US), and US law prohibits autodialed calls to cell phones without prior consent, which puts them beyond the reach of conventional random-digit dialling.

The authors of the study I read commented on the manner in which questions were worded, and the ways in which this could affect results.  For example, last year during the great debate over the repeal of Obamacare, it became very obvious that a not insignificant proportion of Americans did not realize that the Affordable Care Act, or ACA, was the legislation that created what we call Obamacare.  So you had people demanding the repeal of Obamacare while thinking they would still have their ACA.  Obamacare was originally a pejorative term created by (mostly Republican) opponents of the ACA.  They figured that by tying the legislation to a president wildly unpopular amongst their constituency (if not the population as a whole), they could whip up public opposition to the ACA.  It worked.

But now consider a polling question concerning the popularity or unpopularity of Obamacare/ACA.  Does a pollster ask people about their thoughts on Obamacare or on the ACA?  Or does that pollster construct a question that includes the slash: Obamacare/ACA?  How, exactly, does the pollster tackle this issue?  Having worked on a team at the Canadian polling firm that attempted to create neutral-language questions for a variety of issues, I can attest that this is difficult to do, whether the poll asked consumers their thoughts on a brand of toothpaste or on the policies and behaviours of the government.

But this was only one part of the problem.  I started off with the polling firm working evenings on the phones, conducting surveys.  We were provided with scripts on our computer screens that we were to follow word-for-word.  We were also actively monitored by someone, to make sure we were following the script as we were meant to, and to make sure the person on the other end was actually taking the poll seriously.  More than once, I was instructed by the monitor to abandon a survey.  But the monitor didn’t listen to all the calls.  There were something like 125 workstations in the polling room, and those 125 individuals were not robots.  Each person had different inflections, and even accents, in their voice.  Words did not all sound the same coming out of the mouths of all 125 people.

When I had an opportunity to work with the monitor and listen in on calls, I was struck by how differently the scripts sounded.  One guy I worked with was from Serbia and had a pretty thick Serbian accent, so he emphasized some words over others; in most cases, I don’t think his emphasis made a difference.  But sometimes it could.  Another guy had a weird valley-girl accent, with the same result as the Serbian’s.  And some people just liked to mess with the system, which was easy to do: they delivered certain words differently, spitting them out, laying on sarcasm, or making their voice brighter and happier in some spots than in others.

Ever since this work experience in the mid-90s, I have been deeply sceptical of polling data.  There are already structural reasons for doubt, most notably sampling error: with the standard margin of error, most polls are accurate only to within plus or minus 3%.  That doesn’t sound like a lot, but the difference between 47% and 53% is significant when it comes to matters of public policy, or support for candidates.  And more to the point, the media does not report the margin of error, or, if it does, it does so in a throwaway sentence, while the headline reads that 47% of people support/don’t support this or that.
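For the curious, that plus-or-minus 3% comes from the standard formula for the margin of error on a sample proportion.  Here is a minimal sketch in Python, assuming the usual 95% confidence level and a sample of roughly 1,000 respondents (typical for a national poll; the numbers are illustrative, not from any particular survey):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a sample proportion; z = 1.96 is ~95% confidence."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case is an evenly split question, p = 0.5.
# A typical national poll samples roughly 1,000 people.
print(f"n = 1,000: +/- {margin_of_error(0.5, 1000):.1%}")   # about +/- 3.1%
print(f"n = 4,000: +/- {margin_of_error(0.5, 4000):.1%}")   # about +/- 1.5%
```

Because the margin shrinks with the square root of the sample, halving it requires roughly quadrupling the sample, which is why pollsters rarely go much beyond a thousand or so respondents.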

But, ultimately, it is the wording and the manner of asking that make me deeply suspicious of polling data.  And as politicians, the media, and other analysts obsess ever more over polling data, I can’t help but think that polling is doing more than most things to damage democracy, and not just in the United States, but in any democracy where polling is a national obsession.


6 Responses to The Problems With Polling

  • I am suspicious of polling data for several reasons. Semantics matter, sampled population matters, and lastly, for all the polls released practically daily in the US about various things, I have never once been contacted by any pollster in 40 years. And I’ve gotten jury duty twice in two different states! Maybe I should consider myself lucky that I haven’t made it into their apparently magical database. I’m being silly, obviously, but I find anecdotal experience to be as relevant as these polls.

    • Indeed, I have social scientist friends who assure me that random polling, sampling, etc., is kosher. But they cannot answer my critiques without referring back to methodology, so they’re answering my questions of methodology with methodology. That’s just circular logic. As for being called, I have lived in 3 provinces and 3 states as an adult, and have never been called either.

  • Brian Bixby says:

    Or look at the problems from the other end: not methodology, but how people respond to it. You’ve already made two points along those lines: people who are never contacted by polling organizations are more skeptical, and the failure of newspapers to report margins of error (or, for that matter, even to note whether the polling sample was random) gives the public the wrong impression.

    One of the arguments I repeatedly ran into starting last November ran like this: the polls were wrong about who would win the election, the media reported the polls as if they were reliable, hence the media is fake news. I don’t need to emphasize how prevalent that line of reasoning was with Trump supporters. But it was not an easy argument to counter, for it required explaining how polling works, why the media would use them, and then how the results could still be “wrong” without being “fake.”

    • I agree on both counts. And one thing that is in the literature on polling, but does not get reported, is the discussion of how people respond to polls. I don’t mean people who purposefully give fake answers; rather, people often give the answers they think the pollsters want, or they do not wish to express their racism, sexism, homophobia, etc. (what researchers call social desirability bias).

      And then there’s the lazy journalism. I don’t know if this is still the case, but back in Canada, whenever polls were reported in the media, the margin of error was always reported with them, in a sentence like “this poll is accurate to within +/- 3 points, 9 times in 10.” If you stop and think about that, it describes a 6-point swing, and one time in ten the true number falls outside even that range. I don’t see that in the American media. The best I see is a talking head who occasionally notes that two candidates are within the margin of error of one another. But that’s it.

      • Brian Bixby says:

        Having spent my high school years in a private school, I don’t know if basic statistical knowledge is taught in American high school math classes. I suspect not. Which means that even reporting a margin of error won’t necessarily be understood.

        Similar problems with forecasting models: not only do people not pay much attention to the details of the statistics, they often don’t check the assumptions behind a model. You and I already talked a few years back about a study claiming that welfare recipients could make more money than legitimate workers, and the many questionable assumptions behind that conclusion.

      • I seriously doubt anything in depth is taught about statistics in the average high school math class. Even in politics courses, where polling data is most commonly discussed, I don’t think there’s much about this. So, yes, there is a problem with basic information like what a margin of error is.

        Forecasting models, indeed. This has become somewhat of an obsession in the sports world, wherein ESPN, or some other network, will use a video game or other such programme to forecast the upcoming NBA/NHL/NFL/MLB season. They run 1,000 or so simulated seasons for each team under similar variables, and, say, in 894 of those 1,000 runs the Patriots go 13-3 and lose in the Super Bowl. So they report that, but no one seems to pay much attention to the other 106 seasons, to say nothing of the roughly 30,000 other seasons run through the prognosticating programme for the rest of the league (a toy version of this kind of simulation is sketched below).

        I really don’t trust mathematical models generally, and polling in particular, because I’ve seen the human element massively subvert the mathematics behind the surveys.
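        To make that concrete, here is a toy Monte Carlo season simulation of the sort described above. It is a minimal sketch, not ESPN’s actual model: the 75% per-game win probability and the 1,000-run count are made-up illustrations.

        ```python
        import random

        def simulate_season(win_prob: float, games: int = 16) -> int:
            """Simulate one NFL season as independent games; return the win total."""
            return sum(random.random() < win_prob for _ in range(games))

        random.seed(42)  # reproducible runs

        # Hypothetical: treat one team as winning 75% of its games; run 1,000 seasons.
        runs = 1000
        records = [simulate_season(0.75) for _ in range(runs)]
        modal_wins = max(set(records), key=records.count)
        modal_count = records.count(modal_wins)

        print(f"Most common record: {modal_wins}-{16 - modal_wins}, "
              f"in {modal_count} of {runs} simulated seasons")
        # The headline reports only the modal outcome; the other simulated
        # seasons, often hundreds of them, quietly disappear from the story.
        ```

        The point of the sketch is only that a single headline number (894 of 1,000, or whatever it is) hides the full distribution of simulated outcomes.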
