The hidden details of opinion polls


The media presents us with a never-ending stream of opinion polls: approval ratings, generic ballots, public opinion on issues, and so forth. The numbers can inform our debate, but as consumers, we’re often left eating Mystery Meat in terms of knowing how the data was collected. Polling methodology is crucial to results, and we deserve more details. Sometimes, we just get the raw number: “The President has a 38% Approval Rating, and that’s up 1% from last month.” Occasionally, we also get the sample size and the margin of error (“this number is thought to be accurate within 3.5%, 19 times out of 20”). Here are more things we need to know from pollsters:
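For context, the “within 3.5%, 19 times out of 20” wording describes a 95% confidence margin of error. Here is a rough sketch of that arithmetic, assuming simple random sampling and a reported proportion near 50%; the sample size of about 800 is purely illustrative, not taken from any particular poll:

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a polled proportion."""
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A simple random sample of roughly 800 respondents gives about
# "within 3.5 percentage points, 19 times out of 20".
print(round(100 * margin_of_error(800), 1))  # ~3.5
```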

 

When was the poll taken?

How recently? Was it before, after, or straddling some relevant news event that could change opinions? A poll that measures opinions about a President’s foreign policy chops, for instance, has more context if we know it was taken right after a diplomatic success or a diplomatic disaster. A poll taken before the big event may no longer be relevant.

 

How was it taken?

The answer here is often “by phone”, but that answer brings up more questions.

Were just landlines called, or were cell phones called too? (A landlines-only sample typically reaches an older group of respondents.)

What time of day were the calls made? (People who answer residential landlines during business hours are less likely to be employed. Does that matter?)

Were the questions and answers person-to-person, or was the questioner a robot? (There are lots of issues here, from the purported Bradley Effect of human interaction to respondents keying answers in incorrectly in response to a robot’s question.)

If the poll was not phone-based (internet, mail, or man-on-the-street…), there might be additional self-selection issues with the sample.

 

What was the exact question asked?

Suppose two polling companies each report a Presidential Approval Rating of “38% Approve, 51% Disapprove”, but asked the following questions and got these results:

  • Company A: “Yes or No: Do you approve of the President’s job performance?”
    • Yes 38%
    • No 51%
  • Company B: “Thinking of the job the President is doing, how would you rate his performance?”
    • Strongly Approve 32%
    • Somewhat Approve 6%
    • Somewhat Disapprove 24%
    • Strongly Disapprove 27%

Company B’s results are more interesting. Logicians will point out that anyone who somewhat approves must also somewhat disapprove, and vice versa; but the question forces the “somewhat” responders to pick either a broadly positive answer (“Somewhat Approve”) or a broadly negative one (“Somewhat Disapprove”). This lets us learn three additional things (a quick tabulation follows the list):

  1. Right now, the Somewhats are breaking very negatively.
  2. With future good job performance (whatever “good” is), there’s a shot at boosting the 38% overall approval number significantly by picking up a large chunk of that 24% Somewhat Disapprove crowd.
  3. The real support base (32% Strongly Approve) and the group of people who will likely never be won over (27% Strongly Disapprove) are close in size.
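To make the arithmetic explicit, here is a small sketch in Python, using only the illustrative percentages above, showing how Company B’s four-way breakdown collapses into the same headline number Company A reports while still revealing how the “somewhats” split:

```python
# Hypothetical Company B breakdown from the example above.
company_b = {
    "Strongly Approve": 32,
    "Somewhat Approve": 6,
    "Somewhat Disapprove": 24,
    "Strongly Disapprove": 27,
}

# Collapse to the two-way headline number Company A would report.
approve = company_b["Strongly Approve"] + company_b["Somewhat Approve"]            # 38
disapprove = company_b["Somewhat Disapprove"] + company_b["Strongly Disapprove"]   # 51

print(f"Headline: {approve}% Approve, {disapprove}% Disapprove")
print(f"Hidden in the headline: the Somewhats split {company_b['Somewhat Approve']}"
      f" to {company_b['Somewhat Disapprove']} against the President.")
```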

 

What population target did the poll aim for?

General population? Likely voters? Senior Citizens? Something else?

Figuring out who is a “likely voter” may pose its own challenges, but a sampling of likely voters on a key election issue is probably more relevant than a sampling of the general population when it comes to predicting election results.

Perhaps the poll’s purpose is to check how firm a politician’s base is. If so, we want the poll’s respondents to come specifically from that base.

 

What methods, if any, were used to try to make sure that the make-up of the sample matched the target population?

A common tool to try to get a sample to better mimic an overall population is Oversampling and Re-Weighting. That is,

  • ask survey respondents broad classification questions about themselves (sex, age range, income range, race, education level…) to create subgroups
  • keep the survey going until you have a decent sample size in each subgroup
  • re-weight subgroups’ answers to reflect their known percentage of the overall population. (Oversimplifying: if we know a population is 52% male, but 1400 men and only 900 women answered the survey, re-weight the 1400 male responses so they account for 52% of the poll outcome. A numeric sketch follows this list.)
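Here is a minimal sketch of that re-weighting arithmetic in Python, using the oversimplified 1400/900 example above and made-up approval rates chosen purely for illustration:

```python
# Hypothetical re-weighting: down-weight the oversampled men so their answers
# count for 52% of the result, matching the known population share.
respondents = {"men": 1400, "women": 900}
population_share = {"men": 0.52, "women": 0.48}

# Weight per respondent in each subgroup = target share / sample share.
total = sum(respondents.values())
weights = {g: population_share[g] / (respondents[g] / total) for g in respondents}
print(weights)  # men get a weight below 1, women a weight above 1

# Made-up approval rates, purely to show the re-weighted combination.
approval = {"men": 0.40, "women": 0.60}
weighted_approval = sum(population_share[g] * approval[g] for g in approval)
print(f"Re-weighted approval: {weighted_approval:.1%}")  # 49.6%
```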

Did the company use this technique, or another one, to make the sample stronger?

 

Leaving push-pollsters aside, most polling companies want to do the most accurate job they can. To be fair, some outfits make a lot of their methods’ nitty-gritty available on their websites or will speak to you if you call and ask. Sometimes, in an effort to publicize the headline number, the media simply chooses to leave the details out. Demand better! Before we eat any Mystery Meat, let’s find out how it’s prepared.