|Rusting locomotive, Skagway, Alaska. Not all polls are equally reliable; some can lead you onto the wrong track|
|Old Alaskan trestle and tracks: like some polls, they deserve skepticism|
This link explains why they were on the "wrong" track. What they really needed was my "Prof’s Primer on Polling," especially this editor’s checklist for evaluating polling or any other research.
|Poll aggregators show a better idea than just one poll|
Americans are justifiably suspicious of polling, and not just for political reasons. Some of the fault lies with newspapers and broadcasters, which have reported off-beat “research” to snag an audience. In my last post I mentioned that USA Today once carried a Women’s Day poll claiming readers wouldn’t marry the same man again; that is just one example. The poll was loaded with faults, and the media failed to point them out. Then Fox picked up the myth and kept it going.
|On London's "Tube," the "right" track is always clear|
- When was the poll conducted? Results can change overnight, as the RCP chart shows.
- Check more than one poll for greater accuracy.
- Who sponsored the polling? Is there a conflict of interest? Political polling is especially suspect if it’s loaded with distrust of the other side. Beware any polling done by a PR firm for a client. I understand Republicans, with their distrust of the so-called “liberal media,” wanting to conduct their own polls. But Romney’s campaign fell victim to the same faults it suspected in the other side.
- Who was included in the polling? If you poll only certain age groups or geographic areas, or don’t reach likely voters, your results will not be valid. See number three.
- How were the people chosen? Was it a true “random” sample? If not, the results will be skewed (as with Romney’s polling).
- How many people were in the sample? If the poll has fewer than 1,100 respondents, your margin of error is going to be more than three percent.
- What was the response rate? If your sample was 1,000 and only 400 answered, you get results like the Women’s Day article, which threw out more than it counted. Don’t seriously consider any response rate less than 60 percent.
- How accurate are the results? Always compute the margin of error; if the results fall within it, then “it’s too close to call.”
- Who were the interviewers? Were they professionally trained and neutral? If not, prejudice, inflection of the voice and other factors can affect results.
- How was the polling conducted? Robo-call? In person? If it was a call-in or send-in, the results are worthless because they’re not "random." See definition in previous post.
- What were the actual questions asked? Wording can influence results; differently worded questions on the same matter can produce opposite answers.
- Are the results cause and effect or just correlation? I once saw a story headlined “Want to live a longer life? Marry yourself a younger wife.” The poll found that men with younger wives lived longer. But that might not be cause and effect, because many other factors are involved: health, wealth and so on.
- Does the headline match the polling results?
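The three-percent figure in the checklist comes from the standard margin-of-error formula for a simple random sample. Here is a minimal sketch of that arithmetic; the function name and defaults are mine, and it assumes the conservative case (a 50/50 split at 95 percent confidence), which real polls only approximate:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample of size n.

    p=0.5 is the most conservative assumption about the split;
    z=1.96 is the z-score for 95 percent confidence. Real polls
    only approximate a true random sample, so treat this as a floor.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,100 gives roughly the familiar three percent:
print(round(margin_of_error(1100) * 100, 1))  # 3.0

# Smaller samples push the margin past three percent:
print(round(margin_of_error(400) * 100, 1))   # 4.9
```

This is why the checklist flags polls with fewer than 1,100 respondents: below that, the margin of error climbs above three percent.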
A few of the faults of the Women’s Day poll and article: 1. It wasn’t a random sample of all American women, only of subscribers. 2. Women’s Day has more than one million subscribers; only about 100,000 responded. 3. The results were clipped out of the magazine and mailed in (pre-Internet). 4. Once the magazine got the responses, some of them were not counted.
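The response-rate check from the checklist can be put in code. A minimal sketch using the Women’s Day figures above (the function name and the framing of the 60 percent threshold are mine):

```python
def response_rate(responses, sample_size):
    """Fraction of the polled sample that actually answered."""
    return responses / sample_size

# Women's Day figures from the post: more than a million
# subscribers, about 100,000 responses.
rate = response_rate(100_000, 1_000_000)
print(f"{rate:.0%}")    # 10%
print(rate >= 0.60)     # False: far below the 60 percent threshold
```

A 10 percent response rate means the 90 percent who didn’t answer may differ systematically from those who did, which is why the checklist says not to take such results seriously.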
So a responsible newspaper or broadcast station reporting polling results should include an explanatory item with every story; if it doesn’t, be suspicious. It should be written along these lines:
“The Daily Geezer poll of 1,200 registered voters in Geezer County was conducted Oct. 31, 2016, and asked two questions: ‘Will you vote Nov. 8?’ and ‘Which presidential candidate will you vote for?’ The results have a margin of error of plus or minus three percent.”
Next--polls showing America on the "wrong" track.
|Some polls, like the Forest Park railroad, Fort Worth, run with very little error|