"When dawn spreads its paintbrush on the plain, spilling purple...," Sons of the Pioneers theme for TV show "Wagon Train." Dawn on the mythic Santa Fe Trail, New Mexico, looking toward Raton from Cimarron. -- Clarkphoto. A curmudgeon artist's musings melding metaphors and journalism, for readers in more than 150 countries.

Sunday, August 28, 2016

'Wrong' track...Interlude, prof's primer on polls-I

Clark looking for the 'wrong' track, and right polling









It’s a long way and time from the banner headline “Dewey Defeats Truman” in the Chicago Tribune.
I’m not talking about politics and the press, but about something else most Americans are probably sick of about now, polling. Back in 1948, public opinion polling was in its infancy, contributing to that journalistic fiasco.
In the current election, as in the one four years ago, we are being deluged with more polls than ever. In 2012 they remarkably foretold the results, correctly predicting the electoral outcome in every state: 2012 poll accuracy.
How? This year's race is still open to question, with plenty of time left.
Forget the pooh-poohers who argued with what the polls showed because they disagreed, didn't want to believe them, or thought they were biased. There is room for suspicion and questions, but not for denying the science involved just because you disagree with it...if you're a realist. Forget your political views of Huffington Post, or 538, or Real Clear Politics. In 2012 the aggregators of hundreds of polls nailed the results: 2012 polls.

It is a fact that most Americans don't understand scientific polling, and they are right to be suspicious of polls they don't understand.
You rarely see polling explained. This is a version of what I wrote four years ago, urging Oklahoma newspapers to explain the polls they print.
So before we go to the poll findings about America being on the "wrong" track, here are some facts and explanations.
               Prof's primer on polling, part one
First, a definition of terms.
    •    Population—The group to be surveyed, such as residents of Oklahoma, likely voters in an election, etc.
    •    Random—Random does not mean “haphazard.” It means that every person in the "population" has an equal chance of being chosen. It’s easy in a classroom—you put every name in a hat and have a few names pulled out. Bigger groups require phone numbers or addresses, all more easily available than ever with computer data.
    •    Sample—The portion of the population to be chosen randomly to ask the poll questions.
    •    Valid—A poll is valid if the results collected from the sample can be applied to the entire population.
    •    Margin of error—Expressed as a plus-and-minus percentage. Every poll has flaws and variables that will affect the accuracy of the results, but the larger the sample, the lower the margin of error (if you could poll everyone in the population there would be no margin of error, but that isn't possible in most cases).
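The "random" definition above can be sketched in a few lines of Python. The ten-thousand-number population here is made up for illustration; a real poll would draw from voter rolls or phone databases:

```python
import random

# Hypothetical population: a phone number for every registered voter.
population = [f"555-{i:04d}" for i in range(10_000)]

# random.sample gives every number an equal chance of being chosen --
# the statistical meaning of "random," not "haphazard" -- and never
# picks the same respondent twice.
sample = random.sample(population, 400)

print(len(sample))       # 400 respondents
print(len(set(sample)))  # 400 -- no duplicates
```

That 400-person sample is the classroom names-in-a-hat exercise, scaled up.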
Now the key question—how big a sample do you need to conduct an accurate poll?
You’re not going to believe the answer. So first things first. Timing, wording of questions, training of the pollsters, polling methods, and other factors also affect a poll’s validity, not just the sample size. And sample size is not dependent on "population" size.
That said, to get a sense of how people in Oklahoma, or in the United States, might vote on any issue, with a five percent margin of error, you need only about 400 registered, or likely, voters selected randomly. Yep, that's all. The better national polls try for 1,100 to 1,300 respondents, for about a three percent margin of error.
Here's how the margin of error figures. Suppose the results come back showing Panhandle residents favor seceding from Oklahoma by a 52-48 percent margin, with a sample size of 400 people. The results are within the five percent margin of error, so the election could go either way: the true split could be anywhere from 57-43 to 47-53. If on the other hand it was 75-25, Oklahoma would have a problem.
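Those sample sizes come straight from the textbook margin-of-error formula for a proportion at 95 percent confidence: 1.96 times the square root of p(1-p)/n, with p = 0.5 as the worst case. A minimal sketch (the function name is mine, not a pollster's term):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error at 95% confidence (z = 1.96) for a proportion p
    estimated from a simple random sample of n people. Pollsters report
    the worst case, p = 0.5, which gives the widest margin."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(400):.1%}")   # about 5% -- the 400-voter poll
print(f"{margin_of_error(1100):.1%}")  # about 3% -- the big national polls
```

Note that n is all that matters here, not the population size: 400 random voters give the same margin whether they are drawn from Oklahoma or the whole country.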
Also important in polling is the timing. A poll on electability taken two days after Romney won the first debate was valid that day. But as fast as things change in this digital news era, it wouldn't be valid five days later. The same is true in the current election; Americans are fickle. A poll taken the day after either convention doesn't stay "valid" for long. That's what pollsters mean by "bounce."
Other factors can affect the outcome: people who say they will vote and don't show up, or a hurricane that shuts the place down. Or you could live in Florida.
As with everything in journalism, sources also matter in polls. Who conducted it?
But that's a separate subject.
Hint: USA Today once ran a story, with a headline at the top of the page, saying most American women wouldn't remarry the same man, based on a Woman's Day survey. What was wrong with that?

Next, your checklist to evaluate all these polls.
Photo in the National Railway Museum, York, England, at the controls of one of the royal locomotives (the King and Queen have their own trains).

