While reviewing some early poll data today (it being Election Day 2014), I came across a website that got me thinking about the difference between what people say in political polls and marketing surveys and what they actually do. The site is www.predictit.com.
I Don’t Believe in Gambling
First, let’s get this on the table: I don’t believe in gambling, and I live in Hawaii, where tight anti-gambling laws would make it difficult for local residents to ever participate. But the theoretical concept behind the PredictIt website is rather intriguing.
Statistical Prediction Models
Statistical prediction models, more commonly known as “polls” or “surveys”, are mathematical models in which you analyze opinions from an identifiable set in order to predict a future outcome. For example, “Will you vote for John Doe on Election Day?” can have possible answers of “Yes”, “No”, or “Undecided”; the set is registered voters in John Doe’s district. Students of statistics become familiar with standard deviations and coefficients of variation, and they know that as you ask more and more people in the set (registered voters in John Doe’s district), the percentages of answers received (say, 45% yes, 25% no, and 30% undecided) become a more accurate predictor of the actual outcome of the event.
Think of it this way: if you ask only a few people in John Doe’s district, the probability that your answers match the actual outcome is pretty small. But if you asked everyone in his district, and they then did what they said they would do, your results would match the actual outcome exactly. The trick in statistics is finding the right sample size, one where, statistically, the responses accurately predict the actual outcome.
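The effect of sample size can be sketched with a small simulation. Everything here is hypothetical, chosen purely for illustration: a district where the true “yes” share is 45%, and a pollster who samples voters at random:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

TRUE_SUPPORT = 0.45  # hypothetical true share of "yes" voters in the district

def poll(sample_size: int) -> float:
    """Simulate asking sample_size random voters; return the observed 'yes' share."""
    yes = sum(1 for _ in range(sample_size) if random.random() < TRUE_SUPPORT)
    return yes / sample_size

# Larger samples tend to land closer to the true 45% figure.
for n in (10, 100, 1000, 10000):
    estimate = poll(n)
    print(f"n={n:5d}  estimate={estimate:.3f}  error={abs(estimate - TRUE_SUPPORT):.3f}")
```

Run it a few times with different seeds and the pattern holds: the sampling error shrinks roughly in proportion to one over the square root of the sample size, which is why a well-designed poll of a thousand people can stand in for a district of a million.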
Opinion versus Behavior
What drives pollsters and marketing executives crazy is when polls and surveys with correct sample sizes still turn out to be wrong. Often when this happens it is not because the statisticians got the sample size wrong; rather, the opinions of the people being asked didn’t match their actual behavior. People, it turns out, are quite human. It isn’t all that uncommon for someone to answer “Yes” to “Would you buy XYZ product at XYZ price?” and then, when XYZ product comes on the market at XYZ price, not buy it. Opinion has a nasty habit of not matching behavior, which is why pollsters sometimes look very stupid on election days.
Behavior Statistical Models
It would therefore make sense, at least theoretically, that if you could sample your set’s behavior instead of its opinions, your sample would do a much better job of matching predicted outcomes to actual outcomes. The problem is that you can’t measure behavior toward something that does not yet exist (a product, an election result). Until people actually act, all you have is opinion, which may or may not match their ultimate behavior.
This is where PredictIt comes in with an interesting theory. Rather than asking questions of opinion, PredictIt asks people to participate in the polling or survey process with their own money at risk, by having them actually bet on the outcome rather than merely opining on it. The idea is that by having participants behave (put their money up) against the outcome, you move what would normally be a study of opinions into a study of behavior, which, at least theoretically, should predict actual outcomes more successfully.
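The betting mechanism can be sketched in a few lines. The numbers below are hypothetical, and the sketch assumes the common prediction-market convention of a contract that pays $1.00 if the event happens and nothing otherwise; under that assumption, the trading price itself becomes the crowd’s behavioral probability estimate:

```python
def implied_probability(price_cents: int) -> float:
    """For a contract paying 100 cents if the event occurs, the market price
    approximates the crowd's collective probability estimate."""
    return price_cents / 100.0

def expected_profit(price_cents: int, your_probability: float) -> float:
    """Expected profit in cents per contract, given your own probability estimate."""
    payout = 100  # cents received if the event occurs
    win = your_probability * (payout - price_cents)      # gain when the event happens
    loss = (1 - your_probability) * price_cents          # stake lost when it doesn't
    return win - loss

# A hypothetical "John Doe wins" contract trading at 62 cents:
print(implied_probability(62))  # 0.62, i.e. the market says ~62%
# If you privately believe the true probability is 70%, buying at 62 cents
# has a positive expected value (about 8 cents per contract).
print(expected_profit(62, 0.70))
```

The point of the sketch is the behavioral filter: someone will only buy at 62 cents if they genuinely believe the probability is higher than 62%, because a careless answer now costs them real money.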
Watch the Outcomes Differently This Year
So, as you watch the actual election returns come in tonight and you see “upsets” where polling data didn’t match actual results, think about statistics, opinion, and behavior analysis. It’s More Than Just Numbers!