I am wondering if our polls are fundamentally broken. If and when I have time, I will try to understand this more.
“The mea culpas are plentiful, with Ian Large of Leger Marketing telling the Herald as election night results rolled in: “We were all wrong.”
The majority of polls heading into Monday’s election gave Danielle Smith’s Wildrose party at least a six-point lead.
The narrative leading up to the ballot box showed the upstart party toppling Alberta’s 41-year-old Tory dynasty.
Instead, Redford’s Progressive Conservatives won 61 seats in the vote, with a 10-point lead in ballots.
What’s clear is that different methodologies — telephone, Internet and a combination of the two — all predicted the same wrong story.
[…] “Public opinion changed dramatically, quickly and right at the end of the campaign,” said Janet Brown, a public opinion research consultant who accurately predicted the 72-seat Tory win in 2008. This election, the model she used to do seat projections showed the Wildrose winning a convincing majority government, with between 50 and 60 seats.”
* The Only Poll That Matters by Janet Brown is worth a read for some ideas, even though Janet’s own projection model was a total failure, as you can see in the last article.
11:14pm Update: This post is actually the result of a good discussion on Facebook after a friend posted “‘Entire environment shifted’: Pollsters seek answers following Alberta election“. The following is an excerpt of my comments with minor word changes. [note: my friends’ comments have not been included here, as it wasn’t my intention to make their somewhat private views public, plus I haven’t got their permission.]
* Reading the pollsters’ explanations is funnier than watching winter-boot salesmen trying to justify why sales are bad in Thailand! Of course, without these pollsters, what “news” would the media report?
I personally stopped answering any polls years ago. Am I wrong to think the younger generation has even less time for time-wasting polling calls?
* I suspect getting a statistically valid sample of “opinion” is no longer easy. I remember reading about problems with phone surveys in the old days. The problem then was a bias toward people rich enough to actually have a phone line. Now, maybe the bias is toward people who have lots of time and don’t know how to say no. Or worse, some who have a vested interest in being polled during campaigns. In either case, it is hard to claim a “random sample”. I am not an expert in the field, but without the claim of “randomness” in the sample, I think it is game over. :(
* You made a good point re the importance of using other yardsticks (e.g. social media), but those can be very subjective and can be skewed by the determined. My general point is about the “objective” tool of a poll. And there, I think we are running into some fundamental challenges. Maybe worthy of a university-level statistics research thesis/paper.
* [… name removed …], interesting discussion. Thanks for sharing your insights. The polls vs. election results should definitely teach us something (as citizens and as people in the field). I suppose finding out “what” we have been “taught” is part of the fun.
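The non-response worry in my comments above (a sample skewed toward people with lots of time, or a vested interest in being polled) can be illustrated with a tiny simulation. Everything here is hypothetical: the two parties, the 40% true support level, and the response rates are invented for illustration, not Alberta data. The point is only that if one party’s supporters are simply more willing to answer the phone, a poll of respondents overstates that party’s support no matter how large the sample gets.

```python
import random

random.seed(42)

# Hypothetical electorate: 40% support party A, 60% support party B.
TRUE_SUPPORT_A = 0.40

# Assumed (invented) response model: A's supporters are three times
# as likely to answer the pollster's call as B's supporters.
RESPONSE_RATE = {"A": 0.30, "B": 0.10}

def run_poll(sample_size: int) -> float:
    """Keep calling random voters until sample_size respond;
    return party A's share among the respondents only."""
    responses = []
    while len(responses) < sample_size:
        voter = "A" if random.random() < TRUE_SUPPORT_A else "B"
        if random.random() < RESPONSE_RATE[voter]:
            responses.append(voter)
    return responses.count("A") / len(responses)

polled = run_poll(1000)
print(f"True support for A:   {TRUE_SUPPORT_A:.0%}")
print(f"Polled support for A: {polled:.0%}")  # lands near 67%, far above 40%
```

The expected polled share is P(A responds) / P(anyone responds) = (0.4 × 0.3) / (0.4 × 0.3 + 0.6 × 0.1) ≈ 67%, and increasing the sample size only tightens the estimate around that wrong number. That is the “game over” point: sample size cannot rescue a sample that was never random.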