Pollsters Blow Yet Another Call: This Time it’s the Greece Vote
There is certainly much to study from Greece’s referendum about the geopolitical climate along the European Union’s southern flank, but not to be overlooked is a need for serious evaluation of the quantitative analysis models used to predict the outcome of the vote.
Without question there is a problem that needs to be solved regarding the accuracy of international polls. Flawed survey research has led to bad calls ahead of three major votes in the past five months, starting with Israel, then Britain and now Greece.
This unfortunate streak of fractured forecasting threatens to turn skeptics of survey research into cynics who disparage the industry as one of alchemy rather than science.
Let’s take a look at the scorecard:
- Every major poll in the final days before the Greek referendum showed the vote was too close to call, yet the actual ballot count produced a large majority voting “no” to creditors’ demands for stricter austerity in exchange for another badly needed bailout of the Greek economy. The bottom-line rule pollsters didn’t (or couldn’t) compensate for: voters choosing to raise their own taxes or cut their own benefits is something that happens somewhere between rarely and never.
- There was a string of miscalculations ahead of May’s parliamentary elections in the United Kingdom, leading pollsters and analysts to predict that the races for the House of Commons would be so close they would force the formation of a coalition government. It didn’t happen. The polls overestimated Labour’s strength, and the Tories and British Prime Minister David Cameron remained in power.
- Pollsters’ terrible trifecta began in March with their erroneous prediction of a deadlock on the eve of the elections for the Israeli Knesset. Pollsters later blamed sampling problems and an overreliance on Internet surveys, but their polls failed to detect the effectiveness of Benjamin Netanyahu’s 11th-hour hard-line surge to the right that carried him and his party to victory.
Some social scientists and statisticians have begun to address the problem, but the fixes are elusive: consumers are split between landlines and mobile phones; voters are reluctant to take survey calls, or in some cases lie outright to interviewers; and pollsters are still searching for sound ways to incorporate online data into their models and analysis.
“This is a difficult question because all polls are not created equally and many reported polls might have problems with sampling, nonresponse bias, question wording, etc.,” says Lonna Atkeson, professor of political science and Regents’ Lecturer at the University of New Mexico.
“The point being that there are many places where error creeps into your survey, not just one, and to evaluate a poll researchers like to think in terms of total survey error, but the tools for that evaluation are still in the development stage and is an area of opportunity for survey researchers and political methodologists.”
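To make that distinction concrete, here is a minimal Python sketch, using purely illustrative numbers rather than any actual Greek poll, of why the margin of error printed beside a poll understates total survey error: it quantifies sampling error alone, while nonresponse, coverage and measurement problems sit outside it.

```python
import math

def sampling_margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a proportion; this covers sampling error only."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative figures, not actual poll numbers: 1,000 respondents,
# 44% of whom say "no" to the creditors' terms.
p_hat, n = 0.44, 1000
moe = sampling_margin_of_error(p_hat, n)
print(f"Reported: {p_hat:.0%} +/- {moe:.1%}")  # roughly 44% +/- 3.1%

# If the eventual vote lands far outside that interval, the extra miss has to
# come from the other components of total survey error (nonresponse bias,
# coverage gaps between landline and mobile users, question wording, late
# swings), none of which the published margin of error captures.
actual_no_share = 0.61  # hypothetical final tally, well outside the interval
print(f"Miss beyond sampling error: {abs(actual_no_share - p_hat) - moe:.1%}")
```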
Another approach is to hold pollsters accountable by building a rating system that measures their success rate. One such list has been compiled by baseball statistician turned election (and everything else you can predict using numbers) oddsmaker Nate Silver over at his highly cited 538 blog. Silver, who built his reputation on accurately calling the last two presidential elections, is not immune to mistakes, but he burnishes his credentials by admitting his shortcomings, as he did after the British elections. Most pollsters will duck, dodge or hide after a wrong call. Silver fell on his sword.
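A pollster scorecard can start from something very simple: compare each outfit’s final published margin with the certified result and average the misses. The sketch below is a toy version of that idea, with invented pollster names and numbers; it is not 538’s actual rating methodology, which is considerably more elaborate.

```python
# Toy pollster scorecard: average absolute error of each pollster's final
# published margin versus the certified result, in percentage points.
# All names and figures below are invented for illustration.
from statistics import mean

# (pollster, predicted winning margin, actual winning margin)
records = [
    ("Pollster A", 0.5, 22.0),
    ("Pollster A", -1.0, 6.5),
    ("Pollster B", 2.0, 22.0),
    ("Pollster B", 1.5, 6.5),
]

def scorecard(records):
    """Return each pollster's mean absolute miss, best (smallest) first."""
    errors = {}
    for name, predicted, actual in records:
        errors.setdefault(name, []).append(abs(predicted - actual))
    return sorted((mean(misses), name) for name, misses in errors.items())

for avg_miss, name in scorecard(records):
    print(f"{name}: off by {avg_miss:.1f} points on average")
```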
One forecast Silver certainly nailed after the British elections: “There may be more difficult times ahead for the polling industry.”
There are plenty of reasons some people are content to see polls fail, from a love of media-bashing to survey fatigue among voters who feel they are constantly being called by pollsters. “But if we should be skeptical of the polls, we should also be rooting for them to succeed,” the number-crunching Silver notes. “Even a deeply flawed poll may be a truer reflection of public opinion than the ‘vibrations’ felt by a columnist situated in Georgetown or Manhattan.”
This article was originally posted on Medium.