News & Views from Alchemis

Should the failure of opinion polls to predict the General Election affect confidence in Market Research agencies?

Thursday 7 May was a bad day for Labour and a catastrophe for the Lib Dems, but it was also an uncomfortable evening for the polling industry. Famously, all the big pollsters were wide of the mark, and there will be an inquest into what went wrong and how to avoid making the same mistakes again. But should this collective failure on the part of the pollsters dent the public’s confidence in Market Research as a discipline?

For many, opinion polls in the run-up to political elections are the public face of market research. Research is thrust into the spotlight and companies like YouGov and MORI become household names, with their surveys making headline news and practitioners invited on to TV and radio. The media exposure they benefit from presents valuable PR for the companies involved and for the industry as a whole. But what happens when they get it wrong? Will the likes of Unilever spend less money on market research in the next 12 months because their confidence in the industry has been dented?

To explore this question thoroughly I suppose we’d need to commission a survey… but jokes aside, even that would not provide an infallible answer, and it certainly wouldn’t be as accurate as a fundamental reckoning of every pound spent on Market Research for the period 8 May 2015 to 7 May 2016.

And this was the first problem the pollsters had to deal with at the election. Asking a representative sample what they might do is all well and good, and in 2015 the pollsters got a lot more right than they got wrong (we’ll look at this later), but the problem with elections is that the prediction is closely followed by an actual result. The election provides an instant validation (or not) of the prediction that is unique to elections. Cynics may argue this proves researchers may have been getting a lot else wrong over the years but got away with it because their findings haven’t been subjected to the same test.

But actually, the polls are normally right. There have been some notable exceptions – the 1948 US Presidential election, 1992 in the UK – and every time the industry has learned from what may have gone wrong and adapted methodologies to provide a better result next time around. The fact that May 2015 came hot on the heels of a similar failure to call the Israeli election correctly has caused some to question the widespread use of online polling, but from what I’ve read the online surveys were saying the same as more traditional methodologies.

The fact that 1992, 2015 and the recent Israeli poll all failed to predict a late surge in support for the right-wing incumbent inevitably makes one wonder whether this has something to do with it.

Polls ahead of the 1948 US election predicted a successful Republican challenge to a Democrat incumbent, and the result remains the biggest electoral upset in American history, which undermines the hypothesis that left-leaning parties are always the victims of poll upsets.

The incumbency effect is now well known, favours the incumbent regardless of political hue, and is factored in. At constituency level it may have been what persuaded Vince Cable and Alastair Campbell to mock the outcome of the exit poll. Their big mistake, of course, was to ignore the fact that it was an exit poll – asking people who HAVE voted how they voted – which is very different to a pre-election survey. Having said this, the 1992 exit poll predicted Neil Kinnock sweeping into Downing Street the following morning, but I’m assured the polls are more sophisticated now.

1992 also gave us the ‘shy Tory’. This is now factored in, as is the higher propensity of people who say they are inclined to vote Labour (when asked by a pollster, for example…) subsequently not voting or not being registered to vote. The reverse seems true of Conservatives, particularly when their backs are against the wall. Astonishingly, the biggest absolute vote ever for a party in the UK went to the much-maligned Major government in 1992.

Equally bizarre is the fact that the previous record for most votes was held by Labour in 1951, in an election they lost from a position of incumbency. The mathematical nonsense of the 1951 result is due to the first-past-the-post system, which further complicates any attempt to predict the outcome of elections in this country. An outsider would surely find it mind-boggling (and possibly a relief…) that our electoral system can deliver a result where the SNP require 26,000 votes per seat yet UKIP need 3.8 million votes to exercise the same influence in parliament. Or that an English Labour supporter needs 39,000 like-minded comrades to land a seat but in Scotland would need 707,000.
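The votes-per-seat arithmetic is easy to reproduce. A minimal sketch in Python, using approximate 2015 vote and seat totals (the exact figures here are assumptions drawn from published results):

```python
# Approximate 2015 General Election totals (assumed figures for illustration).
results = {
    "SNP":  {"votes": 1_454_000, "seats": 56},
    "UKIP": {"votes": 3_881_000, "seats": 1},
}

for party, r in results.items():
    per_seat = r["votes"] / r["seats"]
    print(f"{party}: {per_seat:,.0f} votes per seat")
```

The same calculation makes the disparity concrete: roughly 26,000 votes per SNP seat against 3.9 million for UKIP’s single seat.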

But none of this was new in 2015 so what happened?

Well, the experts got a lot right. They predicted the surge of support for the SNP despite there being no historical precedent for it. They predicted a backlash against the Lib Dems, if not their annihilation, and were not so far outside the margins of error when it came to the Tory/Labour vote. Their results on key questions like ‘best leader’ and stewardship of the economy consistently pointed towards Cameron and the Conservatives. No party has won a recent UK election without these two questions in its favour, so perhaps the pollsters should have stuck their necks out a bit more than they did.
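The ‘margins of error’ pollsters quote come from the textbook formula for a simple random sample, MOE = z * sqrt(p * (1 - p) / n). A minimal sketch, assuming a typical poll of 1,000 respondents at 95% confidence (real polls use quota sampling and weighting, so their true error is usually larger than this):

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for a proportion p from a simple random sample of n,
    at the confidence level implied by z (1.96 -> 95%)."""
    return z * math.sqrt(p * (1 - p) / n)

# e.g. a party polling at 34% in a sample of 1,000
moe = margin_of_error(p=0.34, n=1000)
print(f"+/- {moe * 100:.1f} percentage points")
```

With n = 1,000 this gives roughly plus or minus 3 points, which is why a two- or three-point Tory lead on the day could sit just outside what the final polls allowed for.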

I suspect the inquest will seek to understand why enough people chose on the day to vote Tory once they got to the ballot box to push the result just beyond that margin of error and return a shock victory for Cameron. In particular, they will want to know when these people made their decision and what last-minute factors influenced it.

This is the perennial problem for market research agencies in any survey they perform. People do not always behave in rational or predictable ways. They often do not know what they are going to do next and struggle to explain what they have just done – this tendency among human beings to ‘post-rationalise’ their decisions will further muddy the waters when people seek to find out what happened in May.

Books like Freakonomics are full of fascinating examples of people behaving in ways that are not entirely rational and of well-researched policies having unintended consequences. Some of our clients are experts in understanding this non-rational behaviour, using increasingly varied and sophisticated means to help their own clients understand what customers are thinking and doing. And why.

The bigger inquests will be held by the political parties, particularly the Lib Dems and Labour. As the leadership contests get underway we are already hearing a lot about “what we did wrong” and the direction one or other party needs to move in. ‘Policy by focus group’ is usually a term of abuse, but the losers in 2015 would do well to commission professional research into what the public want so that they can provide a more electable alternative in 2020.

Finally, I had previously heard that a ‘wisdom of the crowds’ approach had called the 2010 election result as accurately as any survey, and I really hoped I’d find the approach had got it right in 2015 as well. I think it would have made for a better ending to my blog, but it didn’t. The surveys I saw predicted a hung parliament with no party securing more than 280 seats but, at the risk of coming full circle, the respondents would have been making their predictions based on the collective wisdom of the myriad polls predicting exactly the same thing.
