The polls suggested Ed Miliband would become prime minister last year

The real reason the pollsters got the general election so wrong

By Professor Patrick Sturgis

The run-up to any live television broadcast is a tense time. And that was especially the case for the general election results programme in May last year. Voters and politicians had tuned in for the outcome of the exit poll, a usually accurate predictor of who will form the next government. Presenters and producers knew the result a few minutes before they went on air. The tension they faced on that particular Thursday night must have been almost unprecedented, because the exit poll suggested the opinion polls had got it badly wrong.

The exit poll forecast that the Conservatives would win 316 seats and Labour 239. The actual result was 331 and 232. The final opinion polls from the nine British Polling Council (BPC) members had all indicated a statistical dead heat.

This was clearly not a good result for the pollsters. But it was also not completely out of line with previous experience. The mean absolute error in the estimated Conservative and Labour vote shares from the final pre-election polls was 3.3% in 2015. It was 3.1% in 1997. The key difference was one of perception. In 1997, the polls failed to predict the size of the Labour landslide. In 2015, they suggested an altogether different outcome: a hung parliament in which the Scottish National Party would hold the balance of power. Those poll-induced expectations are important because they shape party strategies and media coverage, and likely affect whether and how people vote.
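For readers who want to see what that error measure means in practice, here is a minimal sketch of how a mean absolute error across final polls is calculated. The figures in it are illustrative placeholders, not the actual 2015 poll or result numbers.

```python
# Sketch: mean absolute error (MAE) across final-poll vote-share estimates.
# All figures below are hypothetical, for illustration only.

final_polls = {                       # final-poll Conservative/Labour shares (%)
    "Pollster A": {"CON": 34, "LAB": 34},
    "Pollster B": {"CON": 33, "LAB": 34},
    "Pollster C": {"CON": 35, "LAB": 33},
}
result = {"CON": 38, "LAB": 31}       # hypothetical actual vote shares (%)

# Absolute error for each poll and each of the two parties.
errors = [
    abs(shares[party] - result[party])
    for shares in final_polls.values()
    for party in ("CON", "LAB")
]
mae = sum(errors) / len(errors)
print(f"Mean absolute error: {mae:.1f} percentage points")
```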

In the days following the 2015 election there were various conflicting suggestions as to why the opinion polls had got it so wrong. As a result, the two leading trade associations, the British Polling Council (BPC) and the Market Research Society, immediately agreed to commission an inquiry.

Naturally, individual polling companies conducted their own post-mortems, but ours was the only inquiry able to draw on raw data from all nine BPC members while remaining independent of the polling industry.

Using that raw polling data, we considered every reasonable explanation for the error.

One possible factor was the rise in the number of people voting by post. Could the pollsters have failed to take sufficient account of this change? We concluded there was no reason to believe they had. The number of overseas voters on the register was also up significantly in 2015, but there were simply too few of them to have made a meaningful difference.

Another possible cause was the change in voter registration procedures in the run-up to May 2015, which might have affected predicted turnout. But again, we found no evidence that this was a major factor.

One of the most popular explanations in the immediate aftermath of the election was that we had seen a repeat of the polling failure of 1992, when 'shy Tory' voters were believed to have swung the election. The suggestion is that people believe a vote for the Conservatives indicates a preference for personal gain over the public good, and are therefore less willing to admit to voting Conservative. This can be addressed through the way poll questions are worded and framed.

However, we found no evidence that the questions encouraged this sort of distortion. Might people still have deliberately misreported their intention to vote Conservative? Again, although we cannot definitively rule out the possibility, circumstantial evidence from surveys undertaken after the election suggests it is unlikely to have been a contributory factor. Similarly, while phone polls tended to find higher Conservative support than online polls during much of the campaign, there was no difference between them in the all-important final polls.

Another possibility, of course, is that the opinion polls were not actually wrong but that there was a 'late swing' to the Conservatives. That can be tested by re-interviewing, in the days immediately after May 7th, the same respondents who took part in the final polls. Those numbers suggest there is, at best, weak evidence of a small late swing to the Conservatives.

Differences in turnout between supporters of different parties might also have made a small contribution to the polling errors. Pollsters have to judge the probability that each respondent will actually vote. We found some tentative evidence that Labour supporters had been less likely to vote than predicted but, even if this was the case, the effect would have been very modest.
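To illustrate the kind of adjustment involved, here is a rough sketch of weighting stated vote intentions by each respondent's self-reported likelihood of voting. The respondent data and the 0-10 likelihood scale are illustrative assumptions, not any pollster's actual model.

```python
# Sketch: weighting vote intention by self-reported likelihood to vote.
# Respondents and the 0-10 likelihood scale are hypothetical.

respondents = [
    {"intention": "CON", "likelihood": 9},
    {"intention": "LAB", "likelihood": 6},
    {"intention": "CON", "likelihood": 10},
    {"intention": "LAB", "likelihood": 8},
    {"intention": "OTH", "likelihood": 5},
]

weighted = {}
total_weight = 0.0
for r in respondents:
    w = r["likelihood"] / 10.0          # turn the score into a turnout weight
    weighted[r["intention"]] = weighted.get(r["intention"], 0.0) + w
    total_weight += w

shares = {party: 100 * w / total_weight for party, w in weighted.items()}
print(shares)   # weighted vote shares (%)

# If one party's supporters turn out less often than their stated likelihood
# implies, this kind of adjustment will over-estimate that party's share.
```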

So, if none of those factors were the source of the problem, what was? Our conclusion is that the primary cause of the polling miss in 2015 was unrepresentative samples. The methods the pollsters used to collect samples of voters systematically over-represented Labour supporters and under-represented Conservative supporters.

We reached this conclusion partly by eliminating other possible causes, but also by identifying inherent weaknesses in sampling and weighting procedures. The inquiries into the 1992 UK and 2008 US polls also concluded that unrepresentative samples were contributory factors.
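As a simple illustration of why weighting alone cannot rescue an unrepresentative sample, the sketch below applies post-stratification weights to one demographic variable. The figures are assumptions made up for the example: weighting corrects imbalance on the variables used as targets, but it cannot repair a sample that is politically skewed within those groups.

```python
# Sketch: post-stratification weighting of a sample to population targets
# on one demographic (age group). All figures are illustrative assumptions.

sample_counts = {"18-34": 150, "35-54": 400, "55+": 450}         # achieved sample
population_shares = {"18-34": 0.28, "35-54": 0.34, "55+": 0.38}  # assumed targets

n = sum(sample_counts.values())
weights = {
    group: (population_shares[group] * n) / count
    for group, count in sample_counts.items()
}
print(weights)  # weight applied to each respondent in a group

# Even after this correction, if the 18-34s (say) who join polls lean more
# towards one party than 18-34s in general, the weighted estimate stays biased.
```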

A surprising feature of the 2015 election was the lack of variability across the polls in estimates of the difference between the Labour and Conservative vote shares. We found evidence that 'herding' – whereby pollsters make design and reporting decisions that cause their polls to vary less than would be expected given their sample sizes – played a part in forming this statistical consensus. This does not, however, imply malpractice on the part of the polling organisations.
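A short sketch of the benchmark involved: under simple random sampling, a poll of a given size should show a certain amount of random variation in the Conservative-Labour lead. The parameter values below are assumptions chosen for illustration, not figures from the inquiry.

```python
# Sketch: expected sampling variability of the Conservative-Labour lead
# under simple random sampling. Parameter values are illustrative.
import math

n = 1000                     # assumed sample size per poll
p_con, p_lab = 0.34, 0.34    # roughly tied shares, as in the final polls

# Standard error of the difference between two shares from one sample.
se_lead = math.sqrt((p_con * (1 - p_con) + p_lab * (1 - p_lab)
                     + 2 * p_con * p_lab) / n)
print(f"Expected SD of the lead: about {100 * se_lead:.1f} points")

# If the published leads cluster much more tightly than this benchmark,
# that is the pattern described above as consistent with 'herding'.
```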

Our report makes a number of detailed recommendations designed to address these issues. We also call for more transparency in how polls are conducted and reported, both by the polling industry and in the media.

But we shouldn't get carried away. There are improvements that can and should be made to how polling is currently practised in the UK, but polling remains the most accurate means of estimating vote shares in election campaigns and this is likely to remain the case for the foreseeable future.

Our full report and findings are published today and will be presented and discussed at a special session of the National Centre for Research Methods Festival later this year.

Professor Patrick Sturgis is director of the ESRC National Centre for Research Methods at the University of Southampton and led the independent inquiry into the election polls commissioned by the British Polling Council and the Market Research Society.

The opinions in politics.co.uk's Comment and Analysis section are those of the author and are no reflection of the views of the website or its owners.