Pollwatch: Early thoughts on GE2015 polls
By the ComRes Political Team: Andrew Hawkins - Chairman, Katharine Peacock - Managing Director, Tom Mludzinski - Head of Political Polling, Andy White - Head of Innovation, Adam Ludlow - Senior Consultant, Tom Clarkson - Research Team Leader

It has been a difficult few days for pollsters.  As we said on Friday, now is the time to review our methods.  ComRes fully supports the British Polling Council (BPC) and its widely publicised review of polling methodologies.  As founding BPC members we will cooperate with the review and have full confidence in its legitimacy.

Although we were pleased to be the most accurate of the pollsters at the 2015 election, it was a bittersweet moment.  We had worked tirelessly to adjust and improve our methodology, but, like all the pollsters, it still wasn’t close enough.

It is far too soon to draw firm conclusions, but different pollsters may have arrived at superficially similar inaccuracies via different routes.  This means that alongside any industry-wide review, we believe it is necessary to devote time and resources to unpicking our own findings.

We constantly review our methods.  Colleague Adam Ludlow picked up on a clear Conservative lead in the telephone polls from January to April.  Another colleague, Andy White, identified much higher levels of Conservative support among high turnout demographics (older, more affluent, home owning).  The result now gives us a benchmark against which to judge our findings, and our team of analysts is well placed to hit the ground running.

We are in the business of measuring what people think and do – and we normally get it right.  Some commentators (in particular, Matt Singh) have identified proxy measures which correctly called the election (e.g. leadership ratings, perceived economic competence of parties, expected error adjustments, the National Equivalent Vote calculated from local election results).  These factors will influence our thinking, but we know we must go beyond that to really interrogate how we measure what people think – because as well as measuring party support, our role is also to show why people think and act in certain ways.

We will need to better understand the factors at this election which made it more challenging to quantify what people thought and did, and then develop ways of reliably surmounting these obstacles in future.

Dealing with the five-year cycle

Unlike the USA or Australia, where high-turnout elections happen every two to three years, British voters face a five-year wait between what political scientists call “first-order” elections.  To put that into context, between the 2010 and 2015 General Elections, around 3.7m people in the UK died and around 4.1m people reached the age of 18.

During that period, Lib Dem support collapsed, the SNP surged in Scotland, UKIP surged in England, we switched to fixed-term parliaments, and a new system of individual voter registration was introduced.  We need to develop a way of better measuring opinion in volatile situations.

One area we may explore is the development of interim benchmarks to allow us to fine-tune our methodology over the course of the parliament.  This could perhaps include using some of the proxy measures like leadership, economic competence, and the National Equivalent Vote as early warning indicators.  Of course, any solution will need to be carefully evidenced.

Identifying the electorate

Most of the nationally representative surveys we conduct are of the general population.  By contrast, voting intention figures reflect the views of a subset of the general population: those who are both registered to vote (the electorate) and likely to vote, either via postal ballot or on the day.

The challenges here are twofold.  First, the newly introduced individual voter registration process is fluid (the deadline was 20 April).  The demographic profile of individual registrants is not as stable as the general population, for whom accurate census data can be used to establish quotas and weights.
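Where census targets are available, the logic of bringing a sample into line with the population can be sketched very simply.  The following is an illustrative example only, not ComRes's actual weighting scheme, and the figures in it are hypothetical: each respondent in a demographic cell receives a weight equal to that cell's population share divided by its sample share.

```python
# Minimal sketch of cell weighting (illustrative, not ComRes's method):
# each respondent in a demographic group gets
#   weight = population share / sample share,
# so the weighted sample matches known (e.g. census) targets.
def cell_weights(sample_counts, population_shares):
    """sample_counts: respondents per group; population_shares: target
    proportions per group (summing to 1).  Returns one weight per group."""
    n = sum(sample_counts.values())
    return {group: population_shares[group] / (sample_counts[group] / n)
            for group in sample_counts}

# Hypothetical age profile: the sample under-represents 18-34s,
# so that group is weighted up and the older groups weighted down.
weights = cell_weights(
    sample_counts={"18-34": 200, "35-54": 400, "55+": 400},
    population_shares={"18-34": 0.28, "35-54": 0.34, "55+": 0.38},
)
```

The difficulty the paragraph above identifies is precisely that no equivalent of `population_shares` exists for the individually registered electorate, whose profile shifts right up to the registration deadline.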

Second, as Andy White argued before the election, some survey respondents undoubtedly overstate their likelihood to vote – and we do already apply adjustments to account for this.  Overstatement is complex, however, and may be the result of self-deception (not acting on good intentions), difficulty assigning a probability to one’s future behaviour, or embarrassment.
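One common family of adjustments, sketched below purely for illustration (it is not a description of ComRes's published model, and the sample data are invented), counts each response in proportion to the respondent's self-reported likelihood to vote on a 0-10 scale, so a "5 out of 10" respondent contributes half as much as a certain voter.

```python
# Illustrative turnout weighting (a generic approach, not ComRes's
# actual adjustment): discount each voting intention by the
# respondent's stated 0-10 likelihood to vote.
from collections import defaultdict

def weighted_vote_shares(respondents):
    """respondents: list of (party, likelihood_0_to_10) tuples.
    Each response counts in proportion to likelihood / 10.
    Returns percentage shares rounded to one decimal place."""
    totals = defaultdict(float)
    for party, likelihood in respondents:
        totals[party] += likelihood / 10.0
    grand_total = sum(totals.values())
    return {p: round(100 * t / grand_total, 1) for p, t in totals.items()}

# Invented mini-sample: Labour's raw count matches the Conservatives',
# but its supporters report lower likelihoods, so its share is discounted.
sample = [
    ("Con", 10), ("Con", 9), ("Lab", 10), ("Lab", 5),
    ("Lab", 4), ("LD", 8), ("UKIP", 10),
]
shares = weighted_vote_shares(sample)
```

The limitation the paragraph identifies applies directly here: if overstatement stems from self-deception or embarrassment rather than honest uncertainty, the stated likelihood itself is biased and a linear discount will not fully correct it.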

We may therefore need to explore stronger methods of identifying registered and likely voters.  Again, any approach would need to be robust and well evidenced.

Reaching the correct diagnosis

Much was made of “Shy Tories” at the 1992 election, and many pundits discussed the possibility of a new generation of Shy Tories in 2015, or at least a reactivation of the 1992 one.

We would strongly caution against treating this result as immediate vindication of that theory.  The percentage figures in a voting intention survey interact with each other – so understatement of one party’s support could in fact be a consequence of overstatement of another party’s support.  Equally, it could be a sampling issue.

We raise this point because ComRes conducted four ‘battleground’ polls on behalf of ITV News:

  1. Labour-held seats in Scotland (26-28 March)
  2. Lib Dem-held, Conservative targets in the South West (10-12 April)
  3. Conservative-held, UKIP targets in England (17-19 April)
  4. Conservative-held, Labour targets in England and Wales (24-26 April)

These surveys reduced the election to the series of two-way battles an election strategist would see when surveying the map of Britain.  Indeed, while there has been much talk of the multi-party nature of modern British politics, our first-past-the-post voting system still tends towards two-way contests (or safe seats) at the local level.

Fascinatingly, where the Labour Party was not a significant player, our methodology appears to have held up relatively well.

Our poll of Lib Dem-held, Conservative targets correctly predicted the Lib Dems’ total collapse.  Likewise, our survey of Conservative-held, UKIP targets in England showed that UKIP were a long way from winning significant numbers of seats from the Conservatives, and it too held up relatively well on the night.

Our survey of Labour targets in England and Wales perhaps gets to the nub of the problem. It suggested a 3.5-point swing to Labour when there was barely any swing at all on the day. This could be significant, because Labour-Conservative marginals are, in footballing terminology, the “six-pointers” of the political world.  One slip can have rather a large effect on your season.
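For readers unfamiliar with the term, the conventional two-party (Butler) swing quoted here is the average of one party's gain and the other's loss in share.  A quick sketch, using invented vote shares rather than figures from our polls:

```python
# Conventional two-party (Butler) swing, in percentage points:
# the average of Labour's change in share and the Conservatives'
# loss in share.  A positive result is a swing to Labour.
def butler_swing(con_before, lab_before, con_after, lab_after):
    return ((lab_after - lab_before) + (con_before - con_after)) / 2

# Hypothetical example: Labour up 3 points and the Conservatives
# down 4 points averages out to a 3.5-point swing to Labour.
swing = butler_swing(con_before=40, lab_before=30, con_after=36, lab_after=33)
```

Because the measure pools both parties' movement, an error in either party's estimate (overstated Labour support or understated Conservative support) shows up in the same headline swing figure.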

The Scottish poll, while showing a mammoth 19-point swing to the SNP, similarly appears to have overestimated Labour support (although we should caveat that by saying that it was conducted before the main leaders' debates, and six weeks away from polling day).

The fact that swing inaccuracy was limited mainly to contests between Labour and its major opponents in England and Scotland was both destructive (it dented confidence in pollsters) and highly instructive: it implies that – in the ComRes polls at least – overstated Labour support (a “Keen Left” perhaps), rather than just understated Conservative support, may have influenced the figures (of course, like any complex system, multiple factors are likely to be at play).

So one of the issues we will be exploring in considerable depth is party-specific factors in voting intention polls.  It has been noted that, all things being equal, voters have a slight preference for the status quo.  We may therefore need to revisit the way we record voting intention, ascertain strength of support for a particular party, and distribute the preferences of undecided voters.  Of course, any such change would need to be carefully evidenced.

Our commitment

ComRes is committed to upholding the integrity and reputation of the polling industry.  Our overriding aim is not to treat this as a short-lived PR challenge, but rather to get the long-term statistics right.  We were closer than our peers, but not close enough.  Our review will be thorough, well resourced, and comprehensive.  We will share our findings with the independent British Polling Council review, and will support that process in whatever way we can.

We are committed to the highest standards of transparency, and will continue to communicate our findings via these regular Pollwatch briefings.