How accurate will the Scottish independence referendum polls be?

This is an updated and modified version of a blog post that first appeared on 11th September.

The vote intention polls for the Scottish independence referendum seem to have been taken largely at face value by commentators, politicians and even the financial markets. In particular, the roughly equal split of the vote between Yes and No in several recent polls is being interpreted as evidence that the result of the referendum is likely to be close. But how accurate are opinion polls as predictors of referendum outcomes and how accurate are polls for elections in Scotland more generally?

Looking at 16 recent and/or pertinent constitutional referendums (listed below) we can consider the general accuracy of the headline figures of final polls (after Don’t Knows have been set aside) on the occasion of such ballots. I use ‘final’ in the sense of last, averaging as many polls as were clustered together closest to polling day, but not more than one per company. I have ignored cases where there were not at least two polls within ten days of the vote.
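As an illustration of the averaging rule described above, here is a minimal sketch in Python. The figures are invented for illustration, not real poll data; the rule is: take the final cluster of polls (at most one per company), set aside the Don’t Knows, and average the resulting Yes shares.

```python
# Hypothetical final-poll raw counts (invented data, one poll per company,
# clustered closest to polling day).
final_polls = {
    "Company A": {"yes": 470, "no": 480, "dont_know": 50},
    "Company B": {"yes": 450, "no": 460, "dont_know": 90},
    "Company C": {"yes": 510, "no": 490, "dont_know": 0},
}

def yes_share_excluding_dks(poll):
    """Headline Yes share (%) once Don't Knows are set aside."""
    decided = poll["yes"] + poll["no"]
    return 100 * poll["yes"] / decided

average_yes = sum(yes_share_excluding_dks(p)
                  for p in final_polls.values()) / len(final_polls)
print(round(average_yes, 1))
```

Comparing this average with the eventual Yes share of the two-way vote gives the over- or under-estimate discussed below.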

In no fewer than 12 of the 16 cases the average vote for Yes (which in each case was also the change option) in the final polls was higher than was found in the ballot boxes. The twelve include the referendums on introducing the Alternative Vote in 2011 (5-point difference), Welsh devolution in 2011 (4-point difference) and 1997 (3 points), the Good Friday Agreement in 1998 (3 points), Quebec independence in 1995 (4 points) and Scottish devolution in 1979 (3 points). The 12 also include the four-point over-estimation of the Yes vote in the 1975 European Community referendum, in which a Yes vote was, strictly speaking, a vote for the status quo, but only a recently established one. In truth some of these averages mask spot-on predictions (such as ICM in Wales in 1997 and for the AV referendum), but the average tendency is still remarkable.

Strikingly, one of the remaining four cases was the Scottish Parliament referendum in 1997, which involved two questions. The final polls (after excluding Don’t Knows) underestimated the vote for establishing a parliament by one point, while support for tax-raising powers was underestimated by four points, the only case where the Yes vote was underestimated by more than the margin of error for an individual poll. By contrast, the polls overestimated Yes by more than the traditional +/- 3 point margin of error in seven of the 16 referendums.
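For readers wondering where the conventional +/- 3 point margin of error comes from: it is the half-width of a 95% confidence interval for a 50/50 split in a simple random sample of about 1,000 respondents, a typical poll size. A quick check of the arithmetic:

```python
import math

# 95% margin of error (percentage points) for a proportion p
# in a simple random sample of size n.
n = 1000   # typical poll sample size
p = 0.5    # worst case: an even split
moe = 1.96 * math.sqrt(p * (1 - p) / n) * 100
print(round(moe, 1))  # roughly 3.1 points
```

Real polls use quota or panel samples rather than simple random samples, so this figure is only a conventional benchmark, not a true sampling error.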

It is perhaps also significant that the only two cases where the vote for change was underestimated by more than a percentage point (Ireland May 2012 and Scotland 1997) were ones where the Yes vote was more than 60%. While there are examples of close referendums in which the Yes side did worse than expected (such as Quebec in 1995, where the final polls said Yes but the result was No), there does not seem to be a precedent for a close referendum in which the final polls underestimated the Yes vote.

The tendency for final polls to differ from the actual result does not necessarily mean that referendum polls are biased towards Yes responses. It might be that the Don’t Knows split disproportionately towards No, or that those in favour of the proposition tend to be less likely to turn out to vote, while late swing is also a possibility. Whatever the reason, the experience of referendum polls in the UK and internationally suggests that the findings of final polls (from which the Don’t Knows have been removed) are typically flattering for the Yes camp.

The record of polls at recent elections in Scotland is also sobering for the Yes camp. True, the SNP secured a majority of seats in the Scottish Parliament in 2011, exceeding the expectations of the pre-election polls. But they did so on just 45% of the vote, and the polls only seriously underestimated the regional, not the constituency, vote. However, the SNP managed only 29% of the vote in May this year, well below the 37% at which it stood in the two polls conducted closest to polling day. The party similarly under-performed relative to the estimates of the polls at the 2010 general election. So while the experience of recent elections provides mixed evidence, it gives more reason to believe that the strength of the nationalist cause is being over-estimated rather than under-estimated in the referendum polls.

But apart from analysing the performance of the polls at previous referendums and elections, it is important also to consider how the particular nature of this extraordinary referendum might lead to a discrepancy between the final polls and the actual result.

This is the first major ballot in the UK for which the polls have overwhelmingly been conducted on the internet. Proponents claim respondents give more honest answers to sensitive questions online because there is no interviewer. Estimates of the average differences between polling companies up to 6th September show that the internet polls have produced, on average, considerably smaller leads for No over Yes than those using other modes, whether telephone (Ipsos MORI) or face-to-face (TNS BMRB). In particular, the newest internet pollsters (Panelbase and Survation) have had more encouraging results for the Yes camp than the much more experienced YouGov, with ICM, long established but relatively new to internet polling, in between. However, these hitherto relatively stable differences seem to have disappeared in the last week. Unless they re-emerge, any discrepancy that may occur between the result and the average of the final polls will not be attributable to the prevalence of internet polls.

Differential turnout, in which one side’s supporters turn out in larger numbers, could lead to a gap between the polls and the vote. At elections some pollsters try to adjust for differential turnout by weighting respondents according to their self-reported likelihood of voting, but this does not always make their headline figures more accurate. Several recent referendum polls have had over 90% saying that they would certainly vote, leaving little room for weighting by differential turnout to make much difference.
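One common form of the turnout adjustment mentioned above is to weight each respondent by their self-reported likelihood of voting. A minimal sketch, with invented respondents scored on a 0-10 likelihood scale:

```python
# Invented respondents: (stated vote, self-reported likelihood of voting, 0-10).
respondents = [
    ("yes", 10), ("yes", 10), ("no", 10),
    ("no", 9), ("yes", 5), ("no", 10),
]

def weighted_yes_share(resps):
    """Yes share (%) with each respondent weighted by likelihood / 10."""
    total = sum(lik / 10 for _, lik in resps)
    yes = sum(lik / 10 for vote, lik in resps if vote == "yes")
    return 100 * yes / total

unweighted = 100 * sum(1 for v, _ in respondents if v == "yes") / len(respondents)
print(round(unweighted, 1), round(weighted_yes_share(respondents), 1))
```

Note how the adjustment only moves the headline figure when likelihood scores differ between the two camps; with over 90% of respondents saying they will certainly vote, almost all weights are near 1 and the correction is tiny.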

It has been suggested that it will be psychologically more compelling for supporters of independence to go to the polls than it will be for those who want to stay in the Union. We do know that the grassroots campaign activity of the Yes campaign is more intense than that of the No campaign, which might lead both to greater success at winning late converts and to a higher turnout by Yes supporters on the day. That said, if turnout is as high as it is expected to be, the scope for significant differences between the turnout rates of the two camps will be relatively limited.

We also have to consider how the Don’t Knows will split. Research from Canadian electoral reform referendums suggests that Don’t Knows split disproportionately towards the status quo, and many commentators think that that is what will happen on Thursday. This does seem more likely than a split in the other direction, though an especially big movement from Don’t Know to No in the last hours of the campaign also seems unlikely. If someone is still undecided at the end of a long and intense campaign on such an important issue it probably means they really are not sure which way they will go.
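The arithmetic of a Don’t Know split is simple but worth making explicit. A sketch with an invented 45-45-10 poll, showing how an even split and a status-quo-leaning split change the final two-way shares:

```python
# Invented poll figures: Yes 45%, No 45%, Don't Know 10%.
yes, no, dk = 45, 45, 10

def final_shares(yes, no, dk, dk_yes_fraction):
    """Allocate the Don't Knows between Yes and No and return (Yes%, No%)."""
    y = yes + dk * dk_yes_fraction
    n = no + dk * (1 - dk_yes_fraction)
    return round(y, 1), round(n, 1)

print(final_shares(yes, no, dk, 0.5))   # Don't Knows split evenly
print(final_shares(yes, no, dk, 0.25))  # Don't Knows break 3-to-1 for No
```

Even a modestly uneven break among the undecided is enough to turn a dead heat into a clear gap, which is why the direction of the split matters so much in a close contest.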

An important consideration is whether the respondents to polls are a representative sample. John Curtice has an excellent piece on this that argues that although previous non-voters are less likely to appear in polls they are actually also less likely to vote Yes. So there is not much in the poll data to suggest that there is a particularly strong tendency amongst the kinds of people that pollsters have been unable to reach to vote Yes; rather the opposite seems more likely to be the case.

Above all, of course there is no guarantee that respondents to polls vote the way they tell a pollster they will. The pro-Labour bias in the opinion polls on the occasion of the 1992 UK general election was attributed in part to a ‘spiral of silence’ or ‘shy Tory’ effect, and in part to late swing. People were less likely to report favouring an unfashionable option (spiral of silence) or flirted with the idea of the more hopeful but risky option only to get cold feet at the last moment (late swing). Applying these notions to the Scottish referendum again suggests the possibility of the final tally for Yes being lower than that in the polls.

So overall the evidence is mixed, but not balanced. It seems more likely that the headline poll figures are over- rather than under-estimating the vote for Scottish independence – and that this might be especially true of the final polls published between now and polling day.


Acknowledgements: Thanks to Alan Renwick of Reading University for much of the data and to both him and John Curtice for helpful comments.

Notes on the data: The referendums discussed above were Austria (2013), Ireland (2008, 2009, 2011, May 2012, Abolition of the Seanad 2013), New Zealand (2011), Northern Ireland (1998), Quebec (1995), Scotland (1979 and 1997), Sweden (2003), UK (1975, 2011), Wales (1997 and 2011). There are other referendums that I would like to add but for which I have not as yet tracked down any polling data. Any contributions much appreciated.


About the author

Stephen Fisher is an Associate Professor of Political Sociology at the University of Oxford.