First, it’s been confirmed that the pollsters “herded”. Of course we knew the polls had mysteriously aligned in the final hours of the campaign – we could all see that for ourselves. But we know now – as some of us argued at the time – that this was no ordinary statistical anomaly. As Professor Patrick Sturgis, head of the inquiry, confirms: “A surprising feature of the 2015 election was the lack of variability across the polls in estimates of the difference in the Labour and Conservative vote shares. Having considered the available evidence, the Inquiry has been unable to rule out the possibility that ‘herding’ – whereby pollsters made design decisions that caused their polls to vary less than expected given their sample sizes – played a part in forming the statistical consensus.”
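The phrase “vary less than expected given their sample sizes” can be made concrete with a back-of-the-envelope check. The sketch below uses invented final-poll leads and a typical sample size of 1,000 – none of these figures are from the actual 2015 polls – and compares the spread of the leads against the spread that pure sampling error alone would produce:

```python
import math
import statistics

# Hypothetical final-week poll leads (Con minus Lab, in percentage points)
# and a typical sample size -- illustrative numbers, not the 2015 polls.
leads = [0.0, 1.0, -1.0, 0.0, 1.0, 0.0]
n = 1000

# Standard error of an estimated lead (p1 - p2) when the two parties are
# each near 34%: SE = sqrt((p1(1-p1) + p2(1-p2) + 2*p1*p2) / n), in points.
p1 = p2 = 0.34
se_lead = math.sqrt((p1 * (1 - p1) + p2 * (1 - p2) + 2 * p1 * p2) / n) * 100

observed_sd = statistics.stdev(leads)

print(f"SD of leads expected from sampling alone: {se_lead:.2f} points")
print(f"SD of leads observed across the polls:    {observed_sd:.2f} points")
# If the observed spread is well below the expected spread, the polls are
# varying less than sampling error predicts -- the signature of herding.
```

With these made-up numbers the observed spread is roughly a third of what independent random samples of 1,000 should show – exactly the kind of under-dispersion the inquiry could not rule out.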

"The polls were wrong because the pollsters had – inaccurately – manipulated their own samples"

Professor Sturgis also explains how this “statistical consensus” came about. “The primary cause of the failure of the 2015 pre-election opinion polls was unrepresentativeness in the composition of the poll samples. The methods of sample recruitment used by the polling organisations resulted in systematic over-representation of Labour voters and under-representation of Conservative voters”. In other words, the pollsters put too many Labour voters into their samples, and not enough Conservatives.

On the surface this finding is an early contender for the “Statement Of The Blindingly Obvious, 2016” award. But it’s an important one. Remember the polling industry’s own explanations for its errors. “Lazy Labour” voters who couldn’t be bothered to turn up on polling day was one theory. The old favourite, “Shy Tories”, was also trotted out. Then there was the idea there had been a “Late Swing” to the Tories. Some people even speculated that efficient Tory “Micro-targeting” of key marginals might somehow have skewed the results.

It was all rubbish. The polls were wrong because the pollsters had – inaccurately – manipulated their own samples.


This finding is also significant for a number of other reasons. Firstly, it raises the question of what the pollsters were trying to hide. The error in sampling was an obvious one. The morning after the election the pollsters had their own sampling models in front of them. They also had the actual results in front of them, and the result of the exit poll, which had proved largely accurate. Why has it taken seven months for the “truth” to come out?

Another important factor is that changes in sampling are a well-documented device pollsters use to deliberately steer their surveys towards a desired result. Last May the New York Times published an interesting article on this very phenomenon. It found, for example, that in polling for the 2012 US presidential election the company PPP, described as a “Democratic firm”, altered its sampling so that when Barack Obama lost support amongst white voters, more black voters were added to the sample. Although the company attempted to justify the changes on methodological grounds, at the time they were not explained in the firm’s own methodology statements.

Another example the Times identified related to the pollster Pew Research. “Pew Research’s final poll in 2012 showed Mr. Obama ahead by 6 points among registered voters, but only after an ad hoc decision to weight respondents based on how they said they voted in the 2008 presidential election. If Pew had weighted the poll in its usual way, Mr. Obama would have led by 11 points among registered voters.”
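The effect of such an ad hoc weighting decision is easy to demonstrate. The sketch below is a minimal illustration of weighting respondents by their recalled past vote – all of the respondent counts, and the resulting leads, are invented for the example and are not Pew’s actual data:

```python
from collections import Counter

# Invented respondent data: (current preference, recalled 2008 vote).
# The raw sample over-represents people who recall voting Obama in 2008.
sample = ([("Obama", "Obama")] * 470 + [("Obama", "McCain")] * 60 +
          [("Romney", "Obama")] * 80 + [("Romney", "McCain")] * 390)

# Approximate actual 2008 two-party result to weight towards.
target = {"Obama": 0.537, "McCain": 0.463}

# Weight = target share of recalled 2008 vote / its share in the raw sample.
recall = Counter(past for _, past in sample)
weight = {k: target[k] / (recall[k] / len(sample)) for k in target}

def lead(rows, w=None):
    """Obama-minus-Romney lead in points, optionally weighted by past vote."""
    score = Counter()
    for current, past in rows:
        score[current] += w[past] if w else 1.0
    total = sum(score.values())
    return 100 * (score["Obama"] - score["Romney"]) / total

print(f"unweighted lead: {lead(sample):+.1f} points")
print(f"weighted lead:   {lead(sample, weight):+.1f} points")
```

With these made-up numbers the weighting trims a couple of points off the unweighted lead – a smaller version of the five-point shift the Times attributed to Pew’s last-minute decision. The point is not that such weighting is illegitimate, but that a single late methodological choice can move the headline figure substantially.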


The Southampton University team don’t themselves go so far as to claim evidence of deliberate manipulation of the 2015 polls. Indeed, they go out of their way to state: “It is important to note that the possibility that herding took place need not imply malpractice on the part of polling organisations.”

But that’s precisely what the results imply. To believe deliberate herding did not take place, you have to believe the following:

Firstly, that every one of the polling companies independently and miraculously made the same methodological error.

Secondly, that they not only made the same methodological error, but made it in such a way that it miraculously produced exactly the same margin between the Conservatives and Labour – despite the fact that the polls used different samples, different interview techniques, different locations and different time periods.

Thirdly, that this alignment also just happened to miraculously occur around the very final polls of a five-year election cycle.

And fourthly, that the – erroneous – result it produced miraculously happened to be the most convenient one for the pollsters themselves: namely, that the election outcome was “too close to call”.

"So we know the polls herded, we know how they herded and we know why they herded. The one thing we don’t know is precisely which firms were responsible for the herding"

In any case, we don’t need to speculate about whether pollsters manipulated their findings, because the pollsters have admitted it themselves. Survation announced the morning after the result that it had decided not to publish its own “final” poll of the campaign because – in the words of company CEO Damian Lyons Lowe – “the results seemed so ‘out of line’ with all the polling conducted by ourselves and our peers – what poll commentators would term an ‘outlier’ – that I ‘chickened out’ of publishing the figures”.

So we know the polls herded, we know how they herded and we know why they herded. The one thing we don’t know is precisely which firms were responsible for the herding. It’s entirely likely – indeed it’s probable – that some firms actually got the result right. Similarly, it’s probable that some firms got the results wrong, but genuinely believed their results were accurate. The problem is Southampton University have chosen not to publish what their inquiry reveals about what happened to the methodology used by the individual polling companies in the final weeks and days of the campaign.

If the polling companies are serious about transparency, and they’re serious about restoring their battered reputations, they should ask Professor Sturgis to ensure that when his final report is published, it highlights precisely that.