There's been a newsworthy headline about this report: that there was foreign interference, some sort of denial-of-service attack. To check this, let's look at the meat of the report:-

88. On 7 June 2016, hours before the deadline for individuals to register to vote at the EU referendum, the voter registration website crashed. The collapse of the website, managed by the Cabinet Office and the Government Digital Service, was blamed on “unprecedented demand” for the service, with 515,256 online applications to register to vote recorded on 7 June (the previous record for the largest number of online applications received in a day was 469,047 on 20 April 2015).

For a website serving a potential user base of around 50m people, many of whom were not yet registered, 515,256 applications in a day is high, but not exceptionally so. It's at a level that I'd have expected someone to test for.
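As a back-of-envelope check, the headline number works out to a fairly modest request rate. The spike factor and requests-per-application multiplier below are my own assumptions, not figures from the report:

```python
# Rough capacity estimate for the registration site on deadline day.
# Only the 515,256 figure comes from the report; the rest is assumed.
applications = 515_256
seconds_per_day = 24 * 60 * 60

avg_rate = applications / seconds_per_day
print(f"Average: {avg_rate:.1f} applications/second")

# Assume (hypothetically) that half the day's traffic arrives in the
# final two hours before the midnight deadline, and that each
# application involves ~10 HTTP requests (pages, assets, form posts).
peak_apps_per_sec = (applications * 0.5) / (2 * 60 * 60)
peak_requests_per_sec = peak_apps_per_sec * 10
print(f"Assumed peak: {peak_apps_per_sec:.0f} applications/s, "
      f"~{peak_requests_per_sec:.0f} requests/s")
```

Even with a generous deadline spike, that's a few hundred requests per second: well within the range a properly provisioned site should be tested against.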

This is more interesting:-

94. In addition, the report found that performance testing of the website was “limited, and the conclusions drawn from the results were not sufficiently detailed or tested”, with mistaken assumptions about the likely traffic on the website. Indeed, while load testing did result in system performance problems, “it was assumed that such a load would not occur”. Furthermore, the report found that performance tests “did not continue to the point of destruction - which would have flagged up the system’s breaking point in advance”.

So, the performance tests were run, they flagged problems, and you went ahead anyway? No-one thought this was a problem, even though they'd written the tests, and no-one thought to put any contingency in place in case a supplier's tests failed. The report also talks about a lack of automated testing, which is nothing short of scandalous for any large-scale website built today.
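Testing "to the point of destruction" just means stepping the load up until the system fails, and recording where that happens. A minimal sketch of the idea, against a simulated service (the capacity figure is invented; a real test would fire HTTP traffic at a staging environment with a tool like JMeter or Locust):

```python
# Sketch of destruction testing: ramp load until the service breaks,
# then report the breaking point. The "service" here is a stand-in
# with an invented capacity, not a real HTTP endpoint.

SIMULATED_CAPACITY = 400  # requests/s the fake service can absorb (assumption)

def service_handles(load_rps: int) -> bool:
    """Stand-in for firing load_rps requests/s and checking error rates."""
    return load_rps <= SIMULATED_CAPACITY

def find_breaking_point(start: int = 50, step: int = 50) -> int:
    """Increase load in steps until the service fails; return that load."""
    load = start
    while service_handles(load):
        load += step
    return load

breaking_point = find_breaking_point()
print(f"Service failed at {breaking_point} requests/s")
```

The point is not the numbers but that the test keeps going *past* the expected load. Stopping at the forecast, as the report describes, tells you nothing about how much headroom you actually have.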

I'm going to go with "project managers who didn't do their job in Whitehall" over "black hat hackers in ushankas".

About the author

Tim Almond is a software consultant specialising in web applications for the ASP.NET stack.