Far from producing faster, more accurate results, the eCounting was slower and left everyone concerned with serious misgivings — and no confidence whatsoever that the results were correct.

Today ORG launches its collated report into all of the various eVoting and eCounting experiments that took place in May — documenting the fiascos that occurred not only in Bedford but also in every other place that ORG observed. Their headline conclusion is “The Open Rights Group cannot express confidence in the results for areas observed” — which is pretty damning.

In Bedford, we noted that prior to the shambles on the 4th of May the politicians and voters we talked to were fairly positive about “e” elections — seeing them as inevitable progress. When things started to go wrong, they changed their minds…

However, there isn’t any “progress” here, and almost everyone technical who has looked at voting systems is concerned about them. The systems don’t work very well, they are inflexible, they are poorly tested and they are badly designed — and then when legitimate doubts are raised as to their integrity there is no way to examine the systems to determine that they’re working as one would hope.

We rather suspect that people are scared of being seen as Luddites if they don’t embrace “new technology” — whereas more technical people, who are more confident of their knowledge, are prepared to assess these systems on their merits, find them sadly lacking, and then speak up without being scared that they’ll be seen as ignorant.

The ORG report should go some way to helping everyone understand a little more about the current, lamentable, state of the art — and, if only just a little common sense is brought to bear, should help kill off e-Elections in the UK for a generation.

The Home Office revised their proposals a bit — in the face of considerable lobbying about so-called “dual-use” tools. These are programs that might be used by security professionals to check whether machines are secure, and by criminals to look for insecure ones to break into. In fact, most of the tools on a professional’s laptop, from nmap through wireshark to perl, could be used for both good and bad purposes.

The final wording means that to successfully prosecute the author of a tool you must show that they intended it to be used to commit computer crime; and intent would also have to be proved for obtaining, adapting, supplying or offering to supply… so most security professionals have nothing to worry about — in theory. In practice, of course, being accused of wickedness and having to convince a jury that there was no intent would be pretty traumatic!

The most important issue that the Home Office refused to concede was the distribution offence. The offence is to “supply or offer to supply, believing that it is likely to be used to commit, or to assist in the commission of [a Computer Misuse Act s1/s3 offence]”. The Home Office claim that “likely” means “more than a 50% chance” (apparently there’s case law on what “likely” means in a statute).

This is of course entirely unsatisfactory — you can run a website for people to download nmap for years without problems, then if one day you look at your weblogs and find that everyone in Ruritania (a well-known Eastern European criminal paradise) is downloading from you, then suddenly you’re committing an offence. Of course, if you didn’t look at your logs then you would not know — and maybe the lack of mens rea will get you off? (IANAL! So take advice before trying this at home!)

The hacking tools offences were added to the Computer Misuse Act 1990 (CMA), along with other changes to make it clear that DDoS is illegal, and along with changes to the tariffs on other offences to make them much more serious — and extraditable.

Every so often I set an exam question to which I actually want to know the answer. A few years back, when the National Lottery franchise was up for tender, I asked students how to cheat at the lottery; the answers were both entertaining and instructive. Having a lot of bright youngsters think about a problem under stress for half an hour gives you rapid, massively-parallel requirements engineering.

This year I asked about phishing: here’s the question. When I set it in February, an important question for the banks was whether to combat phishing with two-factor authentication (give customers a handheld password calculator, as Coutts does) or two-channel authentication (send them an SMS when they make a sensitive transaction, saying for example “if you really meant to send $4000 to Latvia, please enter the code 4715 in your browser now”).
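The two-channel idea can be sketched in a few lines of code. This is a minimal illustration only, with hypothetical names (`start_confirmation`, `confirm`) and an assumed five-minute expiry policy — real banking systems are far more involved. The key property is that the code sent by SMS is tied to a human-readable description of the specific transaction, so a phisher who silently alters the payee or amount should be caught out when the customer reads the message.

```python
import hmac
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed policy: codes expire after five minutes

# transaction id -> (code, expiry); a real system would persist this server-side
pending = {}


def start_confirmation(txn_id: str, description: str) -> str:
    """Generate a one-time code and return the SMS text to send to the customer."""
    code = f"{secrets.randbelow(10000):04d}"  # four-digit code, e.g. "4715"
    pending[txn_id] = (code, time.time() + CODE_TTL_SECONDS)
    return (f"If you really meant to {description}, "
            f"please enter the code {code} in your browser now")


def confirm(txn_id: str, typed_code: str) -> bool:
    """Execute the transaction only if the customer typed back the SMS code in time."""
    entry = pending.pop(txn_id, None)  # single use: remove on first attempt
    if entry is None:
        return False
    code, expiry = entry
    if time.time() > expiry:
        return False
    # constant-time comparison, so the check doesn't leak the code via timing
    return hmac.compare_digest(code, typed_code)
```

For example, `start_confirmation("t1", "send $4000 to Latvia")` produces the SMS text, and the transfer goes through only if `confirm("t1", ...)` is later called with the same four digits. A man-in-the-middle can relay the customer’s password, but cannot stop the customer seeing “$4000 to Latvia” in the message.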

At least two large UK banks are planning to go two-factor – despite eight-figure costs, the ease of real-time man-in-the-middle attacks, and other problems described here and here. Some banks have thought of two-channel but took fright at the prospect that customers might find it hard to use and deluge their call centres. So I set phishing as an exam question, inviting candidates to select two protection mechanisms from a list of four.

The overwhelming majority of the 34 students who answered the question chose two-channel as one of their mechanisms. I’ve recently become convinced this is the right answer, because of feedback from early adopter banks overseas who have experienced no significant usability problems. It was interesting to have this insight confirmed by the “wisdom of crowds”; I’d only got the feedback in the last month or so, and had not told the students.

Ross

PS: there’s always some obiter dictum that gives an insight into youth psychology. Here it was the candidate who said the bank should use SSL client certificates plus SMS notification, as that gives you three-factor authentication: something you know (your password), something you have (your SSL cert) and something you are (your phone). So now we know 🙂