This month, we welcome a new member to the Cyber Technology Institute team.

Dr Francisco Aparicio Navarro received his B.Eng. degree in Telecommunications Engineering, specialising in Computer Networks, from the Technical University of Cartagena, Spain, in 2009, and his Ph.D. in Computer Network Security from Loughborough University, UK, in 2014.

His PhD research focused on the design of a novel Unsupervised and Self-adaptive Anomaly-based Intrusion Detection System, based on a Multi-metric Cross-layer Data Fusion architecture, able to provide real-time attack detection. The system developed during his Ph.D. was successfully licensed to a leading company in the defence sector.
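The general idea of multi-metric cross-layer data fusion can be illustrated with a small sketch. Everything below is an illustrative assumption rather than a description of the actual licensed system: the metric names, the per-metric baselines, the 3-sigma scoring rule, and the simple averaging fusion step (a real system might well use a more sophisticated fusion rule such as Dempster-Shafer combination).

```python
# Toy illustration of multi-metric cross-layer fusion for anomaly detection.
# Each network frame is scored against per-metric baselines drawn from
# different protocol layers; the per-metric anomaly scores are then fused
# into a single decision. All names and thresholds are illustrative only.

def metric_score(value, baseline_mean, baseline_std):
    """Anomaly score in [0, 1]: deviation from baseline in standard
    deviations, capped at 3 sigma."""
    if baseline_std == 0:
        return 0.0 if value == baseline_mean else 1.0
    deviation = abs(value - baseline_mean) / baseline_std
    return min(deviation / 3.0, 1.0)

def fuse(scores):
    """Fuse per-metric scores; here a simple mean, standing in for a
    more sophisticated fusion rule."""
    return sum(scores.values()) / len(scores)

# Per-metric baselines (mean, std), assumed learned from benign traffic.
baselines = {
    "phy_signal_strength": (-50.0, 5.0),   # physical layer
    "mac_retry_rate": (0.05, 0.02),        # MAC layer
    "net_ttl": (64.0, 2.0),                # network layer
}

# One observed frame, anomalous at the physical and MAC layers.
frame = {"phy_signal_strength": -80.0, "mac_retry_rate": 0.30, "net_ttl": 63.0}

scores = {m: metric_score(frame[m], mu, sd) for m, (mu, sd) in baselines.items()}
alert = fuse(scores) > 0.5  # illustrative decision threshold
```

Because the system is unsupervised, the baselines themselves would be learned and continually re-estimated from live traffic rather than fixed as in this sketch; that self-adaptation is what allows real-time detection without labelled attack data.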

From 2013 to 2016, he was a Research Associate in the School of Electronic, Electrical and Systems Engineering at Loughborough University, and from 2016 to 2018 he was a Research Associate in the School of Engineering at Newcastle University, UK.

Between 2013 and 2018, he was part of phase II of the University Defence Research Collaboration in Signal Processing (UDRC) (https://udrc.eng.ed.ac.uk/), a project funded by Dstl/MoD and EPSRC. This video (https://www.youtube.com/watch?v=D7pKrhtWwRk) describes some of his work on multi-stage attack detection using contextual information.

He is an expert in computer networks, cyber security and anomaly detection, with research interests in network security, intrusion detection, and network traffic analysis.

Dr Aparicio Navarro joins us as a new Lecturer in Cyber Security from the beginning of July 2018. We are really pleased to welcome Francisco to our team!

As part of his ongoing efforts to ensure an economically viable post-Brexit Britain, Secretary of State for International Trade Liam Fox has recently released a new Cyber Security Export Strategy for the UK, targeting the period up to 2021. As the UK has experienced some eventful times since the previous strategy was released in early 2014, this was in principle a welcome move.

However, the strategy lacks convincing substance on the technological side, and targets the wrong countries. Liam Fox apparently does not want to admit that the UK’s largest cyber security export market of all is seriously at risk for multiple reasons.

How much, when, and where?

The numbers bear out that in the five-year period up to 2016, the cyber security sector grew massively. Estimates of the size of the world market have gone from between 35 and 120 billion pounds in 2011 to over 150 billion pounds now. UK cyber security exports were stated to be some 800 million pounds in 2011, and for 2021 Liam Fox and his team are aiming for £2.6 billion. That sounds like a solid but ambitious target.

The new strategy mentions a number of target markets. Expansion is particularly aimed for in the USA, the Gulf states, India, Japan, and South-East Asia. In 2016, these together accounted for less than 40% of the UK’s cyber security export market[1]. Of the total IT and telecommunications export for the UK in 2016[2], the US took 22%, Gulf states including Saudi Arabia 4%, India 1%, Japan 1%, Singapore 1%. The US is considered a mature rather than developing market, which means it would only ever grow slowly; even doubling the exports to all the other listed countries would increase total exports by only 7%.

So what were the target markets for the 2014 strategy[3]? Has targeting actually worked over the last period? There was special mention of Brazil because of the Olympics, using London 2012 as a cyber showcase for Rio 2016 – but the total share of IT and telecoms UK exports to Brazil is now 0.5%, having dropped dramatically since 2014. (Let’s see if the similar argument made now for Japan and its Olympics works out this time.) Malaysia was also mentioned because of its early identification of cyber security as an issue in the 1990s – in 2016 it was at 0.3% of the UK total IT exports, slightly below the 2014 level. The Gulf states and India were targets in the previous round, too – with India dipping in recent years on total IT exports. So none of these have contributed much to the near-doubling of UK cyber security exports from £805M in 2011 to £1.5B in 2016.

Don’t mention the EU

With Liam Fox’s position on Brexit all too well-known, maybe it is no surprise that the EU is barely mentioned in the new cyber export strategy. Well – it gets two mentions, both in the context of regulation that the UK is subject to: on weapons exports, and on data protection. We will have to come back to the GDPR later. The importance of the EU to the UK’s cyber exports is evident from the figures. In 2016, the EU-27 accounted for well over half of them. Of the total IT exports from the UK, they have been receiving some 40% over the last few years, with otherwise only the US achieving a double-figure percentage. With the potential of significant trade barriers between the UK and the EU-27 after Brexit, this market has to be considered at serious risk now. Ironically, if any sector knows that strategy may be about avoiding disasters rather than about sketching rosy futures, it’s cyber security!

Interestingly, the lack of reference to the EU in the cyber export strategy is not a 2018 novelty. The 2014 strategy was also looking the other way – maybe justifiably, as trade with Europe in the pre-Brexit days was not really perceived as “export”. So that strategy happily claimed that the US, China, Japan, and India took up some 70% of UK cyber security exports between them – which could only be correct if the EU was excluded. Maybe an indication of how things felt only four years ago – exports to the EU running so smoothly that they were hardly noticed.

GDPR

Next, how can an opinion piece in computing from May 2018 be complete without considering the ominous GDPR? Liam Fox’s advert for his strategy in the New Statesman[4] is probably the exception. At least the export strategy acknowledges that “New regulation such as the EU’s General Data Protection Regulation is driving organisations to build information security into their wider strategy” – in a document which consistently reduces privacy and data protection to just data security.

However, here may be another area in which the strategy fails to consider a risk to the UK’s exports. Post-Brexit, the UK will be implementing a new Data Protection Act which despite its faults[5] still closely matches the GDPR. If the UK were still an EU country, this would be enough for UK cyber businesses to be able to process personal data for European customers. However, with Britain outside the EU, an explicit decision on adequacy of the UK legislation will need to be taken, and the outcome is by no means a certainty according to the European Commission[6]. Doubts in this area relate to the wide-ranging powers of internet surveillance and retention in the UK, but possibly also to exemptions slipped into the Data Protection Bill at various stages.

Will this affect UK cyber security businesses? Certainly not all of them – hardware and many kinds of software contain and process no personal data, so such trade is largely impervious to the GDPR. However, where cyber security software overlaps with AI (another of the UK flagship IT industries according to the government line), and in the cyber intelligence analysis industry, where the market is set to grow dramatically, personal data is likely to play a role. An adjudged lack of data protection in the UK may stop UK companies from successfully providing such services to EU customers, for example in the cloud. So it’s not just “no-deal” and other possible trade barriers that contribute a Brexit risk to the UK cyber industry.

So what is in it?

The strategy certainly contains some interesting insights. For example, “the rise in disruptive digital technologies” is held responsible for the discovery of vulnerabilities, when we had been assuming it was due to ancient bugs, badly designed interfaces, and unimaginative attacker models.

Of course it couldn’t avoid mentioning the UK government’s £1.9B investment in cyber security – Fox’s New Statesman piece even took that for its title. We can’t really tell how much of it has been spent already – but given that it was first announced in 2016 we should hope the pot has been emptied somewhat by now. Much of the export strategy reiterates elements of this old overall strategy, including work on the academic research side that has only a very thin connection to exports, and a picture of the shiny new National Cyber Security Centre building.

The Department for International Trade’s main activities will be “Pursue”, “Enable”, and “Respond”. These represent targeting governments with their CNI (critical national infrastructure), bespoke offers in specific sectors (government; finance; automotive; health; energy and CNI; infrastructure), and rebranded marketing with general exporting advice, respectively. None of the export advice sounds revolutionary: regional representatives, trade fairs, and mentored “growth mindsets” for SMEs.

A vision of where the thematic growth in the UK cyber security industry might or should be is mostly lacking, summarised in the document as “The Digitisation of Everything”. There are brief mentions of AI and the recent government initiatives in that area. We are told that blockchain is “entirely web-based”, and has commercially available applications in “personal identification” – the one area where exports indeed had better be outside the EU, as the GDPR precludes its use for personal data.

Overall, the UK government is presenting a cyber security export strategy which ignores its main export market despite it being under serious threat. Given that this threat is mostly of the politicians’ own making, the blinkered view of the world was maybe unavoidable. This still should not have stopped them from deepening the thematic vision and long-term strategy for the UK cyber industry. Privacy by design, smart cities, assisted living, and the internet of things, for example, are all areas with security dimensions and significant potential within the UK that do not even get a mention. Given world-wide growth in demand, cyber security exports outside the EU will likely grow, but it is not clear whether and how this strategy contributes to that.

This blog post was written by Professor Eerke Boiten, Director of the Cyber Technology Institute at De Montfort University.

[2] Office for National Statistics: Trade in services by country and type of service 2014 to 2016, https://www.ons.gov.uk/economy/nationalaccounts/balanceofpayments/adhocs/008172tradeinservicesbycountryandtypeofservice2014to2016

Here you can read his responses in full to the questions raised in this interview:

Even after the Cambridge Analytica scandal, how safe is our Facebook data? For instance, how do we know our info isn’t used again and again when it comes to FB custom audience/profiling?

EB: Facebook haven’t changed anything substantive since the Cambridge Analytica scandal. They still do profiling on their customers, on all kinds of criteria, including sensitive ones. This means that companies can still market via FB on the basis of race, or on the basis of mental stability. Even when such routes are not directly available, “lookalike” audiences can be created to market to people with similar views and interests. They are trying to stop “political” advertising around particular elections and referenda, but the stories coming out of that suggest they don’t really know yet how to even detect political advertising. A lot of the things FB have said around the CA scandal have been proved incorrect, for example that they stopped the sharing of friends’ info via apps as soon as they found out it was being abused.

What steps, in your opinion, would actually make our data safer?

EB: Now this is where GDPR should make a difference. Companies have to give insight into what they do with people’s data, and show that they can justify what they are doing with it. Experiments relating to mental health, like Facebook have done in the past, would need very explicit permission from the guinea pigs – which they probably wouldn’t give. The problem is that Facebook, Google, and the like have become so large that it is very hard for anyone to properly inspect all of what they are doing. At the moment, we can only look at what creeps out at the seams, along the line of: “if it turns out they’re able to do this, internally they must be applying an algorithm which does profiling for that”. So a significant increase in budget for organisations like the ICO would be essential to keep the internet giants in line.

Should we – digital natives – just resign ourselves to giving over all of this information about ourselves? It’s become so accepted but does it have to be this way?

EB: The problem isn’t even with the information that we give away itself. Most of us know how to apply the privacy settings that make sure it doesn’t get any further than we want it to go. The CA story was a scandal for many people because it violated their expectations about such control of their data: apps on someone’s Facebook leaking information about their friends without permission.

The main problem is with information that is not knowingly given away, such as Facebook like buttons and cookies tracking our web browsing, or Google Maps recording our every movement – and with the information that can be deduced from such tracking on the internet or in the real world. It’s hard to even be aware of how much such tracking exists, and you certainly don’t get many privacy controls on how it is used or passed on. For this, the GDPR should help too, but again it’s hard to enforce a law against such large scale processing by large companies that mostly sit outside the UK and the EU.

An EU-wide cyber security law is due to come into force in May to ensure that organisations providing critical national infrastructure services have robust systems in place to withstand cyber attacks.

The legislation will insist on a set of cyber security standards that adequately address events such as last year’s WannaCry ransomware attack, which crippled some ill-prepared NHS services across England.

But, after a consultation process in the UK ended last autumn, the government had been silent until now on its implementation plans for the forthcoming law.

The NIS Directive (Security of Network and Information Systems) was adopted by the European parliament in July 2016. Member states, which for now includes the UK, were given “21 months to transpose the directive into their national laws and six months more to identify operators of essential services.”

The Department for Digital, Culture, Media and Sport (DCMS) finally slipped out its plans on a Sunday, but – given its spin on fines – it doesn’t seem as though the government was attempting to bury the story.

Interesting spin

The DCMS warned – in rather alarmist language – that “organisations risk fines of up to £17m if they do not have effective cybersecurity measures” in place. There are echoes of the EU’s General Data Protection Regulation (GDPR), by matching its €20m (£17m) maximum penalty level – though the option to charge 4% of turnover for NIS as well was dropped after consultation.

However, exorbitant penalties have been used as a scare tactic by GDPR snake oil salesmen, despite clear statements from the Information Commissioner’s Office (ICO) indicating a cautious regime. Did the DCMS mean to invite overblown headlines about the NIS directive, too?

Another peculiarity is that the government announcement doesn’t once mention the EU. Instead, the NIS directive is presented as an important part of the UK Cyber Security Strategy, even though it is an EU initiative. A pattern is emerging here: the removal of mobile roaming fees, a ban on hidden credit card charges and environmental initiatives have all been claimed as UK policies by Theresa May’s government without any adequate attribution to the EU. Digital minister Margot James said:

We are setting out new and robust cybersecurity measures to help ensure the UK is the safest place in the world to live and be online. We want our essential services and infrastructure to be primed and ready to tackle cyber-attacks and be resilient against major disruption to services.

Who needs to be aware of the NIS directive?

The government consultation response clarifies which operators of essential services and digital service providers the directive will apply to, once transposed into UK law. It uses a narrow definition of “essential”, excluding sectors such as government and food. Small firms are mostly excused from compliance; nuclear power generation has been left out, presumably to cover it exclusively under national security; and electricity generators are excluded from compliance if they don’t have smart metering in place. Digital service providers expected to comply with the NIS directive include cloud services (such as those providing data storage or email), online marketplaces and search engines.

The law requires one or more “competent authorities”, which the UK plans to organise by sector. It means communications regulator Ofcom will oversee digital infrastructure businesses and data watchdog the ICO will regulate digital service providers. They will receive reports on incidents, give directions to operators and set appropriate fines.

It’s worth noting that the ICO, in its multiple roles, could fine a service provider twice for different aspects of the same incident – once for non-compliance with NIS and once for non-compliance with GDPR. But incidents need to be considered significant in order to be on the radar for this directive. Significance will be judged on the number of affected users, the duration and geographical spread of any disruption, and the severity of the impact.
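The shape of such a significance test can be sketched in a few lines. The four criteria (affected users, duration, geographical spread, severity) come from the directive; every threshold value below is hypothetical, since the actual values are left to member states and their regulators.

```python
# Illustrative sketch of an NIS "significant incident" test.
# Criteria follow the directive; all thresholds are hypothetical.

def is_significant(users_affected, duration_hours, regions_affected, severity):
    """severity runs from 1 (minor) to 5 (critical); the incident counts as
    significant if any criterion crosses its (hypothetical) threshold."""
    return (users_affected >= 100_000
            or duration_hours >= 24
            or regions_affected >= 3
            or severity >= 4)

# A WannaCry-style outage: many users, multi-day, nationwide, severe.
major = is_significant(200_000, 72, 10, 5)
# A brief, local glitch would stay below the regulatory radar.
minor = is_significant(500, 2, 1, 2)
```

Whether the real regime ends up using hard thresholds like these, or a more holistic judgement by each competent authority, remains to be seen.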

Clearly, once this legislation is in place, the next WannaCry-style incident will be closely scrutinised by regulators to see how well prepared organisations are to deal with such a major event.

National and international coordination

The coordination of many NIS activities falls to the UK’s National Cyber Security Centre (NCSC), part of the government’s surveillance agency, GCHQ. It will provide the centralised computer security incident response team (CSIRT), and act as the “single point of contact” to collaborate with international peers as a major cyber attack unfolds. The NCSC will play a central role in reporting and analysing incidents, but remains out of the loop on enforcing the law and fines.

Sharing cyber incident information within an industry sector or internationally is important for larger scale analysis and better overall resilience. However, there are risks due to the inclusion of cyber vulnerability implications, business critical information and personal data in such sensitive reports. Two EU research projects (NeCS and C3ISP) aim to address these risks through the use of privacy preserving methods and security policies. The C3ISP project says its “mission is to define a collaborative and confidential information sharing, analysis and protection framework as a service for cybersecurity management.”

More security standards?

The idea of having prescriptive rules per sector was considered and rejected during the UK’s consultation process on the NIS directive. It’s in line with how the GDPR imposes cybersecurity requirements for personal data: it consistently refers to “appropriate technical and organisational measures” to achieve security, without pinning it down to specifics. Such an approach should help with obtaining organisational involvement that goes beyond a compliance culture.

A set of 14 guiding principles was drawn up, with the NCSC providing detailed advice including helpful links to existing cybersecurity standards. However, the cyber assessment framework, originally promised for release in January this year, won’t be published by the NCSC until late April – a matter of days before the NIS comes into force.

Nonetheless, the NIS directive presents a good drive to improve standards for cybersecurity in essential services, and it is supported by sensible advice from the NCSC with more to come. It would be a shame if the positive aspects of this ended up obscured by hype and panic over fines.

This blog post was written by Eerke Boiten, Professor of Cyber Security in the Cyber Technology Institute, De Montfort University.

Cyber warfare is upon us, from interference in elections to a leak of cyber weapons from a national stockpile. And, as with most evolutions in warfare, the world is largely unprepared. Cyber peacekeeping presents significant challenges, which we explore in our research.

Any theatre of war now includes cyberspace. It has been used in targeted attacks to disable an adversary’s capabilities, such as Stuxnet, which disrupted Iran’s ability to enrich weapons-grade uranium. It can also be exploited in traditional warfare through electronic interference with intelligence and communication systems.

With little to guide nations and scant experience to build upon, many states are having to learn the hard way. In the context of warfare, it takes a long time to understand the impact of new technologies. One only need look at the example of landmines to see why. Once considered a legitimate weapon to stifle enemy movement, most countries now agree that landmines are indiscriminate and disproportionate weapons that cause civilian suffering long after a conflict has ended.

It’s possible that cyber warfare holds unknown consequences that future world leaders will agree to ban for similar, gut-wrenching reasons in the aftermath.

There are, however, efforts to fill the gaps in knowledge. Researchers, such as my colleague Michael Robinson, have attempted to characterise cyber warfare to understand how it can be effectively and ethically conducted. These efforts range from creating cyber warfare laws to controlling and restricting cyber weapons.

These efforts are beginning to bear fruit, with the Tallinn Manual – first published in 2013 – offering a comprehensive analysis of how existing international law applies to cyberspace.

Stop the fight

But while a large proportion of research focuses on how to conduct cyber warfare, there is very little research on restoring peace in the aftermath of an online conflict between nation states.

Just as we cannot expect a nation to spring back to peace and prosperity following years of boots-on-the-ground war, countries affected by prolonged periods of cyber warfare also need assistance to recover.

A nation’s reliance on critical infrastructure brings the need to understand the damage cyber warfare can inflict on a society into sharp focus. Computer systems running essential services at hospitals, nuclear power plants and water treatment plants may be infected with advanced malware, which resists removal and prolongs civilian suffering – much like landmines persist long after a conflict ends. The physical effects of cyber weapons make cyber peacekeeping a key enabler to help bring about lasting peace.

After a conventional conflict, interventions to restore peace and security are performed on the international stage. The United Nations (UN), with its white vehicles and blue helmets, is the most widely recognised peacekeeping organisation. It has a long history of maintaining peace around the world and has evolved to match the shifting nature of warfare from inter-state to intra-state conflict over the years.

UN peacekeepers were initially ill-equipped to deal with such a change, which led to high-profile failures such as in Rwanda and Somalia.

With the rise of cyber warfare, peacekeepers will increasingly have to operate in this domain. But are the UN and similar organisations prepared for this expected onslaught or will they suffer a repeat of past failures, having been caught out by changes in the nature of conflict? Protracted UN cyber warfare talks fell apart last year because a consensus couldn’t be reached amid suspicions that reportedly mirrored the Cold War era. Nonetheless, questions must be asked of the UN’s peacekeeping strategy on its readiness to tackle cyber threats.

Peace is the word

Can existing peacekeeping activities simply be adapted for the internet, or should a completely new framework be drawn up to adequately address how to maintain or restore order online? What kind of technical obstacles will cyber peacekeepers encounter? Could they achieve something that contributes towards restoring or maintaining peace?

Disarmament illustrates these operational problems well: the destruction or confiscation of physical armoury means that assets cannot be easily replaced by a warring faction should peace efforts stall or falter. Cyber weapons are predominantly software applications that can be replicated, archived, encrypted and passed on with almost no cost or significant logistic efforts, research shows.

The effectiveness of cyber weapons diminishes once the vulnerabilities they have exploited become known, so one approach would be to publish detected cyber weapons to render them obsolete. Responsible disclosure would allow vendors to come up with fixes and give potential victims a chance to apply the patches – which can be a lengthy process.

Doing so “destroys” all cyber weapons of this kind – regardless of whether they belong to any of the warring factions. This approach has a nasty side-effect: it inadvertently leads to a proliferation of cyber weapons, because it’s easier for other nations or criminals to acquire the technology before adequate protections can be put in place on a global scale. It also throws up political challenges.

Conventionality belongs to yesterday

It’s no secret that the UN struggles to find money for peacekeeping contributions. The US, the largest contributor to the UN budget by far, has – under President Trump – disagreed with how the organisation is governed, and confirmed it will reduce payments to the peacekeeping budget.

If securing troops under tight budget restrictions is already difficult, then securing highly skilled cyber personnel in a competitive global market will be even more challenging.

And there’s an additional complication: those countries conducting cyber warfare are the advanced nations, many of which already contribute the lion’s share of UN funding and possess the greatest cyber expertise. Would they be willing to contribute their knowledge, wealth and people to aid their adversaries?

Conflict affects every nation, so it’s in everyone’s interests to have an internationally available capability to restore peace and security in the aftermath of cyber warfare.

This blog post was written by Helge Janicke, Professor of Computer Science and Head of the School of Computer Science and Informatics at De Montfort University.

Is finding out that users don’t comply with the policy a nightmare scenario for an IT security officer, for example at the House of Commons? Hardly. Unless you find out through Twitter, of course, along with the rest of the world (see https://twitter.com/NadineDorries/status/937019367572803590).

A policy that only demands self-evident behaviour contributes nothing, and probably does not solve any problem. For a realistic policy in an ever-changing cyber security landscape, you should expect some aspects of compliance to be strenuous initially, and more of them over time. It is counterproductive to assume that security versus utility is a zero-sum game, but trade-offs are always likely. The research area of “usable security” works to minimise this effect.

So you have to monitor policy compliance. Probably not through social media research, though. It would be interesting to see how compliance gets checked in the House of Commons. There is a decent chance that there’s education and advice but otherwise reliance on individual MPs’ responsibility. That worked for everything including MPs’ expense claims, until we realised that it didn’t. To complicate things, IT security where it concerns the Data Protection Act does devolve to individual MPs, as they are all separate data controllers.

IT security policy compliance should be monitored to cover the risks that the policy is supposed to mitigate. Business should normally link non-compliance to disciplinary procedures. As some tweets said this week, sharing logins is a sacking offence in some businesses. Non-compliance can also be an indication of changes in cyber risks and risk perception, and changes in business processes – so the exact areas of non-compliance may just be where the security policy needs to reflect such changes.

Most of all, however, usable security research tells us what the ultimate value of non-compliance information is: it indicates where users have found security too burdensome, and where they have found their own workarounds. This is also known as “shadow security”, and it creates the seams through which cyber risks can come into the organisation.

Is the password for the shared drive too hard to remember? Sharing logins is one solution for sharing files. Another is to use the cloud (Dropbox, Google Drive, etc.) or, worse, a USB stick. So links to just about anywhere on the internet can refer to official documents – or not – and a USB stick casually passed on can contain important official information. And be lost on the train. All this normalises dubious cyber hygiene.

Is communication by email not secure enough, maybe because emails can even be read by interns on exchange programmes? Create a WhatsApp group for gossip or conspiracy. If the Honourable Member for Backwardbury South defects to the opposition or turns out to be on Putin’s pay list, whose responsibility is it to remove them from the group? Presumably there’s no harm in Facebook knowing who is in the gang either?

These examples should give some indication of the value of knowing about non-compliance with security policy. The response is not simply to shout at the users for misbehaving – it is also to explore where business and security procedures can be integrated in a more usable way.

That does not provide an excuse for the recent behaviour of Nadine Dorries and other MPs. She didn’t exactly raise login sharing as an example of unworkable IT and its workarounds. Rather, it was to make a public argument to dissipate Damian Green’s responsibility for the porn that had been found on his work computer. From an information security perspective, that is inexcusable – and that point of view should be supported by management. One role of logins is to represent a user’s permissions, responsibilities and actions in an IT system in a way that makes them checkable, recordable and auditable. Morally if not also legally, a user should always remain responsible for what is done using their login – the more so if it is willingly shared. Dorries’ alternative to the “maybe his login was hacked” excuse was ill-considered for that reason alone.

This blog post was written by Eerke Boiten, Professor of Cyber Security in the Cyber Technology Institute, De Montfort University.

Professor Eerke Boiten joined the Cyber Technology Institute in April 2017 from the University of Kent where he was the Director of the Cyber Security Research Centre.

Professor Boiten spent the first twenty years of his research career, first in the Netherlands and then in the UK, on mathematics and logic based methods to guarantee and verify the correctness of software. He published over 50 peer reviewed papers on formal methods, including program transformation, viewpoint specification, and refinement in process algebra and state-based systems (e.g. Z). On the latter topic, he authored the monograph “Refinement in Z and Object-Z” with John Derrick (Springer 2004, 2015), and organised many conferences and workshops including the last nine editions of the BCS-FACS Refinement Workshop.

In recent years, he has been applying such techniques in the context of cryptography and security. He led the highly successful UK network on cryptography, security and formal methods CryptoForma. In addressing the broader cyber security research agenda, he also actively engages with other disciplines and external stakeholders.

Professor Boiten has also been a frequent commentator on issues in data security and privacy, including in The Guardian, Le Monde, and frequently in The Conversation, see: https://theconversation.com/profiles/eerke-boiten-104676/. Recent comment topics have included: health data sharing, Google, Facebook, the Right to be Forgotten, surveillance, encryption and Ransomware.

Dr Isabel Wagner is a Senior Lecturer in the Cyber Technology Institute here at De Montfort University. She received her PhD in engineering (Dr.-Ing.) and M.Sc. in computer science (Dipl.-Inf. Univ.) from the Department of Computer Science, University of Erlangen, in 2010 and 2005, respectively. In 2011 she was a JSPS Postdoctoral Fellow in the research group of Prof. Masayuki Murata at Osaka University, Japan.

Dr Wagner has made significant contributions in wireless sensor networks, computing education, and privacy-enhancing technologies. These diverse contributions are united by a focus on measurement and the application of simulation methodology and statistics. Dr Wagner’s work has been published in renowned peer-reviewed journals and conferences and has been cited more than 900 times (Google Scholar).

The following examples illustrate the results of her outstanding research:

In the area of wireless sensor networks, Dr Wagner proposed a new metric for the lifetime of sensor networks. This highly cited work (currently the 6th most-cited paper in ACM Trans. on Sensor Networks) analysed metrics and application scenarios for sensor networks, and proposed a composite metric that can be configured based on the requirements of the application scenario. This metric enables objective comparisons between different algorithms and configurations of sensor networks.
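As an illustration of the idea only (the function, criteria and weights below are hypothetical sketches, not Dr Wagner’s published metric), a configurable composite lifetime metric can be thought of as a weighted combination of simpler liveness criteria, with the weights chosen to match the application scenario:

```python
# Hypothetical sketch of a configurable composite lifetime metric.
# Each snapshot scores several liveness criteria in [0, 1]; the
# application chooses the weights, and "lifetime" is the last time
# step at which the weighted combination still meets a threshold.

def composite_liveness(criteria, weights):
    """Weighted combination of per-criterion liveness scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[name] * score for name, score in criteria.items())

def network_lifetime(snapshots, weights, threshold=0.5):
    """Last time step at which the composite score meets the
    threshold; -1 if it never does."""
    lifetime = -1
    for t, criteria in enumerate(snapshots):
        if composite_liveness(criteria, weights) >= threshold:
            lifetime = t
        else:
            break
    return lifetime

# Example scenario that weights sensing coverage most heavily:
weights = {"alive_fraction": 0.2, "coverage": 0.5, "connectivity": 0.3}
snapshots = [
    {"alive_fraction": 1.0, "coverage": 1.0, "connectivity": 1.0},
    {"alive_fraction": 0.8, "coverage": 0.9, "connectivity": 0.7},
    {"alive_fraction": 0.3, "coverage": 0.4, "connectivity": 0.2},
]
print(network_lifetime(snapshots, weights))  # 1
```

Because the weights are part of the metric, two algorithms can be compared under exactly the same, explicitly stated notion of “lifetime” – which is what makes such comparisons objective.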

In computing education, Dr Wagner has focused on gender equality. In a large statistical study of the achievement of female CS students, she found that across all UK universities, female CS students are awarded significantly fewer first class degrees (corresponding to a 70% average) than male students (published in ACM Trans. on Computing Education).
This result is now informing her local work in supporting female students and making staff aware of unconscious biases.

In the area of privacy-enhancing technologies, Dr Wagner has investigated the measurement of privacy as a prerequisite for objective comparisons between privacy-enhancing technologies. She has proposed a taxonomy for privacy metrics and a general method to assess the strength of privacy metrics. Her study of privacy metrics for genomic privacy (published in ACM Trans. on Privacy and Security) evaluated 24 privacy metrics for genomics and found weaknesses in several common privacy metrics.

Her research has been funded by the Engineering and Physical Sciences Research Council (EPSRC), the Japan Society for the Promotion of Science (JSPS), and major companies. She also acts as an expert reviewer for the EPSRC, the EU Horizon 2020 programme, and several high-ranking journals and serves on the technical program committees of leading conferences.

Yes, you can. Having said that, for the NHS it was probably rather more difficult to avoid.

After last weekend, it is hardly necessary to explain what ransomware is anymore – even if not all media got the details correct. Ransomware is a particular type of malicious software (“malware”) that demands a ransom to return the affected computer to its original state. Like most ransomware, the current variant (“WannaCry”) replaces the user’s data files with encrypted versions for which only the criminals hold the decryption key. Such ransoms usually need to be paid in the online currency “bitcoin”, which means that even paying the ransom is a challenging experience for many of the victims – with the criminals often offering help (!). This is part of the game: the criminals need their victims to build up some trust, so that they will also trust the criminals to deliver when they pay up. Nevertheless, the official advice is still not to pay, as you can never be sure, and nobody likes to support this particular “business” model. As far as we can tell, nobody has yet received a decryption key after paying for this particular infection.

So how could you land with ransomware on your computer?

Old software, missing updates, clicking the wrong links …

All malware relies on “vulnerabilities” in software to take hold. In this case, it was a vulnerability in Microsoft operating systems, for which updates had been sent out in March 2017. Nobody who applied those updates will have been hit by WannaCry. Unfortunately, free public support for Windows XP (not sold since 2008) had stopped in 2014, so no free update was available for it. The vulnerability exists in Windows XP too, and Microsoft had a fix available – initially for a price, but as of this weekend it is also available for free.

The existence of a vulnerability by itself will not normally lead to ransomware infection – it also needs some action by a user. The most common such action these days is clicking on a “wrong” link in an email which looks like it comes from a trusted source (“phishing”, or if it’s cleverly targeted, “spear phishing”). Unfortunately, there is an “arms race” in this area: criminals keep getting better at creating realistic-looking emails, so even though users are more aware of the risks, their chances of spotting the best phishing emails are worse than ever. With all sorts of internet services regularly sending out emails containing bona fide links, this is a problem that will need a radical solution soon.

The NHS, despite a huge IT budget, was always at a higher risk of catching this strain of ransomware than most people at home. Many of its computers still run Windows XP, so they would not have been updated in time. In many cases, moving away from XP is not just a question of simple replacement cost for the NHS (and many other large organisations). They also have crucial software that will not work on newer operating systems or, worse, an XP-based computer may actually be built into a complex medical instrument. Replacing those in their entirety is a much bigger job, and even having had extended XP support over 2014–15, it is not clear the NHS could realistically have done so by now. Most home computers on XP have probably long been retired because they were getting too slow for the newest games …

This aspect of the story won’t go away with Microsoft releasing an XP update to combat WannaCry. Every update released for newer Microsoft operating systems addresses, and thereby implicitly publicises, a vulnerability that may have existed in XP already – with no free public updates provided for it …

Another very political can of worms in this story is that the vulnerability had been known to the NSA, held in their stash of vulnerabilities to exploit when they needed to break into people’s computers. The NSA will likely have known about this one since well before XP support was stopped.

Can you be safe even if you’ve been hit by ransomware?

Yes, provided you had backups of your data. That has always been a good strategy – disc drives can crash, laptops can get stolen, and in this case having a backup allows you to put the original files back in place instead of the maliciously encrypted ones. Because you also need to get rid of the malware, and to avoid re-infecting yourself and others, this is a task that should not be undertaken without expertise.
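The backup-and-restore idea can be sketched as follows (the paths and function names are hypothetical; note also that a real backup drive should be disconnected between runs, or the ransomware may encrypt the backup copies too):

```python
# Minimal offline-backup sketch (illustrative, not a product):
# copy a documents folder to a dated folder on another drive, and
# restore clean copies once the malware itself has been removed.

import shutil
import tempfile
from datetime import date
from pathlib import Path

def backup(source, backup_root):
    """Copy the whole source tree into a dated subfolder of backup_root."""
    dest = Path(backup_root) / f"backup-{date.today():%Y-%m-%d}"
    shutil.copytree(source, dest, dirs_exist_ok=True)
    return dest

def restore(backup_dir, target):
    """Put the clean copies back in place of the encrypted ones.
    Only do this AFTER the malware has been removed."""
    shutil.copytree(backup_dir, target, dirs_exist_ok=True)

# Demo with throwaway directories standing in for real paths:
docs = tempfile.mkdtemp()   # stands in for e.g. "My Documents"
drive = tempfile.mkdtemp()  # stands in for an external drive
Path(docs, "report.txt").write_text("important data")
saved = backup(docs, drive)
Path(docs, "report.txt").write_text("ENCRYPTED")  # simulate the infection
restore(saved, docs)
print(Path(docs, "report.txt").read_text())  # important data
```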

Current Research

Cyber security researchers are addressing all this from various directions, often with interdisciplinary aspects, as some of it relates to how humans operate and can be manipulated. Ransomware encryption methods are being broken, bitcoin payments are being traced on the blockchain, and email filtering is being improved to catch more phishing emails.

Funded by the national research funding agency EPSRC, Professor Eerke Boiten at the CTI is leading EMPHASIS, a £900K research project into all aspects of ransomware, with computer scientists, economists, psychologists and criminologists from the universities of Kent, Leeds and Newcastle, De Montfort University and City University London.

This blog post was written by Professor Eerke Boiten, Professor of Cyber Security at the Cyber Technology Institute, De Montfort University, Leicester.

Cyber security of ICS/SCADA systems is a major aspect of current research in the cyber community. Here at the Cyber Technology Institute, we have developed CYRAN – a hybrid cyber range combining physical and virtual components that provides an ideal environment for hands-on cyber warfare training, cyber resilience testing and cyber technology development.

A key challenge in Cyber Security training is the ability to perform practical exercises in a realistic environment, especially for areas where the ability to incorporate real equipment is almost non-existent.

To this end, the Cyber Technology Institute at De Montfort University have created the CYRAN cyber range. CYRAN has been developed utilising a hybrid approach, combining virtualised components with actual physical hardware. This includes the capacity for switches, routers, user terminals with a variety of operating systems, programmable logic controllers, human machine interfaces, geographically distributed networks and virtual private networks.

Scenarios can be developed to better represent operational environments by incorporating physical systems such as control systems and bespoke technologies, providing enhanced resiliency testing.

Once a scenario has been developed, Red vs Blue exercises (where one team attacks the system and the other attempts to identify and attribute the attacks) can be performed, highlighting areas of weakness likely to be exploited by malicious actors and assessing the level of information required for successful attribution. Tokens worth a predetermined number of points are spread throughout the scenario and are associated with particular techniques or exploits.

This approach introduces an element of competition, which can be tailored to assess the impact of differing schemes. Competition can be simply between Red and Blue, but provision exists to monitor individual points meaning competition within teams can also be assessed. Any combination of these can also be implemented; one that has proved successful in the past is to award Blue points solely to the team whilst awarding individual points to the Red team, leading to greater teamwork amongst the defenders whilst highlighting individuality for the attackers.
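The scoring schemes described above can be sketched as a small data structure (the class and field names here are hypothetical illustrations, not CYRAN’s actual implementation):

```python
# Hypothetical sketch of configurable Red vs Blue token scoring:
# a per-team scheme decides whether a token capture credits the
# team, the individual player, or both.

from collections import defaultdict

class Scoreboard:
    def __init__(self, scheme):
        # scheme maps team name -> "team", "individual", or "both"
        self.scheme = scheme
        self.team_points = defaultdict(int)
        self.individual_points = defaultdict(int)

    def capture(self, team, player, token_points):
        mode = self.scheme[team]
        if mode in ("team", "both"):
            self.team_points[team] += token_points
        if mode in ("individual", "both"):
            self.individual_points[player] += token_points

# The mixed scheme mentioned above: Blue scores only as a team
# (encouraging teamwork), Red players score individually.
board = Scoreboard({"Blue": "team", "Red": "individual"})
board.capture("Blue", "defender1", 10)
board.capture("Red", "attacker1", 25)
print(board.team_points["Blue"], board.individual_points["attacker1"])  # 10 25
```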

A key component of a scenario is the White team; not only do they ensure the smooth running of the event providing hints or extra information when necessary, but they can also take on the role of other members of an organisation to increase the realistic demands of a situation.

With CYRAN, we can provide attendees with practical and technical skills as well as the experience of working with others within a simulated scenario. It is also easy to create and add new scenarios in order to tailor the training to the specific needs of organisations.