The Spanish Guardia Civil has chosen Chemring Technology Solutions’ Perception Cyber Security to protect its critical network assets from cyber-attacks, as well as to identify malicious insiders and other vulnerabilities within the network. The new contract follows a successful product evaluation by Perception and its Spanish partner Eleycon21.

Perception was originally developed for the UK Ministry of Defence and is the world’s first bio-inspired network security system. Once deployed, Perception will complement the Guardia Civil’s existing computer network security systems by identifying the potential threats they cannot.

Eleycon21 distribute and support the Perception product throughout Spain. Gabriel Crespo, Managing Director of Eleycon21, said: “Perception offers a ground-breaking approach to identifying advanced cyber threats and it will deliver the Guardia Civil a distinct advantage. We are therefore delighted to be partnering Perception Cyber Security in the delivery and support of its technology in Spain.”

As Perception is a network behaviour analysis system, it has no rigid “rules-based” architecture and adapts to the network’s changing profile to automatically identify malicious activity, making it more difficult for malware to evade detection. It will also detect the slow, unauthorised external extraction of information from the network, even when sophisticated obfuscation techniques are used.
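To illustrate the idea of slow-extraction detection (this is a toy sketch, not Perception's actual algorithm), one approach is to keep a running total of outbound bytes per internal/external host pair and alert once the cumulative volume crosses a quota, even though no individual transfer is large. All names, addresses, and thresholds below are invented:

```python
from collections import defaultdict

class SlowExfilMonitor:
    """Toy detector for low-and-slow exfiltration: accumulates outbound bytes
    per (internal host, external destination) pair over a long horizon and
    alerts once the running total crosses a quota, even when every individual
    transfer looks small. Illustrative only."""

    def __init__(self, quota_bytes=100 * 1024 * 1024):
        self.quota = quota_bytes
        self.totals = defaultdict(int)

    def record(self, src, dst, outbound_bytes):
        """Add one observed transfer; return True if the pair's total now
        exceeds the quota."""
        key = (src, dst)
        self.totals[key] += outbound_bytes
        return self.totals[key] > self.quota

mon = SlowExfilMonitor(quota_bytes=10_000)
alerts = [mon.record("10.0.0.5", "203.0.113.9", 500) for _ in range(25)]
print(alerts[-1])   # 25 transfers x 500 bytes = 12,500 bytes -> quota exceeded -> True
```

A real system would also need to age out old totals and whitelist legitimate bulk destinations; the point here is only that aggregation over time is what defeats "slow" exfiltration.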

Daniel Driver, Head of Perception Cyber Security, said: “Eleycon21 has an in-depth knowledge of the dangers posed by today’s more sophisticated network security threats, and they are committed to ensuring that Spain’s leading organisations have the robust cyber protection required to combat them. Their work alongside Guardia Civil in deploying Perception is a demonstration of their commitment to this endeavour and we are delighted to support them.”

Any information security strategy must be defined to support the growth and direction of the organisation. It should consider all the risks that may impact the organisation and set out how to mitigate them. Today these risks are far more diverse and varied, and as such a mix of technical and non-technical controls is required to safeguard the business, its data, and its ability to operate. It is critical to develop a strategy that mitigates or transfers as much risk as possible while keeping cost and disruption reasonable. As a result, multiple different security measures need to be combined to mitigate the relevant risks as efficiently as possible. Every measure will naturally have its blind spots and weaknesses, and each of these must be covered by another system. Understandably then, when setting up a network security system, the risks, threats, and impact must be understood in as much detail as possible, and controls applied only where it makes financial sense and/or there is a regulatory demand.

So we have a multi-product, layered approach to network protection, but there are still serious questions that must be asked when deploying these solutions across physical security, technical security, and administrative measures. This article collates some of the questions that are easily forgotten during this process.

Physical

Physical controls are a first line of defence and range from access controls such as doors, locks, passwords, signage, and security guards to site facilities such as power, HVAC, and resilient services to ensure that service remains uninterrupted.

Do I know who is accessing my physical network?

In many businesses it is all too easy to walk into a room and plug into a spare RJ45 wall socket, potentially gaining a vantage point into your network. It is important to understand what is patched where, and to properly disconnect or limit access to physical connections. In some cases a physical audit may be necessary to verify that what you think is plugged in actually is.

Do I have a way of controlling access to my physical network?

Nearly every IoT device seems to have a connection to the internet these days, and many devices have a physical RJ45 network connection. Smart TVs, for example, often beacon back home with potentially sensitive information. It is important to have some form of policy on the connection of new devices to your network, which may include a risk assessment of what the device has access to and whether it should be allowed at all.
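As a minimal sketch of such a policy check, assuming a simple approved-device inventory keyed by MAC address (the addresses and device names below are invented), newly observed devices can be compared against the approved list:

```python
# Sketch: compare MAC addresses seen on the network (e.g. from a switch's
# MAC address table) against an approved device inventory.
# The inventory format and all addresses here are hypothetical.
approved_inventory = {
    "aa:bb:cc:00:00:01": "reception-pc",
    "aa:bb:cc:00:00:02": "print-server",
}

def unknown_devices(observed_macs):
    """Return the MAC addresses seen on the wire that are not in the
    approved inventory, sorted for stable reporting."""
    return sorted(set(observed_macs) - set(approved_inventory))

seen = ["aa:bb:cc:00:00:01", "de:ad:be:ef:00:99"]
print(unknown_devices(seen))   # -> ['de:ad:be:ef:00:99']
```

In practice MAC addresses can be spoofed, so a check like this supplements rather than replaces port-level controls such as 802.1X.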

How would I know if physical security measures have been breached?

This is a difficult question to answer, but the best way to test how prepared you are is to ‘red team’ your site: invite teams of people into the business to see how much of it they can access, what information they can get out of the organisation, and how far an unauthorised person can get within your site before you are alerted to their presence. Beyond these tests, it is important to understand how you could tell whether someone who shouldn’t be on your site is there, whether by detecting them accessing your IT infrastructure or by physically detecting them walking around.

Technical

Technical controls, whether active or passive, can be implemented to enforce, monitor, and understand an environment. In modern businesses the biggest risk is often the loss of data or service on IT systems, which means businesses will focus on IT-related technical controls: firewalls to protect the perimeter, IPS/IDS to identify attacks, proxy servers to monitor and control internet usage, and endpoint protection to defend user devices, whether from loss, attack, or intentional deviation from policy.

How many technical controls do I really need?

The range of available technical controls is vast, and the degree of active enforcement depends on the risk appetite and policies of each organisation. How many are deployed largely rests on balancing risk against investment. One pragmatic approach is to deploy more than expected initially, then review the deployment, assess how much value each system is delivering, and work backwards from there.

Which layers of security require technical controls?

Technical controls can be used at all layers of network security: active preventative controls that stop a detected threat; containment, which may identify a threat and quarantine it; detection and reporting, to allow for analysis; and recovery and restoration, should it be necessary. Network monitoring systems can complement these controls by passively detecting and monitoring network behaviours. Analysts can use this data to better understand the actions of a device or user, to identify risks and proactively mitigate them, and to establish what has happened should an incident occur.

Administrative

Administrative controls can have a massive impact on the effectiveness of an information security strategy, but how effective they are varies greatly across organisations depending on how they are implemented.

To what extent can administrative controls remove the need for technical controls?

Deploying policies can remove the need for a number of technical controls. However, some policies can be enforced using technical measures such as Group Policy (e.g. change your password every 30 days), while others cannot be enforced by technical systems (e.g. no system changes during the Christmas shutdown).
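The technically enforceable kind can be illustrated with a toy password-age check, assuming a hypothetical record of when each user last changed their password (the usernames and dates below are invented):

```python
from datetime import date, timedelta

MAX_PASSWORD_AGE = timedelta(days=30)   # mirrors a "change every 30 days" policy

def stale_passwords(users, today):
    """Return usernames whose password is older than the policy allows.
    `users` maps username -> date the password was last set (hypothetical data)."""
    return [u for u, last_set in users.items() if today - last_set > MAX_PASSWORD_AGE]

users = {
    "alice": date(2017, 6, 30),   # changed 7 days ago -> compliant
    "bob": date(2017, 5, 1),      # changed 67 days ago -> in breach
}
print(stale_passwords(users, today=date(2017, 7, 7)))   # -> ['bob']
```

A directory service such as Active Directory enforces this automatically via Group Policy; a script like this is only useful for auditing systems that lack such enforcement.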

Do I have a way of knowing when administrative controls aren’t effective?

Deploying solutions that reveal how many users are not adhering to training, or how many policies are being breached and how regularly, can point you towards simple measures such as retraining or policy renewal to improve information security. A network monitoring system that shows how many people are breaching policy can, for example, tell a system administrator that additional technical controls are needed to stop those breaches. A good example is monitoring the use of cloud storage services that breach policy: if this is happening often, perhaps it’s time to deploy a private cloud storage solution.
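As a simple sketch of this kind of monitoring, assuming proxy logs reduced to (user, domain) pairs and an illustrative list of cloud-storage domains disallowed by policy:

```python
from collections import Counter

# Hypothetical list of consumer cloud-storage domains disallowed by policy.
DISALLOWED = {"dropbox.com", "drive.google.com", "onedrive.live.com"}

def policy_breaches(proxy_log):
    """Count policy-breaching cloud-storage requests per user, given proxy
    log entries of the form (user, domain)."""
    return Counter(user for user, domain in proxy_log if domain in DISALLOWED)

log = [("alice", "dropbox.com"), ("alice", "intranet.local"),
       ("bob", "drive.google.com"), ("alice", "dropbox.com")]
print(policy_breaches(log))   # -> Counter({'alice': 2, 'bob': 1})
```

A per-user count like this is exactly the signal that distinguishes a one-off mistake (retrain one person) from an organisation-wide habit (change the policy or provide a sanctioned alternative).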

This post was originally created for assessors, organisers, and participants of the challenge. If you'd like to be sent an electronic copy containing full-size images, please contact us.

Overview

Roke Manor Research Limited hosted a Cyber Security Challenge event on Friday 7th July 2017 in which a scenario was created for teams of participants to understand the vulnerabilities in a fictional company’s internet-of-things products. In order to understand the events of the day, the Perception Cyber Security team were asked to deploy a Perception sensor on the challenge network to record all network activity during the challenge, both live as it happened and for later analysis.

The purpose of this document is to describe the activities seen by Perception throughout the course of the challenge, as a way of demonstrating the simplicity and coverage of a Perception deployment.

About Perception

System

Perception is a network security tool designed to give an analyst complete visibility of their network and potential threats that they may face. Perception was initially designed by Roke Manor Research Limited (Roke) for the Defence Science and Technology Laboratory (Dstl), part of the UK Ministry of Defence (MOD), in order to detect anomalies on a network. After successfully trialling the prototype systems, Perception was developed into a full product that combines multiple cutting edge technologies with the original anomaly detection system to provide one of the most advanced network security capabilities in the world.

Perception can be broken down into 3 distinct parts:

Data Collection

Using data collection technology initially developed by Roke for Lawful Intercept (LI) purposes for law enforcement agencies, Perception collects and analyses all network traffic at the core of the network at very high speeds. This ensures the system has the best data pool to work from in order to make logical decisions later on. Although an analyst is unlikely to pore over this low level information, this information is available to the user for analysis and incident response activities.

Behavioural Classification

By using Roke's expertise in cyber research for national security agencies worldwide, behavioural classifiers were developed that would understand the context of communications passing over the core of the network. This is done by using a combination of anomaly detection, deep packet inspection, and database querying, rather than a single technology. Looking at traffic behaviourally, rather than using signatures of known threats, is useful because it allows the system to identify threats without any prior knowledge of how they work. The user is able to see a complete list of behaviours on the network in order to understand what may be threat like, indicative of misconfigurations, or indicative of vulnerabilities.

Artificial Intelligence

The final part of Perception is an Artificial Intelligence (AI) that constantly looks for correlations between the behaviours being stored on the system. This AI is constantly being updated to mimic the activities of an analyst, in order to automatically and immediately identify links between multiple behaviours in order to detect vulnerabilities and threats. This AI vastly reduces the time burden on analysts who would normally have to manually find linked behaviours, and allows Perception to alert with a very high detection rate and an incredibly low false alarm rate.

By combining these key technologies, Perception can rapidly draw a user's attention to indicators of threats, compromise, and vulnerabilities so that network security issues can be addressed before they become a serious problem. The behavioural nature of the system allows Perception to detect zero-day threats without any prior knowledge of the malware, as well as detecting user error or malicious user behaviour that provide significant detection problems for firewalls and antivirus systems. The ability for an analyst to identify misconfigurations or vulnerabilities represents a general theme within the network security industry to move towards a more proactive approach to the problem of protecting networks, closing vulnerabilities before they are exploited by an attacker, rather than just responding to threats as they happen.

Perception and the Cyber Security Challenge

The Challenge itself is one of many events set up by the Cyber Security Challenge UK organisation (www.cybersecuritychallenge.org.uk/), a not-for-profit organisation with the aim of bolstering the national pool of cyber skills. As sponsors of the event, Roke agreed to host this particular event, testing 42 participants from around the UK. Challengers were selected from a larger group of applicants who had successfully completed pre-event challenges, none of whom were working in the network security industry at the time.

The Roke organisers of the Cyber Security Challenge Face to Face (F2F) event contacted the Perception team to discuss using Perception as the “all seeing eye” overseeing the challenge as it took place. With the high volume of hacking activity taking place on the day, it was vital for the assessors to have a tool that could quickly identify and home in on participants’ actions. The assessors were tasked with ensuring the rules of engagement were adhered to and that claimed courses of action could be validated, and Perception was used to carry out this task.

The Perception team were keen to exercise Perception on a network with such a high volume of potentially malicious activity. They were also interested in better understanding which behaviours would be triggered where Internet of Things (IoT) devices were deployed. Based on the brief provided by the Cyber Security Challenge team, the Perception team’s main objective was to alert the assessors to any rule breaking as it happened, demonstrating Perception's capacity for proactive detection. In addition, the Perception team sought to provide detailed post-event analysis of activities carried out during the day, giving the assessors the evidence needed to back up claims made by the participants.

Differences from a Real-World Deployment

The Cyber Security Challenge was an unusual deployment for Perception, which is typically deployed within the networks of commercial organisations. Whereas normally Perception would sit at the core of a network with multiple users carrying out their normal day-to-day business, in this scenario it was deployed on a tiny network hosting a large number of hackers, a number of IoT devices, active malware, and no normal user traffic. Although it is worth noting the difference between this scenario and a standard Perception deployment, there are salient commonalities and threat scenarios present both in Perception's natural habitat on a commercial network and in the Cyber Security Challenge F2F's infected, hacker-dense, IoT-focussed network.

Firstly, the activity of a large number of hackers allowed Perception to prove that it would handle detecting all of the activities of the attackers and active malware, rather than a single attacker or piece of malware. Although Perception is well-suited to networks of all sizes, it is very seldom deployed on networks with the presence of multiple malicious actors simultaneously and this gave it the opportunity to demonstrate that even in extreme circumstances Perception could still handle the accurate detection of multiple threat sources. In real networks there have been instances of an attacker infecting a high number of devices simultaneously in order to cause maximum damage or to hide their true intentions, and it is important to the Perception team as designers of network security systems to demonstrate they have a tool that can handle these types of scenarios.

Secondly, IoT devices are usually thought of as being used in homes rather than businesses. This is only partly true: a huge number of businesses deploy network-attached devices such as smart TVs, IP cameras, and access control systems in their offices. The management of these devices is usually seen as the responsibility of a facilities department, which typically means they aren't subject to the same security and software update controls that would be enforced by an IT team. Even amongst Perception’s current customers it has detected IoT devices running old, potentially vulnerable software, and as IoT devices become more mainstream this is only going to become more of an issue. It is very common in today’s security landscape for an IoT device to be the first point of infection within a network, due to poor design, a relative lack of security updates, and the inability to install anti-virus software on the devices themselves. The Cyber Security Challenge F2F event was a perfect opportunity to show that Perception can identify these types of threat where other protections aren’t suitable.

The logistics of running a challenge of this type also raised some minor differences, for example the networks themselves were not connected to the internet, providing a safe environment for the event. Participants were only allowed to use the provided Internet laptop for research on a separate Internet connected network outside of the challenge network.

About the Cyber Security Challenge Face to Face Event

Setup

The Cyber Security Challenge F2F event was a day-long event based around a smart home. A fictional IoT device manufacturer, EKOR, had heard reports that some of its devices were less secure than initially thought. During the course of the day, a malicious actor would compromise a home network and use these exploits to physically break into the home by hacking the smart lock on the front door. The attacker would achieve this by exploiting a vulnerable server and then using a separate vulnerability in the update mechanism to deploy malware to the IoT devices in the victim’s home.

7 teams of 6 participants each were ‘hired’ by EKOR to find vulnerabilities in its systems and feed back to EKOR on what it should do to solve the issues, preferably before the attacker gained access to the home. The participants were briefed that EKOR suspected there were vulnerabilities in its products, but had no information on what activity would happen on the day. Their work would include examining the EKOR network and how the smart devices worked in order to understand what the vulnerabilities might be. The teams were against the clock to get the information to EKOR, as there was a set time at which the attacker was going to break into the home.

Perception Deployment on the Cyber Security Challenge Network

Each of the 7 teams had 6 laptops to work on (one for each participant) and a scaled down version of EKOR’s smart home products, a hub, a light, a door lock, and a camera. All of these devices were connected to a switch specific to that team. Finally, the teams were given an internet connected laptop which was separate from their switch so they could look up anything they needed to. The seven team switches then fed into an 8th ‘core’ switch.

Simulated EKOR internal servers and other simulated external servers were also connected to each team switch to give the illusion of a real world network, as well as to facilitate the activity planned for the day. This gave the teams a realistic environment to work with while ensuring isolation of all the teams. Other than the separate internet connected laptops, the challenge was conducted on a standalone network with no connection to the Internet. Any IP addresses/domain names/etc. used for the ‘external’ devices are purely fictional.

The Perception sensor took a SPAN feed from the core switch, meaning it could monitor all activity on the network. The Perception sensor then used a virtual private network (VPN) to communicate with the Perception Central Correlation Server (CCS) which aggregated behaviours and displayed them in the UI for the Perception team to view. The CCS can be hosted locally or remotely, however in the interest of keeping the challenge network as simple as possible, it was decided to deploy remotely and communicate via a VPN in this instance.

A live stream of the Perception UI was displayed in the lobby of the event location alongside the assessors, allowing rapid communication between the Perception team and the assessors about breaches of the event’s rules and the teams’ progress during the day.

As it Happened

Morning

Stage 1

During the first stage, teams were asked to use provided tools and documentation to gain an understanding of the EKOR network. They needed to request access to certain compressed (.zip) files and packet capture (.pcap) files which gave vital information about their network. Using these files they should have gained a good understanding of the devices as well as how the network behaves. The packet captures provided were designed to give the teams an indication of which servers might be vulnerable.

Perception Analysis

During this stage Perception discovered data being transferred from EKOR servers to the teams’ devices as the teams downloaded the packet captures and the .zip files. By analysing this data the Perception team could see which teams were further ahead and which needed further guidance. The judges also used this information to identify who had broken the rules of engagement by downloading information before being granted permission. Data was transferred from the EKOR servers to the teams via an unencrypted service, HTTP over port 80. Since Perception captures a sample of the packets passing across the sensor, it was possible for the analyst to view the actual file details and confirm their contents.

Figure 1: This screenshot of Perception’s UI shows .zip files and packet captures being downloaded by one of the teams

1) These micro-controls in the header provide a quick reference to the key metrics for the event such as source and destination of the data transfer, the number of sessions over which the transfer was made and the data volumes in both directions. The button on the far right downloads the actual packet captures so they can be viewed in a packet analysis tool such as Wireshark.

2) This Data Transfer diagram shows the direction of the connection, the service used for the transfer (HTTP port 80) and the number of sessions used between the source machine (left green box) and the destination machine (right green box). The larger orange bar shows the high volume of data downloaded relative to the low upload volume indicated by the thin blue bar.

3&4) These bar charts show the volumes and duration metrics of the transfer. These charts are particularly useful when analysing data transfers over multiple sessions.
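The kind of check an analyst might script over captured plaintext HTTP requests like these can be sketched as follows; the request lines are invented examples, not the event's actual traffic:

```python
# Sketch: spot .zip/.pcap downloads in captured plaintext HTTP request lines.
INTERESTING = (".zip", ".pcap")

def downloaded_files(request_lines):
    """Extract requested paths of interest from raw HTTP request lines of the
    form 'GET /path HTTP/1.1'."""
    hits = []
    for line in request_lines:
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "GET" and parts[1].lower().endswith(INTERESTING):
            hits.append(parts[1])
    return hits

requests = [
    "GET /docs/network-overview.pcap HTTP/1.1",
    "GET /index.html HTTP/1.1",
    "GET /docs/firmware.zip HTTP/1.1",
]
print(downloaded_files(requests))   # -> ['/docs/network-overview.pcap', '/docs/firmware.zip']
```

This only works because the transfers ran over unencrypted HTTP; over HTTPS the same visibility would require metadata-level analysis rather than payload inspection.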

Stage 2

Teams were then given disk images that they could run, only some of which had been infected with malware. This should have given the teams some idea as to what the malware does as it becomes active on the network. The teams were then allowed access to EKOR’s software code base, allowing for manual code review to look for vulnerabilities in the systems. The malware would connect out to an external Command and Control (CnC) server to receive instructions on what to do. Over a half-hour period the malware began to turn the lights of each team’s scaled-down EKOR products on and off. This should have indicated to the teams that the malware was present on the network as well as indicating which devices were infected.

Perception Analysis

Once the malware became active on the network Perception saw connections to the CnC server. This allowed the Perception team to get an understanding as to which devices were infected. On a real world network this information would be an indicator of compromise, enabling the analyst to gain an understanding of which other devices were connecting out to the malicious server, and therefore which devices had been infected.

Figure 2: This screenshot of Perception’s UI shows a behaviour that indicates an internal device has connected to an external device, in this case a compromised device connecting to the malicious CnC server.

1) From these micro-controls in the top bar it is easy to identify the source and destination IP addresses, device hostnames, the service (port) being communicated with, and the number of other hosts talking to that same service.

2) This network diagram shows a source host (black circle) on the internal (trusted) network communicating with a destination host (red dot) on the external (untrusted) network. This diagram also shows a number of other hosts on the internal network, (green dots) also communicating with this external device. This is useful for quickly identifying which other devices have connected to this external host.

3) This summary information here identifies the key attributes for the main communication between the internal and external hosts, namely the IP addresses, hostnames, number of sessions and number of other hosts connected to the same destination.
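One common way to spot CnC connections of this kind, sketched here in simplified form (the thresholds and timings are illustrative, not Perception's method), is to look for suspiciously regular connection intervals from one device to a single external host:

```python
from statistics import mean, pstdev

def looks_like_beaconing(timestamps, max_jitter=2.0):
    """Return True if connection times (seconds) to one destination are
    suspiciously regular -- low jitter between successive intervals is a
    common command-and-control indicator. Thresholds are illustrative."""
    if len(timestamps) < 4:
        return False          # too few samples to judge regularity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(intervals) < max_jitter and mean(intervals) > 0

# Hypothetical connection times from one device to one external host
print(looks_like_beaconing([0, 60, 120, 181, 240]))   # near-perfect 60s cadence -> True
print(looks_like_beaconing([0, 12, 95, 110, 400]))    # irregular browsing -> False
```

Real malware often adds deliberate jitter or sleeps for long periods, so production detectors combine timing regularity with other signals such as destination reputation and payload size consistency.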

Afternoon

Stage 3

The first task of the afternoon was to begin penetration testing. Penetration testing is the name given to the process of actively testing devices for potential vulnerabilities. Teams were supplied with rules of engagement and were expected to ask for permission before actively communicating with the devices under assessment. This is typical in a penetration test to ensure there is no unwanted impact on service. Permission was granted, providing a narrow subnet of 10.31.0.0/26 to test against. This stage consisted of a lot of information gathering from the teams using techniques such as port scans to identify active systems within that subnet and also what services they may be running.

Perception Analysis

During this stage Perception reported, somewhat unsurprisingly, a large number of scanning activities within the specified subnet. In addition, Perception was able to spot teams scanning wider than the subnet specified by EKOR, along with any teams scanning before EKOR had given permission for penetration testing to begin.
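The scope check itself is straightforward to illustrate with Python's standard ipaddress module, using the 10.31.0.0/26 range from the rules of engagement (the target addresses below are invented):

```python
import ipaddress

ALLOWED = ipaddress.ip_network("10.31.0.0/26")   # the subnet EKOR authorised

def out_of_scope(scanned_ips):
    """Return the scanned addresses that fall outside the permitted range."""
    return [ip for ip in scanned_ips if ipaddress.ip_address(ip) not in ALLOWED]

targets = ["10.31.0.17", "10.31.0.62", "10.31.0.70"]
print(out_of_scope(targets))   # -> ['10.31.0.70'] (beyond the /26, a rules breach)
```

A /26 covers hosts 10.31.0.0 through 10.31.0.63, which is why 10.31.0.70 is flagged while 10.31.0.62 is not.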

Figure 3: This screenshot from the Perception UI shows a behaviour that indicates a port scan has occurred.

1) These micro-controls in the top bar show the key information about this behaviour, the source, number of destinations – separated by the defined network range, and data volumes.

2) This network diagram shows an internal host (black circle) communicating with one other internal host (green dot) on 503 unique sockets (number on the green line) using over 500 ports (number in the green dot).

3) This summary information shows the overall number of sessions generated by this host is 999, all of which were reset by the server.

4) This details table shows each scanned port as a separate row as the source device cycles through the available port range on the destination. The analyst can easily use this table to identify any ports that elicit a response by sorting the table by TCP Flags B>A.
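That sorting step can be sketched as a simple filter over (port, flags) rows mirroring the “TCP Flags B>A” column; the data here is invented:

```python
# Sketch: find the ports that answered a scan. Each row is (port, flags_b_to_a).
# A SYN-ACK reply indicates an open port; an RST indicates a closed one.
def responding_ports(rows):
    """Return, sorted, the ports where the server replied SYN-ACK (open)
    rather than RST (closed)."""
    return sorted(port for port, flags in rows if flags == "SYN-ACK")

rows = [(22, "SYN-ACK"), (23, "RST"), (80, "SYN-ACK"), (443, "RST")]
print(responding_ports(rows))   # -> [22, 80]
```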

Stage 4

EKOR released more packet captures to participants that displayed activity from the malware, allowing the teams to gain more information about the malware and the CnC server. Some of the teams may already have been aware from the captures that there were more systems in the upper part of the subnet range. At this point teams were expected to request authorisation to start penetration testing the wider subnet, having discovered that there might be a vulnerable server outside the permitted range. Once requested, EKOR gave permission to scan the 10.31.0.0/25 subnet. If teams had not found these vulnerable devices, EKOR eventually requested that a wider subnet be penetration tested. On the wider subnet there was a legacy server that had not been disconnected from the network when a replacement server was commissioned. The legacy server contained vulnerabilities that allowed it to be exploited, letting an attacker steal the password database for offline brute forcing. EKOR had used the same administrator password on both servers, so by gaining access to the legacy server, the attacker could use the information learned to access the new server.
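The password-reuse weakness can be illustrated with a toy comparison of two credential databases (simplified here to unsalted digests; real databases salt their hashes, so reuse is typically found by cracking rather than direct comparison, and all usernames and passwords below are invented):

```python
import hashlib

def shared_credentials(db_a, db_b):
    """Return, sorted, the usernames whose password hash is identical in
    both databases -- i.e. the same password is reused on both servers."""
    return sorted(u for u in db_a if db_b.get(u) == db_a[u])

def h(pw):
    """Toy unsalted digest, for illustration only."""
    return hashlib.sha256(pw.encode()).hexdigest()

legacy = {"admin": h("Spr1ngtime!"), "svc": h("old-service")}
current = {"admin": h("Spr1ngtime!"), "svc": h("new-service")}
print(shared_credentials(legacy, current))   # -> ['admin']
```

This is exactly the failure mode in the scenario: once the legacy server's database is stolen and cracked offline, the reused administrator password opens the replacement server too.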

Perception Analysis

Perception observed the data transfer of the .pcap files being downloaded from EKOR’s file share at the point these were released by EKOR. Perception then raised similar events to the last stage indicating scanning activities but this time on the wider subnet. These events were used to verify with the judges if teams had prior permission to run the scans on the wider subnet.

End of the Challenge

The last stage of the challenge was for the participants to verbally present their findings to EKOR. These would have included information about the vulnerable server, the malware deployed, and urgent remediation activity required to solve the issue.

Rule-breaking

Throughout the day Perception monitored the network to ensure that all teams followed the rules, and to help with the scoring of teams. Examples of behaviours spotted included port scans before permission was given, scanning of systems that did not belong to EKOR, and opening downloaded files without requesting their passwords.

- Each team was required to gain permission from EKOR before scanning any device on the network. During the morning it was not expected for any team to be scanning the network, their aim was to gain information from the documentation provided by EKOR. Throughout the day there were a lot of scanning activities taking place that were captured by Perception. This allowed for checking that the teams generating these behaviours had asked for relevant permissions. Some teams had asked EKOR’s permission to run some scans but were only given permission to a small subnet of IP addresses. This meant Perception saw two types of rule breaking, scanning without permission and also scanning a wider subnet than permitted.

- Teams were allowed to download documents from the EKOR file server to help them throughout the task, however these documents were password protected and access to them required asking EKOR for the passwords. This allowed the Perception team to check with the judges whether teams had asked for the passwords once they had downloaded the files. If these passwords had not been requested it may be assumed the teams used different means to open the document and thus broke the rules of engagement.

- Some teams also began to scan a domain name service (DNS) server that did not belong to EKOR. This would have broken the penetration testing rules of engagement. Perception raised this event which was then forwarded to the judges giving them valuable information that they may not have had access to otherwise. The participants of this event were not working full time in network security roles at the time, and perhaps would not have been used to the stringent rules network security professionals are subject to in the real world. Actively trying to detect vulnerabilities in devices where the owner has not granted permission (such as this DNS server) is an offence under the Computer Misuse Act.

Figure 4: This screenshot from the Perception UI shows a behaviour that was generated when a team started port scanning an external DNS server

1) This behaviour is almost identical to the other port scan shown in Figure 3, however the IP address here shows that this port scan was carried out on a device outside of EKOR’s network range.

Actions of malicious third parties on the network

Perception identified behaviours from third-party hosts early on in the task. These included CnC connections from the infected IoT devices once the malware became active on the network; Perception raised events indicating which IoT devices had connected to the third-party CnC server, showing that at least one device from each team had connected out to it. If the participants had had access to a Perception device, they would have been able to verify instantly which devices were connecting to the CnC server, and therefore which devices were compromised, substantially reducing the time taken to investigate the problem. Likewise, if EKOR had used Perception as a vulnerability detector, it is unlikely these issues would have remained open for long, since Perception is designed to draw attention to vulnerabilities before they are exploited.
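The correlation described above boils down to matching connection records against a known CnC address. As a rough, hedged illustration (device names and IPs below are made up; 203.0.113.0/24 is a reserved documentation range):

```python
# Hypothetical connection records observed on the network: (source device, destination IP).
CONNECTIONS = [
    ("team1-iot-cam", "203.0.113.50"),
    ("team2-laptop", "93.184.216.34"),
    ("team2-iot-therm", "203.0.113.50"),
]

# Assumed known CnC server address for this sketch.
CNC_SERVERS = {"203.0.113.50"}

def compromised_devices(connections, cnc_servers):
    """Return the sorted list of devices that contacted a known CnC server."""
    return sorted({src for src, dst in connections if dst in cnc_servers})
```

A real system correlates far richer metadata than a destination IP, but even this skeleton shows why the investigation time collapses: the list of compromised hosts falls straight out of the traffic records.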

Conclusion

The Cyber Security Challenge as a whole was a huge success. The organisers were pleasantly surprised by the outstanding capabilities of the participants, and the event as a whole suggests a bright future for network security professionals within the UK. The format of the event, based around the growing threat of IoT devices, was a welcome change from similar events held in the past and tested aspects of the participants’ capabilities that perhaps hadn’t been scrutinised before. Although some rules were broken along the way, the event gave the participants an opportunity to make these sorts of mistakes in a ‘safe’ environment while they hone their skills as security professionals.

Teams were scored on good behaviours shown and marked down where necessary when rules of engagement were broken. Perception assisted the judges in making these decisions by providing definitive proof of the activity that occurred and identifying the teams involved. In cases where teams denied any rule breaking, Perception was able to provide a record, often in the form of actual packets collected from the network, showing that they had. Perception observed a huge number of behaviours on the network and correlated them into an actionable format so that its users could work efficiently. Perception performed very well in this environment thanks to its ability to begin identifying behaviours and generating alerts almost immediately with little or no configuration; this was vital given the short duration of the event.

Alex Collins, who helped organise the event for Roke, commented: “Perception provided excellent network visibility throughout Roke’s Cyber Security Challenge. Perception discovered the malware on the compromised devices and enabled us to quickly detect, investigate, and understand the activities of contestants throughout the day, as they tried to assess the security of our fictional Internet of Things product line and services.”

The Perception team would like to extend their sincerest thanks to the Cyber Security Challenge for the event itself and the provision of assessors, as well as to Roke Manor Research Limited for putting the F2F event together; we know what an epic task this was. They’d also like to give their utmost congratulations to all the participants: their skillset was truly incredible, even more so given their relative lack of experience in the field, and it was an absolute pleasure to spy on you all for the day.

In May we wrote a simple explanation of the WannaCrypt malware, and part of that article described how the exploit behind the self-replicating worm that made the malware so prolific was developed by the US NSA for national security purposes. Creating malware as a weapon to be used by governments raises some significant security issues that need to be looked at closely, especially against the backdrop of national security.

Weaponised Malware

What's the big problem?

What?

Weaponised malware refers to malicious software tools created to attack network assets. Many security researchers consider ‘weaponised’ a misnomer; even the NSA itself stated that the exploits it created were purely for surveillance purposes. Although this may very well be true, the fact that malicious technology was created and could theoretically be used in an attack scenario is more than enough justification for the term.

Why?

The question of why a government needs weaponised malware is one that anyone outside of national security services is unlikely to be able to answer. Without understanding the risks faced by a given agency, we cannot properly judge the scale of the countermeasures used against them. Many groups have speculated about the exact nature of these threats, but as cyber security professionals rather than socio-political experts we are not in a position to discuss those aspects in an informed way. Suffice to say that agencies such as the US NSA consider the development and deployment of weaponised malware a valuable asset in their armoury.

So What?

Asking why we should care if this is going on is a perfectly reasonable question, and one that was answered quite pointedly by the WannaCrypt attacks in May. That attack, which caused millions of pounds’ worth of damage worldwide and potentially put thousands of lives at risk in NHS hospitals in the UK, was the result of malware developed by the NSA. Weaponised malware is something we should all care deeply about, and its effects are only going to get more damaging as we move to a more and more connected world. If governments are to develop weapons of any type, should they be deployed if they could potentially cause damage on this scale?

What Needs to Change?

Judging by the sheer number of exploits leaked last year by the ShadowBrokers, a lot needs to change. Weaponised malware should be thought of like any other weapon: kept under lock and key and only used by those authorised to do so. The unique nature of software weapons, however, makes this problem far more difficult than with any other type of weapon. Software can be stolen without removing the original, and it can all be done remotely. Governments cannot simply hide weaponised malware as they do other weapons; it is not a nuclear warhead that can be placed on a submarine, or a rifle locked in an armoury. How easily a weapon can be stolen must factor into how it is secured. Do governments need to invest as much in the network security of weaponised malware as they invest in the secrecy surrounding the locations of nuclear weapons?

The NSA got off easy when the ShadowBrokers leaked the malware. Under no circumstances can cutting-edge weaponry be allowed to fall into public hands, regardless of whether it is software or not. The scale of this leak was not fully appreciated until the WannaCrypt attacks occurred, and even then, if a more capable attacker had wanted to cause real damage, it could have been far worse.

The more contentious question is whether the developers of weaponised malware should inform the creators of target systems of the vulnerability. That is to say, if a government agency discovers a flaw in a piece of software that allows it to attack an enemy, should it tell the manufacturer of that software immediately? On one hand, disclosure would essentially sterilise the malware, destroying its value as a weapon (as happened when Microsoft released patches for the versions of Windows affected by WannaCrypt); on the other hand, networks around the world are made more secure by having fixes rolled out via updates. This is the dilemma that national security agencies worldwide need to consider very closely, and it is the real question at the heart of this problem. There will be instances where they fall on one side of the fence and instances where they fall on the other, and both situations will have valid arguments for and against. In the end, all we can conclude is that this issue has a very large moral grey area at its heart, and it is not one that will be going away any time soon.

Before we start: Microsoft have released an emergency patch for unsupported versions of Windows (XP, 2003, Vista, 2008) here, and in March Microsoft released a patch for supported versions of Windows that stops the exploit used in the WannaCrypt attacks; details here.

WannaCrypt

Everything you need to know

WannaCrypt (aka WannaCrypt0r, WannaCry, Wcry) is a type of ransomware that proliferated very rapidly, with reports that it had affected several high-profile organisations as of 12th May. Put simply, ransomware is an attack that encrypts files on a machine so they can’t be used, then demands a ransom be paid for them to be decrypted. These types of attacks are common, but this month’s attacks in particular are noteworthy for a number of reasons.

Typically, ransomware is what’s known as a Trojan, delivered via email, requiring hundreds of thousands (or potentially millions) of malicious phishing emails to be sent, and affecting those unfortunate enough to open the attachment or link. WannaCrypt had an additional capability: a self-replicating payload (known as a worm) that meant that once it was in a network, it was able to propagate to other machines on that network. In practice, this meant it only took one person in a business to be affected before everyone in that business was also affected. The worm could also self-replicate to other networks via the internet, depending on each network’s configuration.

There are multiple conflicting reports on whether WannaCrypt was delivered via email or another method; however, the large impact on businesses was mostly caused by the self-propagating addition to the ransomware, since many machines could be taken out of action after only one was initially infected.

The self-propagating part of the ransomware uses a vulnerability that was discovered by the US National Security Agency, who also developed an associated exploit. We do not know how long they knew about the vulnerability, but unlike security researchers, the NSA tend to keep newly discovered exploits to themselves in order to use them for intelligence activities. The particular exploit used by WannaCrypt was codenamed ‘EternalBlue’. Last year the NSA were themselves hacked by a group called the ShadowBrokers, who released details of EternalBlue to the public in April, which is why we are now seeing malicious attacks using the same methods.

WannaCrypt can affect all unpatched versions of Windows from XP to Windows 8. Microsoft had patched the vulnerability exploited by EternalBlue in March, before the exploit was publicly released by the ShadowBrokers, and in the wake of the attack Microsoft released patches for unsupported versions of Windows (it is rare for Microsoft to patch older versions of Windows, but they did so due to the large-scale impact of the WannaCrypt attacks).

Multiple organisations were affected by the attack, however it is not yet known (and unlikely we’ll ever know) whether these were targeted directly or just happened to be affected. They include Telefonica in Spain, FedEx in the US and the NHS in the UK, to name but a few. Affected businesses put remediation and disaster recovery strategies in place, such as turning off all IT equipment and rolling back to pre-attack backups; these actions were hugely costly and may result in losses of data that are not identified immediately.

As WannaCrypt started to spread uncontrollably, cyber security researchers started digging into the malware to see how it worked. One of these researchers, MalwareTech, noticed that WannaCrypt contacts an external website before activating on a victim machine; when they looked to see who owned this domain, they found it was unregistered. They thought it would be useful to register the domain so they could measure how many connections it was receiving and consequently estimate how many machines were being affected by WannaCrypt. In an odd turn of events, WannaCrypt stops running if the domain has been registered when the malware starts, therefore stopping the malware activating on internet-connected devices that were subsequently hit by it. There are many possible reasons for putting this ‘killswitch’ mechanism in malware; the leading theory is that it is a way of detecting whether the machine being infected is a test environment. Since such test environments seldom have internet connections for security reasons, the malware can hide from analysis by not activating when there is no external internet connection. By registering the domain, MalwareTech may have vastly reduced the infection rate of the initial version of the malware.
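The killswitch logic amounts to a single DNS lookup before the payload runs. A hedged sketch of the idea, using a placeholder domain rather than the real one:

```python
import socket

def killswitch_tripped(domain):
    """Return True if the domain resolves, i.e. someone has registered it.

    WannaCrypt's logic was effectively: if this check succeeds, exit without
    encrypting anything. Registering the domain therefore disarmed the
    malware on any internet-connected machine.
    """
    try:
        socket.gethostbyname(domain)
        return True   # domain resolves: stand down
    except socket.gaierror:
        return False  # domain unresolvable: proceed

# ".invalid" is a reserved TLD (RFC 2606) that can never resolve, standing in
# here for the then-unregistered killswitch domain.
```

This also shows why the test-environment theory is plausible: a sandbox with no internet connection makes the lookup fail in exactly the same way as an unregistered domain, so the malware cannot tell the two apart.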

That is not likely the end of the story for WannaCrypt. In the weeks since the initial infections were identified, variations with alternative killswitches have been created, and there are even some variations with the killswitch removed entirely. In essence, WannaCrypt is a combination of two attacks, ransomware and a self-replicating worm, and both will continue to be produced by malicious actors.

So what can we do to stop these types of attacks going forward? It goes without saying that good security practices need to be adhered to: keep software updated as soon as patches are available, and make sure not to open links or attachments we weren’t expecting to receive. From a business perspective the same advice applies, but in situations where older software must be used, for example to control systems with lifespans of several decades, there must be a method to identify the resulting vulnerabilities and protect them from attack. Tools such as Perception can identify vulnerabilities on a network before they are attacked, giving businesses the chance to protect themselves where software updates aren’t possible. If the worst does happen, these types of network monitoring tools can alert an analyst to exactly which files have been encrypted and which hosts have been affected, assisting greatly in remediation.

The number of companies in the UK investing in cyber insurance cover is rising fast, and such cover is rapidly becoming a necessity for any business. As these policies become more popular, they are also coming under more and more scrutiny, with not only the number of claims increasing but also the number of disputed or denied pay-outs. With the scope of cyber security so broad and often misunderstood, underwriters of these policies often work with far less information when pricing premiums than for other types of insurance, such as motor or health plans.

So how are these premiums calculated? Currently there are two ways: one based on a percentage of total revenue (the easy way), and the other based on the perceived risk to the business (the not-quite-as-easy way). However, with the latter only taking into account assumed reputational harm and immediate financial implications rather than quantifying the actual likelihood of a breach, there is little impetus for businesses today to improve network security in order to reduce premiums. This is the equivalent of a dangerous driver investing in more comprehensive cover rather than improving their driving, or a heavy smoker buying more health insurance instead of stopping smoking.

The situation is improving, though: underwriters are now taking more steps to understand how businesses approach network security, in order to offer better value to more secure networks. With such a major step change occurring in the fastest-growing insurance sector, how can companies prepare for the increased scrutiny?

Improving Basic Cyber Security Policy

The first point is probably the most obvious, and many insurers already insist on basic cyber policies being in place. There are multiple guides on how to build these policies, but the basic steps remain the same. What data needs to be protected at all costs (customer information, valuable IP)? Who can access this and other sensitive data? How are confidential communications and data movements protected? It is always good to think beyond the mandatory as well: whilst building a cyber policy to the lowest common denominator is the most cost-efficient in the short term, it might not be sufficient for your business. Furthermore, the policy needs regular review; the cyber landscape is vastly different today than it was even a year ago, so how those risks are approached needs to change too.

Enforcing the Policy

Creating a document to manage cyber risk is all well and good, but it is all for nothing if that policy is not upheld. The biggest problem most businesses have is knowing when the policy has been breached: what is to stop someone with access to sensitive data sending it unencrypted across an unprotected or uncontrolled part of the network? Often, network users will find the easiest method of doing their jobs rather than the most secure, and this results in unforeseen breaches of cyber policy. The best course of action here is to make sure system administrators have visibility of what occurs on the network and are properly incentivised to investigate anything they find suspicious. Regular testing of the network can also be invaluable in understanding where vulnerabilities lie, and best of all this can be done with internal resources rather than forking out for expensive pen testers.

Training the Users

Often seen as the most vulnerable part of a network, users need to be trained in network security basics. Helping users understand not just what to do but also why they need to do it can vastly improve how secure the network is as a whole. For example, telling users why USB sticks cannot be used will improve adherence to a no-USB policy. Likewise, training users on why Dropbox should be avoided, instead of just applying a blanket block on Dropbox IPs, will likely stop the inevitable workarounds users will try to find. Basic cyber awareness training can also be cheap and effective: making sure users are aware of phishing emails can radically reduce exposure to ransomware, and will protect them in their personal lives too.

Understanding the Risk

Without understanding how a compromise might occur, you cannot properly protect against it. Things often missed when building this picture include uncontrolled parts of the network: should we be responsible if AWS or Office cloud services are breached? What steps can be taken to ensure that data stored outside the business remains secure? Understanding how the network is accessed externally is also useful for striking a good balance between the usability of network assets from outside and the protection of those same assets from external actors.

Will this Actually Save Money?

Yes. Even going through the above steps on an occasional basis will put a business streets ahead of the average enterprise network. Given that the insurance market is largely about keeping premiums cheap for those above average in the bell curve, massive savings can be made as more and more focus is put on how data is protected rather than what data is held.

A suite of behavioural classifiers has been developed for the Perception sensors to detect suspicious activity based on the information gathered by the Network Drive Activity Cache. These classifiers monitor behaviours such as file access, modification, upload and download, and report on potential policy breaches and/or unusual activity.

The sensors can now attribute user network activity to specific Windows file-sharing operations, allowing enhanced detection of ransomware while its payload is executing.

Additionally, policy-based classifiers can help ensure that your company processes are being followed; for example, search patterns can be set up to look for certain filenames, users or file extensions of interest seen within your network.
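To give a feel for the two classifier styles described above, here is a minimal sketch using invented event data, an assumed bulk-read threshold and an assumed policy pattern (none of this reflects Perception's actual implementation):

```python
import re
from collections import Counter

# Hypothetical file-access events from a network drive activity cache:
# (client host, action, file path on the Windows share)
EVENTS = [
    ("ws-12", "read", r"\\files\finance\q1-report.xlsx"),
    ("ws-12", "read", r"\\files\finance\q2-report.xlsx"),
    ("ws-12", "read", r"\\files\finance\payroll.csv"),
    ("ws-31", "read", r"\\files\eng\readme.txt"),
]

BULK_READ_THRESHOLD = 3                                # assumed tuning value
WATCH_PATTERN = re.compile(r"payroll", re.IGNORECASE)  # assumed policy rule

def bulk_readers(events, threshold=BULK_READ_THRESHOLD):
    """Behavioural classifier: hosts reading an unusually large number of files."""
    reads = Counter(host for host, action, _ in events if action == "read")
    return [host for host, n in reads.items() if n >= threshold]

def policy_hits(events, pattern=WATCH_PATTERN):
    """Policy classifier: accesses to filenames matching a pattern of interest."""
    return [(host, path) for host, _, path in events if pattern.search(path)]
```

The behavioural classifier needs no prior configuration beyond a threshold, while the policy classifier encodes a rule specific to your organisation; a real deployment runs many of both in parallel over the cached activity.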

So, as we said last week, we’ve implemented a Network Drive Activity Cache, and naturally, because we have a behavioural engine, we can now identify behaviours based on the information in that cache. We’ve already put together a number of behavioural classifiers based on real-world threats we’ve seen in the wild, but expect more of these classifiers to be implemented over time as we discover more vulnerabilities and scenarios we want to alert on.

One of the things Perception customers love most isn’t just its ability to pick up on malicious activity, but its ability to discover network vulnerabilities before they are exploited by a malicious actor. These classifiers can likewise be used to discover poor network security practice, such as users storing confidential information in unencrypted files; it’s little things like this that make Perception so useful.

This update is CCS and sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time. Self-monitored customers can update their own sensors and CCSs using the software upgrade process. Please be aware, this feature requires the Network Drive Activity Cache to be active. If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com

A new mechanism has been developed on Perception sensors to allow file-sharing activity between client machines and Windows network drives to be stored.

Enhanced visibility of network drive access provides the Perception classifiers with a huge amount of insight into a client machine’s behaviour. This in turn allows classifiers to detect potential threat behaviours such as accessing and downloading large parts of a network share or repeated download/upload activities that can often be indicative of malicious behaviour.

This feature also allows additional associated metadata to be included in the events generated by the system, such as the names and locations of the files accessed, which can be vital in cases where data exfiltration has taken place.

The Network Drive Activity Cache gives Perception an extra level of information on top of all of the existing meta-data it has. When files are transferred from or to Windows-based machines on a network, information about that transfer moves across the network. Perception now includes this information in any behaviours that identify file movement across a network. As a result, any behaviours that saw data movement can now also tell which files were accessed, and whether they were read or written.
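The enrichment described above can be pictured as merging a data-movement behaviour with the cache's file records for the same host. A hedged sketch, with invented field names and data:

```python
# A hypothetical data-movement behaviour, before enrichment.
behaviour = {
    "type": "large_outbound_transfer",
    "source": "10.0.5.17",
    "bytes": 250_000_000,
}

# Metadata the Network Drive Activity Cache recorded for the same host
# and time window (paths and operations are illustrative only).
cache_records = [
    {"host": "10.0.5.17", "file": r"\\files\hr\salaries.xlsx", "op": "read"},
    {"host": "10.0.5.17", "file": r"\\files\hr\contracts.docx", "op": "read"},
]

def enrich(behaviour, cache_records):
    """Attach file names and read/write operations to a data-movement behaviour."""
    matches = [r for r in cache_records if r["host"] == behaviour["source"]]
    return {**behaviour, "files": [(r["file"], r["op"]) for r in matches]}
```

An analyst looking at the enriched behaviour no longer has to trawl capture files to learn which data was touched; the file list arrives attached to the event itself.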

Our analysts are already seeing great benefit from this feature, as it immediately identifies which files have been accessed in data movement events, so investigating suspicious events is far faster. Rather than having to trawl through capture files looking for which data has been accessed, the file information is right there, front and centre.

The information provided by this feature enables a number of additional capabilities, the first set of which we’ll tell you about next week. The system can now also build intelligence around who accesses which files, when, and how unusual that is for that person. How we utilise the Network Drive Activity Cache will become more complex and beneficial as the system continues to improve, but it is already showing great results.

This update is CCS and sensor based, and will be pushed to all managed customers at the pre-agreed upgrade time. Self-monitored customers can update their own sensors and CCSs using the software upgrade process. Please be aware, this feature may change the performance requirement of the sensor, and can therefore be turned on or off as required. If you have any further questions about this upgrade please contact us at info@perceptioncybersecurity.com

The BBC consumer advice show “Watchdog” found hundreds of examples of customers being billed for food they didn’t order via the restaurant delivery app Deliveroo, forcing the foodie-favourite business to deny that it has been targeted by hackers. The company claimed that the fraudulent orders were made using credentials stolen in other attacks, and only worked against customers who used the same email/password combination for their Deliveroo account.

The customers contacted by the programme, which aired on 23rd November (you can watch it on iPlayer here until 23rd December if you are in the UK), all had their money refunded, which is good news, but we don’t know how much has had to be paid out in refunds overall. Deliveroo have since denied that any payment information was taken; the transactions were made using a one-click-style payment process that doesn’t require customers to re-enter their payment information for every order.

The advice remains that every online account should be protected by a unique password. Although this can rapidly become unmanageable, several password managers are available to stop you forgetting the unique password for that one website you only use once a year and would never otherwise remember. Apple users can use iCloud Keychain, although cross-application support is often lacking, and several Perception staff members use and can vouch for 1Password.

The use of stolen credentials raises an interesting issue for online businesses. Deliveroo obviously benefits from a massively streamlined ordering process, but is this to the detriment of security? Deliveroo have stated they will ask for verification when orders are made to new addresses, which should help stop the fraud entirely (although it still leaves the door open to sending as much food as possible to a hacked customer’s genuine address, in the weirdest hacking prank ever). If Deliveroo is able to prove where the passwords were stolen from, should they be able to make a claim against that organisation, since it was technically at fault? Should every breached company be forced to immediately contact all customers and let them know a given password is no longer usable on any other site?

The European Banking Authority plans to regulate two-factor authentication on all orders over €10 in the near future, but already many businesses that favour one-click ordering are up in arms, stating that more business will be lost than is saved in fraud refunds. Perhaps the responsibility for security lies solely with consumers themselves, with those who reuse passwords having only themselves to blame; we can hardly expect businesses to check all new accounts against haveibeenpwned.com and refuse service to those that have been hacked in the past, can we?

Security experts have disclosed three vulnerabilities in Samsung Knox, a piece of software deployed on phones to separate personal and professional data for security purposes, according to Wired.

The Israeli security firm Viral Security Group exposed the flaws on a Samsung Galaxy S6 and a Galaxy Note 5, where they allowed full control of each device. Given that the purpose of the software is to maintain the security of a business-issued handset whilst allowing the flexibility of a personal device, the businesses that deploy it may be assuming these devices are safe even as they move between internal and external (protected and unprotected) network connections.

It’s important to note that these vulnerabilities have since been patched in a security update; before the patch, however, the researchers at Viral Security Group were able to replace legitimate applications with rogue versions, with access to all available permissions, without the user noticing. Many businesses rely on the Knox software to make sure any connection to a business network is made from the “safe zone” of the phone, with the personal segment used outside of that protective environment. If the separation between these two parts of the device’s software is breached, the protections are essentially useless and the device once again becomes a BYOD-type threat.

The takeaway from all this is that you can’t assume your security measures are foolproof; even once protections are in place, a significant responsibility still lies in understanding, controlling, and analysing network traffic.

The full white paper describing the flaws is well worth a read if you have time, but first make sure any devices on your network have fully up-to-date software.