Since the World Wide Web was first commercialised around 1993-1994, information, data and communications have moved from physical to digital for everyone. It rapidly became apparent that sensitive information inside organisations could be accessed over their internet connections.
IT security then became a concern for organisations great and small, and at the time putting network perimeter security and reporting in place was enough to keep breaches at bay.
Today networks are very different again; for most organisations there is no known perimeter, given the use of mobile devices, cloud apps and virtual machines alongside more traditional network infrastructure. IT security departments struggle to secure their networks: 80 percent of the time they do not know where they are vulnerable or exactly which devices are present and when, and in most cases there is little information on how breaches occur.
In this paper we will examine why these networks are unknown, how criminals gain access to sensitive information and data, how to combat and detect breaches, how to gather threat intelligence, and how to obtain the right real-time analysis of exactly what activity passes across organisations' networks.


Introduction

In the last decade organisations’ networks have changed beyond recognition. With cloud infrastructure, advances in mobile devices and virtual machines, understanding your network’s true topology at any point in time can be extremely challenging, and ensuring your entire network is secure is exceptionally difficult now that networks are elastic and hence continuously changing.

For example, with cloud and virtual machine users, some IT security teams may have very little visibility of their network. If separate teams are uploading information or utilising applications in the cloud, whether public or private, the security team may not be aware that the network edge has changed, even if only for 30 minutes.

With these continuous changes to organisations’ networks, new security threats and leak paths are opening up opportunities for malicious attackers to breach the organisation’s network, stealing data and valuable information and damaging the reputations of both public sector and enterprise organisations.

Organisations are now under continuous threat from advanced persistent threats (APTs), which can inflict serious harm on any business. These sophisticated, covert attacks enable theft of valuable data, trade secrets, intellectual property, state and military secrets, computer source code, and any other information that may have value, financial or otherwise.

One of the best ways to help organisations detect APT activity is to look for large, unexpected flows of data from internal origination points to other internal computers or to external computers. This could be server to server, server to client or network to network. To know this, organisations need solutions with real visibility of traffic across the network and the ability to distinguish suspicious behaviour from normal business activity in real time.
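The idea of flagging large, unexpected internal-to-external flows can be sketched in a few lines. The flow record format, baseline figures and multiplier threshold below are illustrative assumptions, not a production detector:

```python
# Hypothetical sketch: flag unusually large data flows against a per-host
# baseline. Record fields, baselines and the threshold factor are invented.
from collections import defaultdict

def flag_large_flows(flows, baseline_bytes, factor=10):
    """Return (src, dst, total) triples whose aggregate transfer exceeds
    `factor` times the source host's historical baseline volume."""
    totals = defaultdict(int)
    for f in flows:
        totals[(f["src"], f["dst"])] += f["bytes"]
    alerts = []
    for (src, dst), total in totals.items():
        # Unknown hosts have a baseline of 0, so any traffic is flagged.
        if total > factor * baseline_bytes.get(src, 0):
            alerts.append((src, dst, total))
    return alerts

flows = [
    {"src": "10.0.0.5", "dst": "10.0.0.9",    "bytes": 500},        # normal
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes": 9_000_000},  # spike
]
baseline = {"10.0.0.5": 10_000}   # typical daily volume for this host
print(flag_large_flows(flows, baseline))
```

A real deployment would feed this from NetFlow/IPFIX collectors and learn the baselines rather than hard-coding them.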

According to the 2015 Information Security Breaches Survey [1] conducted by PwC, 69% of large organisations were attacked by an unauthorised outsider in the last year. This percentage is up from 55% a year earlier, demonstrating the growing trend in cyber-attacks across UK enterprise and government organisations.

The HM Government Cyber Security Breaches Survey 2016 [1a] also found that 65% of all large UK businesses had experienced a breach in the last year, with 25% of those surveyed disclosing they had been breached monthly. The survey also found that the most common type of breach was malware/spyware/virus (68%), followed by impersonation of the organisation (32%).

The breach landscape, cost and timescales

Breaches occur in various forms, from malware, viruses and spyware, to insider theft of data and sensitive information, to ransomware, where your network is blocked or your data encrypted to restrict access.

The Ponemon Institute [2] released its annual Cost of Data Breach Study: Global Analysis, sponsored by IBM. According to the benchmark study of 350 companies spanning 11 countries, the average consolidated total cost of a data breach is $3.8 million, representing a 23 percent increase since 2013. The study also found that the average cost incurred for each lost or stolen record containing sensitive and confidential information increased six percent, from a consolidated average of $145 to $154.

Healthcare emerged as the industry with the highest cost per stolen record with the average cost for organisations reaching as high as $363. Additionally, retailers have seen their average cost per stolen record jump dramatically from $105 last year to $165 in this year’s study.

A recent survey by LTM Research [3] found that over 69% of companies had no knowledge that they had been breached. Most companies only became aware of a breach if their suppliers, customers or other associates noticed anomalies on their networks, or if the perpetrator posted information or shared stolen assets publicly. Almost half of respondents cited key impediments to attaining network visibility, so they did not know what they were missing.

Figure 1: Key impediments to network visibility.

According to The Ponemon Cybersecurity Survey 2015 [4] of 844 IT and IT security practitioners in the financial sector across the US and 14 countries within the EMEA region and 675 IT professionals in the same countries within the retail sector, both industries are struggling to cope with today's threat landscape.

Once a data breach has occurred, financial services companies take an average of 98 days to detect the intrusion on their networks, and retailers 197 days. Despite these long periods, known as "dwell" time, 58 percent of those surveyed who work in finance, and 71 percent of those in retail, said they are "not optimistic" about their organisation’s ability to improve these statistics in the next year.

The research also states that on average, 83 percent of financial companies suffer over 50 attacks per month, as do 44 percent of retail firms. Both verticals continue to remain a target due to the valuable data stored by these industries.

The HM Government Cyber Security Report 2016 [1a] states that whilst 51% of UK businesses have put in place some of the 10 recommended cyber security actions, 49% have still to do so. The report also states that the most expensive UK breach identified was £3M, while the average breach cost in the UK across all business sizes was £36,000. One main worry highlighted by the report, in terms of real analysis of the cost of breaches, is that only 5% of UK businesses continue to track the cost of ongoing or further breaches to their business. With this in mind, the true financial cost of breaches may be difficult to measure or calculate: it is not just the network downtime or the cost of unlocking ransomware; breaches impact share valuations, customer confidence, supplier confidence, and staff morale and productivity.

There is also the cost associated with training staff on new policies and processes to ensure they are educated in new cyber-security procedures; this would help eliminate some breaches but also impacts business productivity when staff are out being trained or temporary staff must be recruited to cover.

With breaches occurring daily across the globe, CISOs and thought leaders now state that it is no longer a matter of if but when organisations are breached. It is no longer [just] about deploying prevention technologies but about minimising risk and financial loss when a breach occurs.

With elastic networks, the whole network topology requires constant monitoring and re-indexing. This ensures that any VMs are known, and that any cloud services or apps utilised intermittently are known and monitored. All communications through the Internet should also be monitored for changes, anomalies and the identification of bad actors and leak paths, both to and from the Internet and other potentially unauthorised networks.
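At its simplest, continuous re-indexing of an elastic topology reduces to diffing successive inventory snapshots so short-lived VMs and cloud endpoints are noticed. A minimal sketch, with invented addresses:

```python
# Illustrative sketch (not any specific product's API): compare successive
# network inventory snapshots to surface transient or vanished endpoints.
def diff_snapshots(previous, current):
    """Return (appeared, disappeared) device sets between two scans."""
    appeared = current - previous
    disappeared = previous - current
    return appeared, disappeared

snap_t0 = {"10.0.0.1", "10.0.0.2", "10.0.0.3"}
snap_t1 = {"10.0.0.1", "10.0.0.3", "10.0.9.50"}   # a VM spun up, one host gone

appeared, disappeared = diff_snapshots(snap_t0, snap_t1)
print("new:", sorted(appeared))        # endpoints to index and profile
print("gone:", sorted(disappeared))    # endpoints to reconcile with change records
```

Run frequently enough, even this naive diff catches a cloud edge that existed for only 30 minutes; the engineering challenge is populating the snapshots authoritatively.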

CISOs and thought leaders’ strategies are now focusing on the following:

• Accept that a breach is extremely likely, and hence deploy solutions which locate the breach and block the leak path within the shortest time possible

• Limit the damage to both information and reputation

• Understand how the breach occurred

• Limit financial loss

• Train staff in the most cost-effective way and ensure IT security policies and procedures are adhered to across the entire organisation.

Unfortunately, the days of deploying perimeter security and running penetration testing once a month have long gone. Enterprises and government departments alike must be ready at all times to limit the effects of a cyber-attack and minimise the cost and damage from data loss of the breach.

The anatomy of a breach

According to the Verizon 2015 Data Breach Investigations Report [5], seventy-eight percent (78%) of breaches initially started with an incident deemed to be “low-difficulty”, i.e. one that was relatively simple to implement, identify and remediate.

The root cause of incidents leading to a breach usually fell into either (or both) of two categories:

a) Malicious action by an outsider (e.g. phishing or other social engineering)

b) Human error or negligence by an insider (i.e. an employee, contractor, etc.)

Much attention has certainly been given to email phishing and other social engineering incidents, wherein a malicious outsider entices an otherwise trustworthy but unsuspecting employee to click a link or open a file which surreptitiously installs malware that compromises their computer. From that point forward, until the malicious actor’s activity (or malware) is detected, they dwell on and move about the network to compromise other computer systems and escalate their levels of privileged access. All the while the malicious actor covers their tracks by adulterating, obfuscating or deleting logs while identifying the types and kinds of information that will be valuable to exfiltrate, destroy or hold for ransom. Unless the enterprise or organisation has solutions in place, the malware is free to move across the network without visibility to IT teams until such time as a vulnerability test is run or the organisation is made aware of the breach by the bad actors/criminals themselves.

This lack of network visibility, and of knowing the perimeter at any point in time, continues to create major breach opportunities across the globe for enterprise, government and SMBs alike.

Much less frequently discussed, but just as prevalent, is the human error that comes from within IT network operations teams themselves. IT teams not only manage large networks and set up access rights for staff; the vast amount of change occurring is fraught with opportunities for misconfigurations and mistakes that can go unresolved for days, weeks or months.

Most IT networking and security teams are highly competent and well trained, but dealing with networks of tens or hundreds of thousands, and increasingly millions, of endpoints (think IoT), where there are often 1,000 or more network infrastructure changes every month, opens the door wide to mistakes. Along with changes in personnel, and levels of information security education that usually vary across an organisation, these teams are generally under huge pressure.

One common example is VPN connectivity from an enterprise network segment into public IaaS virtual private cloud instances which are unmanaged silos (e.g. shadow IT), in which virtual networking functions (such as packet forwarders or routers) are changing the shape of the enterprise edge.

In all too many cases, threat intelligence is used only offline, manually (via spreadsheets for example) to identify possible bad actor activity impacting the enterprise network. Clearly, this is not scalable and analysis doesn’t occur frequently enough to be meaningful.

Unfortunately, it takes just minutes for a malicious actor to take advantage of any of these errors and gain an undetected foothold.

Current Enterprise Network Visibility

As networks grow in size, limited network visibility is a fundamental root cause of incidents leading to breach. Aside from lacking the real-time capability (versus monthly or weekly scanning) that is becoming essential for capturing mobile, cloud, virtual and software-defined infrastructure, the offerings below provide no network segmentation analysis and no breach analysis linked to threat intelligence data.

Common technologies and their additional limitations include:

a) Nmap. Open source; slow performance, especially on larger networks; only experimental IPv6 support; can cause network congestion (no rate throttling); and has been known to knock over scanned endpoints with malformed packets. It is often used outside normal business hours due to these limitations, which defeats the purpose of enabling network visibility.

b) IPAM. Network visibility isn’t its primary use case; IP address management, allocation and tracking is. Some of these offerings do have a limited, often optional, discovery component, but these have very limited capability to authoritatively and recursively index a network.

c) Vulnerability Scanning. Like IPAM, network visibility isn’t its primary use case; identifying critical vulnerabilities on endpoints via credentialed access is. It frequently misses devices that are not known to the tool or are unreachable, and the result is very limited capability to authoritatively and recursively index a network. As with Nmap, because credentialed scanning is so heavyweight, these scans are often conducted outside heavy usage hours, which again defeats the purpose of gaining full network visibility.

d) Network Simulation. These offerings gain credentialed access to the command line of known routers, packet forwarders, etc., to extract configuration information and then mathematically simulate what the network topology looks like. This is great for “what-if” modelling exercises. The problem is that the modelling is only as good as the known inventory of L3 forwarding devices, which is often incomplete or just plain wrong. This method certainly can’t study the impact of transitory virtualised networking devices or rogue network infrastructure, to which it doesn’t have credentialed access.

e) Network Management. As in the network simulation case, these tools analyse only what they have access to, in this case via SNMP. Rogue, unmanaged or simply undocumented network elements installed on a network won’t be visible to any network management tools.

To the extent that organisations are depending on some combination of the above approaches to provide network visibility, incidents leading to breach of their enterprise networks will continue to rise.

In an attempt to better operationalise threat intelligence, those feeds can be ingested for action by commercial security prevention infrastructure (i.e. firewall/proxy, IPS, DLP or SIEM). While this level of automation is better than the quite common manual or forensic-only application of threat intelligence, important limitations remain. How does IT security validate that enforcement is working across the whole enterprise and hasn’t been inadvertently turned off or misconfigured by the teams responsible for operating the various infrastructure equipment? Within minutes, just one newly installed, modified or upgraded piece of equipment lacking access to, or improperly configured to use, the threat feed will expose the whole organisation.
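Validating that every enforcement point is actually consuming the current threat feed can itself be automated. A hedged sketch, assuming a hypothetical device inventory that records each appliance's feed status:

```python
# Hypothetical audit: flag enforcement points (firewalls, proxies, IPS)
# whose threat feed is missing, disabled or stale. The inventory schema,
# version numbers and lag tolerance are all invented for illustration.
CURRENT_FEED_VERSION = 42
MAX_LAG = 1   # feed versions an appliance may lag before being flagged

def audit_feed_enforcement(devices):
    """Return names of devices not reliably enforcing the threat feed."""
    return [
        d["name"] for d in devices
        if not d.get("feed_enabled")
        or CURRENT_FEED_VERSION - d.get("feed_version", 0) > MAX_LAG
    ]

devices = [
    {"name": "fw-edge-1", "feed_enabled": True, "feed_version": 42},  # healthy
    {"name": "proxy-dc2", "feed_enabled": True, "feed_version": 37},  # stale
    {"name": "ips-new",   "feed_enabled": False},                     # never set up
]
print(audit_feed_enforcement(devices))
```

Run continuously, a check like this answers the question posed above: it catches the newly installed or misconfigured appliance before it silently exposes the organisation.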

Best practices in achieving True Network Situational Awareness

Given the breach statistics referred to above, it is an enormous challenge for IT security and IT teams to keep track of network changes, strange NetFlows and movement of data. With the network continuously changing, the number of man-hours required for real-time or ongoing analysis is generally not affordable or resourced by many organisations. Traditional perimeter security methodologies are now challenged, and hence pose risk, with cloud, BYOD and mobility programmes, as they don’t examine network traffic and behaviour in enough detail. In addition, the evolving architectures of software-defined networks (SDN) are rapidly increasing network complexity for many organisations.

Mergers of organisations’ networks, along with outsourcing programmes by some businesses, are also accelerating the speed of change within many enterprises. With technology advancements, network change is happening faster than ever before.

How can real-time situational analysis of your network help with breach detection and threat intelligence?

Key areas already discussed above demonstrate the lack of insight into most organisations’ entire networks. IT teams need to examine the following areas further:

• Identify and monitor 100% of the network’s connections and devices at any time

• Is there any anomalous cybersecurity behaviour on the virtual enterprise, such as large NetFlows?

• Understand all aspects of the network environment (physical, mobile, virtualised, and cloud, whether private, public or hybrid) along with usual NetFlow behaviour

• Expose potential problems, such as cyber threats, unplanned Internet connections, unmanaged devices and unsecured ports, in real time, so these may be shut down and monitored for instant visibility and quick response

• Know which virtual devices are present in the virtual/cloud infrastructure

• What is the status of shadow servers, and are there any concerns?

• Are users putting up and/or configuring virtual machines instead of requesting IT support, or could they? Could IT identify rapidly, in near real time, if this was occurring, or could such machines sit on the enterprise network unnoticed?

• How could you identify split tunnelling, which would allow leakage between virtual environments, perhaps even between two virtual environments that are completely unrelated?

Methodologies

IT Security Teams can uncover many of the above threats with the following processes:

Occasional scanning or mapping of the network won’t show old connections to cloud or VMs, therefore an “always on” technology will enable the most comprehensive network visibility. Monitoring the network in a continuous recursive cycle of targeting, indexing, tracing, monitoring, profiling and displaying the network topology will give the analysis required to show the true entity at any time.

Passive indexing (listening) for newly connected network infrastructure, devices and previously unmanaged assets on your network will provide insight into possible breach paths. Given that networks are ever changing, this type of technology would need to be agent-less and not impact the network or its performance. If possible, the passive indexing should favour ARP traffic and the routing plane, using route analytics, routing protocols and traffic monitoring (DHCP, etc.).
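The passive-indexing idea can be illustrated by comparing ARP-style observations against an asset inventory. The table format, addresses and inventory below are assumptions; a real implementation would listen on the wire rather than parse text:

```python
# Minimal passive-indexing sketch: parse ARP-style observations and flag
# MAC/IP pairs that were never inventoried. Line format is an assumption
# (modelled loosely on `ip neigh` output).
def parse_arp(table_text):
    """Yield (ip, mac) pairs, taking the first and last field of each line."""
    for line in table_text.strip().splitlines():
        fields = line.split()
        yield fields[0], fields[-1]

def unmanaged_hosts(table_text, known_macs):
    """Return observed (ip, mac) pairs absent from the asset inventory."""
    return [(ip, mac) for ip, mac in parse_arp(table_text) if mac not in known_macs]

arp_table = """\
10.0.0.1 dev eth0 lladdr aa:bb:cc:00:00:01
10.0.0.7 dev eth0 lladdr de:ad:be:ef:00:99
"""
known = {"aa:bb:cc:00:00:01"}
print(unmanaged_hosts(arp_table, known))   # 10.0.0.7 was never inventoried
```

Because this only listens, it adds no load to the network, which is exactly the agent-less, non-impacting property the text calls for.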

Active indexing/crawling of your network (scanning) sends packets to a certain target on the network when a network infrastructure change occurs, enabling examination of that connection or device. The probe learns from the target’s response, and hence active discovery adds insight to the data gathered via passive indexing, yielding the broadest and most comprehensive analysis. This methodology is a benign exploration that’s especially useful in identifying a network’s perimeter.
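The probe-and-learn principle behind active indexing can be shown with a plain TCP connection attempt. This sketch probes only a throwaway local listener; real active discovery would add rate throttling and broader protocol coverage:

```python
# Benign active-probe sketch: learn from a target's response to a TCP
# handshake attempt. Hosts and ports here are local stand-ins, not real
# network targets.
import socket

def probe(host, port, timeout=0.5):
    """Return 'open' if a TCP handshake completes, else 'closed/filtered'."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return "open"
    except OSError:
        return "closed/filtered"

# Demonstrate against a throwaway loopback listener.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))      # the OS picks a free port
listener.listen(1)
port = listener.getsockname()[1]

print(probe("127.0.0.1", port))      # the listener completes the handshake
listener.close()
print(probe("127.0.0.1", port))      # nothing is listening any more
```

Each response (or lack of one) is a data point about the target, which is the learning step the paragraph above describes.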

Device profiling via targeted system inquiries (or system access) is the close inspection of a known device or entity. Utilising the intelligence gained from active and passive indexing, targeted system inquiries gather rich data on the network. This in-depth device profiling uses SNMP, and also includes port discovery and DNS lookups.
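The port-discovery and DNS-lookup parts of device profiling can be sketched with the standard `socket` library; SNMP polling, which the text also mentions, is omitted here. The loopback target and throwaway listener are stand-ins for a real device:

```python
# Illustrative device-profiling sketch: combine a reverse DNS lookup with
# port discovery to build a simple profile of one host.
import socket

def profile_device(host, ports):
    """Return a small profile dict for `host`, probing the given ports."""
    profile = {"host": host, "open_ports": [], "reverse_dns": None}
    try:
        profile["reverse_dns"] = socket.gethostbyaddr(host)[0]
    except OSError:
        pass                                  # no PTR record for this address
    for port in ports:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                profile["open_ports"].append(port)
        except OSError:
            pass                              # closed or filtered port
    return profile

# Profile the loopback interface against a throwaway listener.
srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(1)
open_port = srv.getsockname()[1]
result = profile_device("127.0.0.1", [open_port, 1])   # port 1 is almost surely closed
print(result)
srv.close()
```

Feeding such profiles back into the index is what turns raw discovery into the "rich data" the methodology describes.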

With the combination of the three technologies above, organisations can detect newly connected devices and previously unmanaged assets. Changes to the network are automatically detected, providing IT security teams with the necessary real-time alerts of possible security policy violations and network vulnerabilities. These processes also document network changes and threat intelligence for regulatory compliance, alleviating the huge amount of analysis and man-hours that would generally be required to pull together such in-depth reporting.

Should an event occur or anomalous behaviour be identified, having the data and analysis in real time will limit the impact of the breach. Identification of events or configurations linked to adversarial or anomalous activities across the enterprise is key to real-time analysis.

Big data and advanced analytics: with the enormous growth in stored data, both structured and unstructured, understanding how it is accessed, moved or managed is fast becoming a complex issue for many larger and enterprise-sized companies, and understanding what data is sensitive, important or stale is a formidable task. Security teams need the ability to collect, store and analyse unstructured data in real time, plus the ability to drill down and analyse to give the data meaning. Any new external data feeds or streams, such as NetFlow data, require analysis alongside threat intelligence feeds, to correlate with real-time indexing information, which in turn will provide much deeper breach detection analytics and understanding.
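At its simplest, correlating flow records against a threat intelligence feed is a set lookup. The indicator addresses and flow fields below are invented for illustration:

```python
# Sketch: correlate flow records against a threat-intelligence indicator
# set. Feed contents and record fields are assumptions, not a real feed.
THREAT_INTEL_IPS = {"203.0.113.7", "198.51.100.23"}   # e.g. known C2 servers

def correlate(flows, indicators):
    """Return flows whose source or destination matches an indicator."""
    return [f for f in flows if f["src"] in indicators or f["dst"] in indicators]

flows = [
    {"src": "10.0.0.5", "dst": "203.0.113.7", "bytes": 120_000},  # hits intel
    {"src": "10.0.0.8", "dst": "10.0.0.9",    "bytes": 800},      # internal, clean
]
hits = correlate(flows, THREAT_INTEL_IPS)
print(hits)   # only the first flow touches a known-bad address
```

The value comes from doing this continuously against the live index, rather than offline in spreadsheets as criticised earlier in the paper.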

Network discovery and mapping discovers all networks and connections. This methodology also finds previously “unknown” portions of the network, defines the perimeter, and examines partner connections and cloud connectivity, providing an integrated OSI Layer 2/Layer 3 view of network infrastructure that would previously have been unknown. Key features of network discovery and mapping include:

• Profiling Devices – Identify the types of devices connected to the enterprise, highlighting those devices that fall outside of your security/compliance policy or could be “rogue” in nature.

• Discover Network Leak Paths – Review connectivity between networks (business units, partners, secure zones, etc.), or indeed from the corporate enterprise network to the Internet. Through this intelligence, IT security professionals have the right information to determine whether the connectivity is authorised or rogue, and whether proper security controls are in place where connectivity exists.

• Steady State/Network Behaviour Analytics – Know and understand baseline activity and normal network behaviour over a short period of time. Having this baseline activity gives you the network’s steady state: the range of behaviour indicating health and normalcy on the network. Once certain parameters have been defined as normal, continuously monitoring and flagging any departure from them as anomalous provides further insight, again highlighting possible vulnerabilities or malicious behaviour.
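The steady-state idea above can be sketched as a simple statistical baseline check. The traffic figures and the three-sigma threshold are illustrative assumptions, not recommendations:

```python
# Steady-state sketch: learn a baseline from recent observations and flag
# departures from it. Figures and threshold are invented for illustration.
import statistics

def is_anomalous(history, value, z_threshold=3.0):
    """Flag `value` if it sits more than `z_threshold` standard deviations
    from the mean of the observed baseline."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) > z_threshold * stdev

# Baseline: bytes/minute seen on a link during normal business activity.
baseline = [980, 1010, 995, 1020, 990, 1005, 1000, 1015]
print(is_anomalous(baseline, 1012))      # within the steady state
print(is_anomalous(baseline, 250_000))   # an exfiltration-sized spike
```

Production systems would use rolling windows and per-segment baselines, but the principle of "define normal, flag departures" is the same.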

Figure 2: Real-time network visibility

To sum up:

Real-time network architecture analytics provides organisations with a true view, in real time, of what the network really looks like (what devices and connections are attached to the network, and how). To understand the entire network at any time, organisations need:

• An Authoritative Network Census

• Port Mapping/Usage

• Real-time Network Infrastructure Updates (Broadcast, OSPF, BGP, etc.)

• Address Space Validation

• Network Edge Definition

• Unreachable Network Segment Identification

• Device Indexing/Profiling, both passive and active

• Enterprise-wide Certificate Identification

• Network Topology Mapping

Real-time Network Segmentation Analytics

Organisations now require, more than ever, advanced intelligence to verify their network segmentation and fully understand their network architecture relative to the organisation’s policy. Tools which enable leak paths and unauthorised Internet connectivity to be detected quickly are essential to enforcing that policy.
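Leak-path detection can be framed as a reachability search over known connectivity: if any path exists from a secure zone to the Internet, segmentation policy is violated. A hypothetical sketch with an invented zone topology:

```python
# Hypothetical segmentation check: model known connectivity as a directed
# graph and search for a path from a secure zone to a forbidden endpoint.
# The zone names and edges are invented for illustration.
from collections import deque

def find_leak_path(edges, start, forbidden):
    """BFS from `start`; return the first path reaching `forbidden`, else None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        for nxt in edges.get(path[-1], []):
            if nxt == forbidden:
                return path + [nxt]
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

edges = {
    "pci-zone": ["corp-lan"],    # e.g. a multi-homed host bridging zones
    "corp-lan": ["dmz"],
    "dmz":      ["internet"],
}
print(find_leak_path(edges, "pci-zone", "internet"))
```

Returning the path, not just a yes/no answer, tells the security team exactly which hop (here the zone-bridging host) to shut down.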

Deploying solutions where the data can be exported to other network security solutions via an open API greatly improves analytical capabilities, allowing organisations to fully understand their IT environment in more context and have greater insight into network traffic, connections and devices.

In financial, retail, entertainment, manufacturing and healthcare organisations alike, IT security and network teams are learning this the hard way.

In recent memory, companies in each of these industries have publicly reported the theft of millions of their customers’ financial records or the exfiltration of sensitive email and internal communications, research and development work-product, intellectual property and trade secrets.

Network Architecture

A top five US bank with more than $1.5 trillion in assets uses Lumeta to examine its network architecture weekly. In this engagement, the scope of the enterprise network (based on data from existing network management, IP Address Management and Host Vulnerability Assessment (VA) tools) was initially evaluated to be 600,000 IP addresses. Lumeta’s recursive network indexing technology identified more than 800,000 actual IP addresses in use, a 25% visibility gap. The newly identified sub-networks and devices were not being evaluated by VA, making this unknown 25% more susceptible to malware which could be used to exfiltrate financial information.

Network Segmentation

Another top five US bank with more than $2 trillion in assets has hundreds of internal enclaves that need to remain segmented from each other, and also from the public Internet, as part of a defence-in-depth network security policy. Lumeta’s recursive network indexing technology helped identify more than 25 multi-homed hosts that were packet-forwarding between Ethernet interfaces on those servers, violating the network segmentation policy required for PCI DSS compliance.

Cybersecurity Analytics

One of the world’s largest and most recognisable entertainment brands, with more than 100,000 employees and $40B in annual revenue, has a policy prohibiting use of Secure Shell (SSH), TCP port 22, on certain critical subnetworks. SSH is one method of gaining privileged access to servers, frequently sought by bad actors to achieve lateral movement and escalate privileges after they’ve gained a malware beachhead on a victim network. This customer used Lumeta IPsonar to initially identify and reconcile/remediate the universe of servers exhibiting SSH access. They subsequently migrated to Lumeta ESI Cyber Threat Probe for real-time alerting upon any SSH port usage within the designated zones.

Conclusions

More than 20% of an enterprise or public sector network is generally unknown.

Network behaviour analytics, continuously mapping and indexing the entire network for threats, is the only methodology which will provide IT security and IT professionals with true network visibility in real time. Integration with threat intelligence solutions enhances this and provides deeper insight from the data gathered by both passive and active indexing.

The consensus of opinion is that security is no longer just about protecting your organisation from a possible breach by deploying perimeter security; it is about how fast you can identify and limit the damage from a breach. Real-time network visibility underpins the entire enterprise’s security, as no solution can protect devices or connections on a network if those devices or connections aren’t known to exist.

Lumeta’s network situational awareness platform is the authoritative source for real-time network behaviour analytics and cybersecurity breach detection. Lumeta recursively indexes a network to identify and map every IP-connected device, as well as uncover leak paths (network segmentation violations), all then correlated against threat intelligence, NetFlow, etc. Headquartered in Somerset, New Jersey, Lumeta has operations and clients throughout the world. More information is available at www.lumeta.com.

About the Author

Reggie Best, Chief Marketing Officer. Reggie has a technology background with BE and MS degrees in EE and more than 25 years of experience in communications, networking and IT security businesses since starting his career at Bell Labs, the R&D arm of AT&T. Reggie has been involved in the founding of three start-up companies which successfully progressed to M&A, including Teleos Communications (sold to Madge Networks), AccessWorks (sold to 3Com) and Netilla Networks (sold to AEP Networks). Most recently, Reggie was President & COO at ProtonMedia, where he oversaw the operations and product teams.


E&T Cyber Security Hub brings together engineers and cyber security specialists to share practical know-how. With content created ‘by engineers, for engineers,’ it provides peer-reviewed technical information, real-world insights, lessons learnt and case studies, as well as tools for networking and knowledge-sharing, profiles of experts and the opportunity for companies to showcase their expertise.