Analysis of a Data Breach

Who’s looking at your business-critical data? Say, your customer accounts, or the patients, members, and other individuals who have entrusted you with Social Security or credit card numbers? You know, right? Only authorized employees with the access required to perform their assigned job tasks, right?

Really? You are sure you have no intruders exfiltrating personally identifiable information (PII) or protected health information (PHI)? How do you know for sure?

Anthem, Target, UC Berkeley, and the Internal Revenue Service, to name a few, thought they were adequately protected by their combination of software, network, security, and process protections. In reality, however, attackers (internal and external) are constantly getting better at exploiting potential security holes.

Let’s examine a data breach, or Advanced Persistent Threat (APT), in detail.

A timeline of the breach will help us understand how a Fortune 10 company, one of the largest retail companies in the world, allowed this breach to happen.

The exposure really began in September, when a third-party mechanical services company installed a Windows-based computer to monitor HVAC operating systems. Attackers managed to phish credentials that enabled access to the network.

In November, using the stolen credentials, they were able to access the client network and install malware on a target Point of Sale (POS) system. Within a few weeks, the attackers had tested and validated their software, successfully installing exfiltration software that enabled the extraction of millions of customer credit card numbers directly from store cash registers.

FireEye alerts were recorded in the Security Operations Center (SOC) within the Security Information and Event Management (SIEM) software. However, these were just a few of the thousands of alerts received each day. Analysts didn’t fully trust the alerts, as they had received numerous false positives in the past. There was also no additional activity information to provide context.

Context such as what traffic preceded and followed the event, and from where and to where. There was no network or business context that could have answered the question: do these alerts indicate the ability to reach critical assets?
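As a minimal sketch of the kind of context enrichment described above: gather the network flows surrounding an alert and flag whether any of them reach a critical asset. All host and asset names here are hypothetical, and a real SIEM would of course draw on far richer data.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Flow:
    ts: datetime
    src: str
    dst: str

@dataclass
class Alert:
    ts: datetime
    host: str

# Hypothetical inventory of business-critical assets.
CRITICAL_ASSETS = {"pos-db-01", "cardholder-vault"}

def contextualize(alert: Alert, flows: list[Flow], window_min: int = 30) -> dict:
    """Collect flows near the alert in time that involve the alerting host,
    and flag whether any of them reach a critical asset."""
    lo = alert.ts - timedelta(minutes=window_min)
    hi = alert.ts + timedelta(minutes=window_min)
    nearby = [f for f in flows
              if lo <= f.ts <= hi and alert.host in (f.src, f.dst)]
    return {
        "alert": alert,
        "related_flows": nearby,
        "reaches_critical_asset": any(f.dst in CRITICAL_ASSETS for f in nearby),
    }
```

An alert that arrives already annotated with "this host then connected to the cardholder database" is far harder to dismiss than a bare signature match.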

There was also no business process for triaging and analyzing alerts to determine whether they were truly false positives.

Between December 2nd and December 12th, more alerts were received from different areas of the network, but these were not correlated with other activity or placed in the context of the business or network.

Bottom line: there was not enough context or visibility to correlate events, so they were still ignored. Why?

The victim was using a set of “first-gen” security products that were not fully integrated with their SIEM platform.

The data that indicated they were under attack and being exploited was there; the incumbent security products simply did not detect the suspicious activity.

And due to the large quantity of false positives reported by other security tools, other warning signs were missed.

It’s all about adding CONTEXT and detecting anomalous behavior in a vast sea of data…in real time!

How the breach could have been avoided:

Simply attempting to keep intruders out of the network is not enough. Effective protection comes from correlating data to provide context, informing SOC staff of a threat with enough information to separate false positives from true security events.
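One simple way to picture that separation is a triage rule that escalates only when independent signals corroborate one another. The signal names and weights below are hypothetical illustrations, not anything a particular SIEM product defines:

```python
# Hypothetical corroborating signals a triage rule might weigh.
SIGNAL_WEIGHTS = {
    "known_malware_hash": 3,       # e.g., a FireEye-style detonation verdict
    "new_outbound_destination": 2, # traffic to a never-before-seen host
    "touches_critical_asset": 3,   # flows reaching POS or cardholder systems
    "off_hours_activity": 1,       # activity outside normal business hours
}

def triage_score(signals: set[str]) -> int:
    """Sum the weights of whichever signals accompany the alert."""
    return sum(SIGNAL_WEIGHTS.get(s, 0) for s in signals)

def classify(signals: set[str], threshold: int = 4) -> str:
    """Escalate when multiple signals corroborate; otherwise queue for review."""
    return "escalate" if triage_score(signals) >= threshold else "review"
```

A lone low-weight signal stays in the review queue, while a malware verdict corroborated by traffic toward a critical asset crosses the threshold immediately; that is the business process for triage that was missing here.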

We should operate with the assumption that we will be breached: our systems and processes should quickly alert us to the breach and enable us to quickly shut down the intruder before damage has been done.

Here, the SOC sees that a user account is accessing servers it does not normally access, based on prior data flows. At the same time, Guardium Data Activity Monitoring (DAM) detects unusual database activity. Meanwhile, attackers are unable to gain a foothold because BigFix patching, prioritized by QRadar, has eliminated operating system and database vulnerabilities.