Six months before the incident, Target invested $1.6 million in FireEye technology.

Target had a team of security specialists in Bangalore monitoring the environment.

On Saturday, November 30, FireEye identified and alerted on the exfiltration malware. By all accounts this wasn't sophisticated malware; the article states that even Symantec Endpoint Protection detected it.

"The breach could have been stopped there without human intervention." I'm not sure I agree with this statement. The journalist doesn't mention which FireEye technology was deployed, but I suspect it was the NX platform, which focuses on web-based malware. The NX can be deployed in three ways: out-of-band via a TAP/SPAN, in-line for monitoring, or in-line for active blocking. This was a new deployment, so the chances that the solution was deployed in-line for active blocking, or was leveraging TCP resets for out-of-band blocking, are small. The article even states that "It’s possible that FireEye was still viewed with some skepticism by its minders." If they were skeptical of the solution, I doubt it was doing any automated response. In my experience, customers don't typically deploy these types of solutions in-line. FireEye tells me that a significant number of customers deploy the NX platform in-line, but I personally haven't observed that over the past four years of selling (as a sales engineer) and covering FireEye.
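For readers unfamiliar with out-of-band blocking: a sensor on a TAP/SPAN can't sit in the traffic path, so the only way it can "block" is to forge a TCP RST that matches a connection it observed and inject it to tear the session down. Here's a minimal sketch of that mechanism; the addresses, ports, and sequence number are hypothetical, and a real sensor would also need a raw socket (and root) to actually inject the segment.

```python
# Sketch of out-of-band TCP-reset blocking: forge an RST segment that
# matches an observed flow. Illustrative only; real injection requires
# a raw socket and precise sequence tracking.
import socket
import struct

def tcp_checksum(src_ip, dst_ip, tcp_segment):
    """Standard TCP checksum over an IPv4 pseudo-header plus the segment."""
    pseudo = (socket.inet_aton(src_ip) + socket.inet_aton(dst_ip) +
              struct.pack("!BBH", 0, socket.IPPROTO_TCP, len(tcp_segment)))
    data = pseudo + tcp_segment
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return (~total) & 0xFFFF

def forge_rst(src_ip, dst_ip, src_port, dst_port, seq):
    """Build a 20-byte TCP RST spoofing one side of an observed flow."""
    RST_FLAG = 0x04
    header = struct.pack("!HHIIBBHHH",
                         src_port, dst_port,
                         seq,        # must match what the victim expects
                         0,          # ack number (unused: only RST is set)
                         5 << 4,     # data offset: 5 words, no options
                         RST_FLAG,
                         0,          # window
                         0,          # checksum placeholder
                         0)          # urgent pointer
    csum = tcp_checksum(src_ip, dst_ip, header)
    return header[:16] + struct.pack("!H", csum) + header[18:]
```

The fragility of this approach is the point: the reset races the real traffic, so it is best-effort blocking at most, which is why in-line deployment is the stronger (and riskier) option.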

Analysts in Bangalore got the alert and then flagged the security team in Minneapolis.

The Minneapolis Security Operations Center "did nothing."

The article states, "The security system sent out more alerts, each the most urgent on FireEye’s graded scale." More alerts don't always result in action. How many other alerts were SOC analysts getting? Depending on the FireEye deployment, the solution's alerts could've been overwhelming by themselves. Then if you consider the aggregated alerting from Target's entire infrastructure, these alerts could have been lost in the noise.
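The noise problem can be made concrete with a small sketch (the alert fields and volumes below are hypothetical, not Target's): even a simple deduplicate-and-rank pass shows how one critical alert has to compete with thousands of routine ones for an analyst's attention.

```python
# Minimal alert-triage sketch: collapse duplicate alerts, then rank the
# distinct ones by severity (higher first) and by how often each fired.
from collections import Counter

def triage(alerts):
    """Return distinct alerts sorted by severity, then by hit count."""
    counts = Counter((a["src"], a["sig"], a["severity"]) for a in alerts)
    return sorted(
        ({"src": s, "sig": g, "severity": sev, "hits": n}
         for (s, g, sev), n in counts.items()),
        key=lambda a: (-a["severity"], -a["hits"]),
    )

# 10,000 routine events from one noisy scanner, plus a single critical alert:
feed = [{"src": "10.1.1.9", "sig": "port-scan", "severity": 3}] * 10_000
feed.append({"src": "10.2.2.7", "sig": "malware.binary", "severity": 10})
queue = triage(feed)
```

Deduplication gets the queue from 10,001 events down to two, but only if analysts trust the severity scores, which brings us back to the skepticism problem above.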

Hindsight is 20/20. There is no doubt that Target had catastrophic failures. It's easy to look back upon what happened and armchair-quarterback the situation. The pragmatic reality is that technology alone won't magically save your organization. If you don't have the right people/process/oversight around initiatives, they won't be successful.

Vendors who live in glass houses shouldn't throw stones. It didn't take long; I've already started hearing FireEye competitors speaking out about FireEye's role in the Target breach. As I mentioned above, this wasn't a technology failure: FireEye detected the malware. This was a people/process/oversight failure. In some respects, this reminds me of Bit9's "operational oversight" breach. I blogged about that last year and made the comment that Bit9's operational oversight was an operational reality for most organizations out there. So if you are a FireEye competitor with a similar technology that would be deployed in the same manner, chances are your technology would have suffered from the same operational oversight. Furthermore, as an analyst, I'm not encouraged when I hear competitors demeaning other vendors. I don't want to hear trash talking; you're not wrestlers, and this isn't the WWE. Talk to me about how you differentiate; don't chase your competition, disrupt the entire space with a novel approach.

Comments

Some people think that the Target security team likely received a large volume of such alerts on a daily basis, which would have made it tough to single out that threat as being particularly malicious.

This type of situation is very common in the industry. Many organizations just haven't been hit yet.

The latest published Data Breach Investigations Report from Verizon reported that most breaches were detected by external parties with whom the victim has no business relationship specific to detection services. Common examples are ISPs and intelligence organizations that track threat actors and, when appropriate, inform potential victims of suspicious activity.

Only 13% of breaches were detected by internal means, which points to a lack of effective internal detection. Most of that internal detection came from a regular employee who, in the course of their daily responsibilities, noticed something strange. Other means (like log reviews, IT audits, network intrusion detection systems, and fraud detection) detected only 5% of the breaches.

This tells me that we need to proactively secure sensitive data itself and not rely on monitoring systems to catch an attacker.
Chip-and-PIN cards will not help against most modern attacks. Attackers just move to the next point in the data flow to steal your identity data.

I read about retailers that are using best practices in an interesting report from the Aberdeen Group. The report revealed that “Over 12 months, data tokenization users had 50% fewer security-related incidents (e.g., unauthorized access, data loss, or data exposure) than tokenization non-users”.

I think that the Aberdeen approach can quickly address some of the urgent issues. The name of the study is “Tokenization Gets Traction”.

I also read in "Tokenization – Why, What, How and Who" from Money2020 that tokenization has been a hot topic lately. In a tokenization scheme, even if a hacker has access to several data pairs, the tokenization algorithm should be robust enough that the mapping cannot be reversed.

Ulf, thank you for the comments. I really got my intro to tokenization about four years ago when I was an SE working with clients on PCI compliance. It was interesting for clients, but there wasn't a ton of movement toward it. At Forrester we include tokenization as one of the options for "killing data."