In a panel discussion at EIC 2014, Roy Adar, Vice President of Product Management at CyberArk, brought up an interesting number: according to research, attacks start on average 200 days before they are detected. Given the distribution behind this average, some attackers might have been active for years before they were detected. And who knows whether all of them are ever detected at all.

How should you react to this? The answer has several elements. Protect your systems with multiple layers of security. Use anti-malware tools, even though they won’t catch every piece of malware or every attacker. Encrypt your sensitive information. Educate your employees. These and other “standard” measures are quite common. But there is at least one other thing you should do: analyze the behavior of users in your network.

I do not mean user tracking in the sense of “are they doing their job” (which is hard to implement in countries with strong workers’ councils); I am talking about identifying anomalies in their behavior. Attackers are characterized by uncommon behavior. Users might access far more documents than average, or than they did before. Accounts might be used at unusual times. Users might log in from suspicious locations. Sometimes it is not a single incident but a combination of things, possibly over a longer period of time, that is typical for a specific form of attack, especially in the case of long-running APTs (Advanced Persistent Threats).
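As an illustration, the anomaly signals described above can be expressed as simple checks against a baseline. The event format, baseline countries, business hours, and document threshold below are entirely hypothetical – in practice they would be derived from your own logs and policies – but the sketch shows the idea:

```python
from datetime import datetime

# Hypothetical log events: (user, timestamp, source country, documents accessed)
events = [
    ("alice", datetime(2014, 5, 12, 9, 30), "DE", 14),
    ("alice", datetime(2014, 5, 13, 3, 10), "RU", 480),
    ("bob",   datetime(2014, 5, 12, 10, 5), "DE", 9),
]

BUSINESS_HOURS = range(7, 20)            # assumed: 07:00-19:59 counts as normal
EXPECTED_COUNTRIES = {"DE", "AT", "CH"}  # assumed per-organization baseline
DOC_THRESHOLD = 100                      # assumed per-day document ceiling

def anomaly_flags(user, ts, country, doc_count):
    """Return the list of simple anomaly indicators a single event triggers."""
    flags = []
    if ts.hour not in BUSINESS_HOURS:
        flags.append("unusual time")
    if country not in EXPECTED_COUNTRIES:
        flags.append("suspicious location")
    if doc_count > DOC_THRESHOLD:
        flags.append("excessive document access")
    return flags

for user, ts, country, docs in events:
    flags = anomaly_flags(user, ts, country, docs)
    if flags:
        print(f"{user} @ {ts}: {', '.join(flags)}")
```

A single flag is rarely conclusive; as noted above, it is often the combination of several indicators over time that points to an attack.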

There is an increasing number of technologies available to analyze such patterns. Standard SIEM (Security Information and Event Management) tools are one approach; however, detecting anomalies purely on the basis of rules can be difficult. An increasing number of solutions therefore rely on more advanced pattern-matching technologies. Based on specific mathematical algorithms, these turn log events and other information into patterns (in fact, complex matrices) and analyze them for anomalies. There will be some noise in the form of false positives in the results, but that is true for rule-based analytics as well. Combining such analytical technologies can make a lot of sense: if you bring together specialized analytics for areas such as Privilege Management (for instance, CyberArk’s PTA), User Activity Monitoring, pattern-based analytics, and traditional SIEM, you can learn a lot about these anomalies and hence about the attacks that are already running and the attackers behind them.
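To make the matrix idea concrete, here is a minimal sketch – with invented users and feature values, and a plain z-score rather than any specific vendor’s algorithm – that condenses log events into per-user feature vectors and flags users whose behavior deviates strongly from the population baseline:

```python
import math

# Hypothetical per-user feature matrix derived from log events.
# Columns: logins per day, documents accessed, distinct hosts contacted.
feature_matrix = {
    "alice":   [4.0, 35.0, 3.0],
    "bob":     [5.0, 28.0, 2.0],
    "carol":   [3.0, 31.0, 4.0],
    "mallory": [22.0, 540.0, 19.0],  # behaves very differently
}

def column_stats(matrix):
    """Mean and population standard deviation for each feature column."""
    rows = list(matrix.values())
    n, dims = len(rows), len(rows[0])
    means = [sum(r[d] for r in rows) / n for d in range(dims)]
    stds = [math.sqrt(sum((r[d] - means[d]) ** 2 for r in rows) / n) or 1.0
            for d in range(dims)]
    return means, stds

def outliers(matrix, z_threshold=1.5):
    """Users whose worst per-feature z-score exceeds the threshold."""
    means, stds = column_stats(matrix)
    flagged = []
    for user, row in matrix.items():
        z = max(abs(v - m) / s for v, m, s in zip(row, means, stds))
        if z > z_threshold:
            flagged.append(user)
    return flagged

print(outliers(feature_matrix))
```

Real products use far richer features and more robust statistics, but the principle is the same: score deviation from a learned baseline rather than match fixed rules.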

From our perspective, all this is converging into a new discipline we call Real-Time Security Intelligence (RSI). There is a new report out on that topic. I also recently wrote another post on RSI.

Even if you feel it is too early to move towards RSI, you should focus on how to learn more about the attackers that are already inside your network. Understanding anomalies and patterns with new types of analytical technologies can help.

Adobe warned a few days ago that an internal server with access to its digital-certificate code-signing infrastructure had been hacked. As a result, at least two malicious files were distributed that were digitally signed with a valid Adobe certificate.

If you take the numbers published by Secunia, a security and patch management software vendor, Adobe ranks pretty high on the list of companies with reported vulnerabilities – especially considering that, in Adobe’s case, only two core products are involved (Adobe Reader and Adobe Flash Player), compared to the broad portfolios of Oracle or Microsoft. Looking at “genuine vulnerabilities”, Adobe ranks 5th, behind Oracle, Apple, Microsoft, and Google. The Secunia analysis also lists the Top 50 software portfolio, with Adobe Flash Player ranking 4th and Adobe Reader 8th. Unfortunately, these are the two programs within the top ten of that list with the highest number of exploited critical vulnerabilities.

Another aspect of Adobe from the security perspective is patch management. In Adobe’s case, this is cumbersome. Furthermore, with its last patch for the Adobe Flash Player, Adobe started to install Google Chrome and the Google Toolbar without user consent – at least that is what happened on my system. I had to uninstall both components manually afterwards.

So what we see is a mix of:

- a massive number of vulnerabilities

- a questionable approach to patch management

- successful attacks on critical internal security infrastructure

Does Adobe deal with this situation the way customers would expect? You might say “yes”, given that expectations might be very low. However, measured against what we should expect from a professional software vendor, there are massive shortfalls.

Did Adobe inform anyone promptly about the malicious files? No, they didn’t. The issue dates back to early July. Adobe claims that it took immediate internal action, including a clean-room reimplementation of the code-signing infrastructure. Maybe it should have acted earlier – to prevent such attacks, or at least to detect them when they happen, not only after malicious code appears on the Internet.

I just recently blogged about the security issue in Microsoft Internet Explorer. The Adobe approach to security management also falls rather obviously into the category of “security by obscurity”. I don’t think that this is the right way to act, especially for a software vendor whose products rank among the top ten in the average corporate software portfolio.

Taking all these points together, it is past time for Adobe to act far more professionally in its security management and its patch management. Open and timely information, a simplified patch management methodology, and minimal patches without bundled additional software are the minimum requirements – together with an internal IT security approach that can stand up to today’s “advanced persistent threat” types of attack.

Yesterday, news about a new trojan spread. The trojan is called Duqu or, more precisely, W32.Duqu. It appears to be based on Stuxnet code and is thus targeted at industrial automation equipment. However, unlike Stuxnet, the new trojan isn’t designed to sabotage industrial control systems but to steal data. It is therefore most likely just the precursor to the next Stuxnet-like attack. From what we know, Duqu was targeted at selected organizations, mainly in the area of software development for industrial automation. It performs espionage there, collecting information that might then be used in the next attack wave. Duqu appears to delete itself after 36 days.

Interestingly, Stuxnet used digital certificates that had been “stolen” beforehand. Duqu used other digital certificates, which seem to have been generated directly in the name of other companies, bypassing the security of CAs. This fits well with the current attacks on CAs – DigiNotar being the most prominent victim (and now out of business) – and other indicators.

The server in India that Duqu used to report information back to its creators has been blacklisted by its ISP and thus no longer works. However, chances are that there are more instances of Duqu and Duqu-like trojans either out there or on their way.

Duqu confirms two assumptions:

- Industrial automation is increasingly becoming a target for attackers – and Stuxnet was only the first of its type (to have been detected)

- Attacks are increasingly sophisticated – APTs aren’t a fairytale, they are real

The consequence is that not only business IT environments need adequate protection, but industrial environments as well – they might even need better protection. And where feasible, technical isolation of these networks is a pretty good idea: no network, no (online) attack. Beyond that, there is no reason to assume that you are safe from attacks, whatever precautions you take. It is therefore about being proactive at every stage – preventing attacks, identifying attacks, and dealing with attacks.

Some valuable information on this was provided in a recent KuppingerCole webinar – have a look at the webcast.