Tag Archives: Threat Intelligence

[Editor’s Note: In the article below, Ricardo Dias, a SANS GCFA gold-certified and seasoned security professional, demonstrates the usefulness of YARA, the Swiss Army knife for incident responders. This way you can get familiar with this versatile tool and develop more proactive and mature response practices against threats. ~Luis]

Intro

I remember back in 2011 when I first used YARA. I was working as a security analyst on an incident response (IR) team, doing a lot of intrusion detection, forensics and malware analysis. YARA joined the team's tool set with the purpose of enhancing preliminary static analysis of portable executable (PE) malware. Details from the PE header, imports and strings derived from the analysis were turned into YARA rules and shared within the team. Checking new malware samples against the rule repository was considerably faster than looking up analysis reports. Back then, concepts like the kill chain, indicators of compromise (IOC) and threat intelligence were still at their dawn.

In short, YARA is an open-source tool capable of searching for strings inside files (1). The tool features a small but powerful command line scanning engine, written in pure C, optimized for speed. The engine is multi-platform, running on Windows, Linux and Mac OS X. The tool also features a Python extension providing access to the engine via Python scripts. Last but not least, the engine is also capable of scanning running processes. YARA rules resemble C code and are generally composed of two sections: the string definitions and a mandatory boolean expression (the condition). Rules can be expressed as shown:
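For instance, a simple rule combining both sections (the rule name, strings and URL below are hypothetical, made up for illustration) might look like this:

```yara
rule suspicious_downloader
{
    meta:
        author = "IR team"
        description = "Example: PE file referencing a download API and URL"

    strings:
        $mz  = { 4D 5A }                 // "MZ" magic at the start of a PE file
        $api = "URLDownloadToFileA" ascii
        $url = "http://bad.example.com/drop.exe" ascii wide

    condition:
        $mz at 0 and ($api or $url)
}
```

The condition reads naturally: the file must start with the MZ magic and contain at least one of the two suspicious strings.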

The lexical simplicity of a rule and its boolean logic make it a perfect IOC. In fact, ever since 2011 the number of security vendors supporting YARA rules has been increasing, meaning that the tool is no longer limited to the analyst's laptop. It is now featured in malware sandboxes, honey-clients, forensic tools and network security appliances (2). Moreover, with a growing security community adopting the YARA format to share IOCs, one can easily foresee a wider adoption of the format in the cyber defence arena.

In the meantime YARA has become a feature-rich scanner, particularly with the integration of modules. In essence, modules enable very fine-grained scanning while maintaining rule readability. For example, with the PE module, specially crafted for handling Windows executable files, one can create a rule that matches a given PE section name. Similarly, the Hash module allows the creation of hashes (e.g. MD5) based on portions of a file, say for example a section of a PE file.
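As a sketch of what that looks like (the section name and MD5 value below are made up for illustration), a rule using both modules could be written as:

```yara
import "pe"
import "hash"

rule example_packed_pe
{
    condition:
        // PE module: match if any section carries the given name
        for any i in (0..pe.number_of_sections - 1) :
            (pe.sections[i].name == ".UPX0")
        or
        // Hash module: MD5 of the first 1024 bytes equals a known value
        hash.md5(0, 1024) == "0123456789abcdef0123456789abcdef"
}
```

Note how the modules keep the rule readable: no strings section is even needed, since the condition works directly on parsed PE structures and computed hashes.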

YARA in the incident response team

So how exactly does a tool like YARA fit into the incident response team? Perhaps the most obvious answer is to develop and use YARA rules when performing static malware analysis; after all, this is when the binary file is dissected, disassembled and understood. This gives you the chance to cross-reference the sample with previous analyses, saving time in case of a positive match, and to create new rules from the details extracted during the analysis. While there is nothing wrong with this approach, it is still focused on a very specific stage of the incident response. Moreover, if you don't perform malware analysis you might end up ruling YARA out of your tool set.

Let's look at the spam analysis use case. If your team analyses suspicious email messages as part of its IR process, there is a good chance you will stumble across documents featuring malicious macros or websites redirecting to exploit kits. A popular tool for analysing suspicious Microsoft Office documents is olevba.py, part of the oletools package (3); it uses YARA when parsing OLE embedded objects in order to identify malware campaigns (read more about it here). When dealing with exploit kits, thug (4), a popular low-interaction honey-client that emulates a web browser, also uses YARA for exploit kit family identification. In both cases, exchanging YARA rules between IR teams greatly enhances both the triage and the analysis of spam.

Another use case worth mentioning is forensics. Volatility, a popular memory forensics tool, supports YARA scanning (5) in order to pinpoint suspicious artefacts like processes, files, registry keys or mutexes. Traditionally, YARA rules created to parse memory objects benefit from a wider range of observables when compared to static file rules, which need to deal with packers and cryptors. On the network forensics side, yaraPcap (6) uses YARA to scan network capture (PCAP) files. As in the spam analysis use case, forensic analysts gain an advantage by using YARA rules to leverage their analysis.

Finally, another noteworthy use case is endpoint scanning. That's right, YARA scanning at the client computer. Since the YARA scanning engine is multi-platform, signatures developed on Linux can be used on a Windows operating system without any problem. The only problem one needs to tackle is how to distribute the scan engine, pull the rules and push the positive matches to a central location. Hipara, a host intrusion prevention system developed in C, is able to perform YARA file-based scans and report results back to a central server (7). Another solution would be to develop a Python script featuring the YARA module along with REST libraries for the pull/push operations. The process has been documented, including conceptual code, in the SANS paper “Intelligence-Driven Incident Response with YARA” (read it here). This use case closes the circle of IOC development, since it enters the realm of live IR, delivering an important advantage in the identification of advanced threats.
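To make the idea concrete, here is a rough sketch of such an agent. The server endpoints and report format are hypothetical, and the scan step simply shells out to the yara command line tool rather than using the Python module:

```python
import json
import subprocess
import urllib.request

# Hypothetical central server endpoints (adjust to your own infrastructure).
RULES_URL = "https://ir-server.example.com/rules.yar"
REPORT_URL = "https://ir-server.example.com/api/matches"

def pull_rules(dest="rules.yar"):
    """Download the latest rule set from the central repository."""
    urllib.request.urlretrieve(RULES_URL, dest)
    return dest

def parse_matches(output):
    """Parse yara CLI output; each line is '<rule_name> <matched_file>'."""
    matches = []
    for line in output.splitlines():
        rule, _, path = line.partition(" ")
        if rule and path:
            matches.append({"rule": rule, "file": path})
    return matches

def scan(rules_path, target):
    """Recursively scan a directory with the yara command line tool."""
    result = subprocess.run(["yara", "-r", rules_path, target],
                            capture_output=True, text=True)
    return parse_matches(result.stdout)

def push_matches(hostname, matches):
    """Report positive matches back to the central server as JSON."""
    body = json.dumps({"host": hostname, "matches": matches}).encode()
    req = urllib.request.Request(REPORT_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)
```

A scheduled task on the endpoint would then call pull_rules, scan and push_matches in sequence, keeping the client logic trivial while rule management stays central.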

Conclusion

The key point lies in the ability of IR teams to introduce procedures for YARA rule creation and use. Tier 1 analysts should be instructed on how to use YARA to enhance incident triage and to provide rule feedback, concerning false positives and fine-tuning, to Tier 2 analysts. Additionally, a repository should be created in order to centralize the rules and ensure that up-to-date rules are used. Last but not least, teams should also agree on a rule naming scheme, preferably one reflecting the taxonomy used for IR. These are some of the key steps for integrating YARA into the IR process and for preparing teams for the IOC sharing process.

Last week I had the opportunity to attend SANS DFIR Prague, where I completed the SANS FOR578 course “Cyber Threat Intelligence” (CTI) with Robert M. Lee. Robert is one of the co-authors of the course and a brilliant instructor who really knows his stuff. Everything stands or falls with the quality of the instructor, and I believe Robert gave us (students) a great learning experience with great interactions and discussions. Among other things, Robert is the CEO of the security company Dragos Security and has served in the US Air Force, which allows him to talk genuinely about the “intelligence” topic.

Overall this was a five-day course that immerses the student in the new and emerging field of CTI. During the five days we lived, ate and breathed being a CTI analyst. Being a CTI professional is not an easy task, and it's not in five days that you can expect to become one. However, in my opinion, if someone has the desire, as well as the ability, this course can give them the means. I'm sure this course gave me important skills and competencies in this new, emerging field. One key takeaway from the training is that it gives you the foundations to create a threat intel capability in your organization, enabling security personnel to develop more proactive and mature response practices against threats and to move defenses higher up the kill chain.

The first day was a comprehensive introduction to the new Cyber Threat Intelligence (CTI) domain, with a wide range of topics and terminology being covered. What is threat intelligence? Why should organizations adopt it? What is the value? What is the difference between a consumer and a producer of CTI? What are the different types of CTI? In addition, background on the intelligence doctrine and its life cycle was also discussed. The afternoon was spent on the different frameworks and models that you can use to create consistent and repeatable CTI outputs. The Kill Chain, the Diamond Model, the Courses of Action Matrix and the Detection Maturity Model were the ones most covered.

Day two was all about reinforcing the models presented on day one, with a special focus on the Kill Chain model. Lots of exercises, supported by a network intrusion scenario, in which we (students) needed to perform different tasks to put the theory from day one into practice. The way intrusion attributes, properties and artifacts are mapped to the Kill Chain, the Diamond Model and the Courses of Action was really useful. Because the frameworks discussed are complementary, they can be combined to produce a multi-dimensional analysis of the intrusion. I think this multi-dimensional approach to intrusions gives great insight into the adversary. Although a time-consuming exercise, it was great to get a feeling for what a CTI analyst might do in an organization with high security risk that needs mature and dedicated teams to perform this type of work.

By leveraging the intelligence gained over time during the analysis of multiple intrusions, we start to get an understanding of commonalities and overlapping indicators. Mapping these commonalities and indicators to the intrusion kill chain and the diamond model results in a structured way to analyze multiple intrusions. By repeating this process we can characterize intruder activity by determining the tactics, techniques and procedures of how the attackers operate, i.e., perform a campaign analysis. This was day three. Meticulous work that is built over time and needs a great amount of support from your organization, but once executed it will produce great insight into the adversary. In terms of tools, the exercises relied heavily on Excel and the fantastic open-source Maltego.

Day four was focused on the different collection, sharing and ingestion methods of threat intelligence. The primary collection method discussed was threat feeds. Other collection methods, such as OSINT and threat intel produced inside the organization or received through circles of trust, were also discussed. For sharing, a key takeaway is that partnerships with strong non-disclosure agreements are very efficient. Still, in the sharing realm, delivering context is crucial in order to make the intelligence actionable. Furthermore, we discussed the roles of the different ISACs and other organizations. Regarding ingestion, the material has very good coverage of the different standards and protocols that have been developed in recent years to collect, share and consume technical information in an automated way. The focus was on STIX and TAXII. We also reviewed other methods and standards such as OpenIOC and YARA rules. In regard to tools and exercises, we had the chance to play with Recorded Future and ThreatConnect and to develop OpenIOC and YARA rules. SANS's posture has always been vendor neutral, but I must say the Recorded Future demo for OSINT is well worth it and the tool is really amazing!

The material on day five is more abstract, with a special focus on how people – analysts – reach conclusions. For example, we discussed the difference between observations and interpretations and how to construct assessments. There was a great amount of material about cognitive biases and how they might influence the work performed by an analyst. During this day we were also exposed to the analysis of competing hypotheses (ACH) methodology by former CIA analyst Richards J. Heuer, Jr. The exercises were really interesting because we had to evaluate hypotheses against the evidence we found during the intrusion analysis of the different scenarios. By the end of the day we immersed ourselves in the topic of attribution and discussed nation-state capabilities and the different cases that have become known in the industry.

Of course, apart from the training, it was great to attend the DFIR Summit, absorb information, play DFIR NetWars and, more importantly, meet new people, share experiences and see good old friends!

The traditional security monitoring and incident response (IR) capability that enterprises have used over the last decade has fallen behind. It is a consensus across the IT security industry that we need a more robust, capable and efficient security monitoring and IR framework. The new framework should enable us to combine security and intelligence functions: an intelligence-driven security that allows us to plan for, manage, detect and respond to all categories of threats, even as they become more frequent and severe. In other words, we want to maximize the organization's effectiveness and efficiency in blocking, detecting and responding to attacks. How? By introducing the threat intelligence security function into the traditional security stack, we can do more and better.

What is Cyber Threat Intelligence?
Threat intelligence is a recent paradigm in the IT security field that continues to gain a lot of traction due to a change of focus in the risk equation from the vulnerability to the threat. Tracking the threats that are specific to your industry, organization or region is key to minimizing the damage an attack can cause.

On the one hand we have strategic threat intelligence: a capability that needs processes, tools and people to build an understanding of the attacker's capabilities and intents. It is normally delivered through reports that are produced by humans and consumed by humans, and it is the most expensive and hardest to produce. It provides information to support well-informed decisions of long-lasting importance, such as which policies and processes should change, or what changes one should accommodate in the security infrastructure to adapt to the new threat landscape. From a well-established and mature strategic threat intelligence practice you should be able to get answers to questions like: Who is your potential adversary? What is the adversary's capability to cause you harm? Do they have the intent to cause harm? Where are you vulnerable? How could anyone harm your organization if they wanted to do so?

On the other hand, we have tactical threat intelligence: a capability that aids the prevention, detection and response competencies with real-time threat data that is consumed across different systems and functions. Data such as IP addresses, domain names, URLs, email addresses, hash values, HTTP user agents, registry keys, etc. – remnant pieces of information left by an attacker that can be used to identify threats or malicious actors. These pieces of information are nowadays called indicators of compromise and can, for example, be used to search for and identify compromised systems. This threat data is tactical threat intelligence and has a limited life span. Tactical threat intelligence should be disseminated, integrated and consumed in an automated fashion. This type of threat intelligence is the cheapest and easiest to create.

What is the value of Cyber Threat Intelligence?
At the strategic level, the value proposition of threat intelligence might include:

Make well-informed decisions about where you are spending your security dollars.

Create comprehensive insight into the threats by developing facts, findings and forecasts about threat actors' capabilities, motives and tradecraft.

Create recommended courses of action on how to adapt to the evolving threat landscape in order to reduce and mitigate risks.

Being able to plan for, manage and respond to all categories of threats – even as they become more frequent and more severe.

Develop situational awareness about capabilities and intents of your adversaries.

Know your adversary and what they are looking for.

At the tactical level, the value proposition of threat intelligence might include:

Minimize the risk of attacks that could result in lost revenue, public embarrassment, and regulatory penalties.

Improve the effectiveness and efficiency of security monitoring capabilities by integrating and matching threat intel data.

What are the sources of Cyber Threat Intelligence?
The sources may vary depending on whether you are a consumer or a producer of threat intelligence. From a consumer perspective – where the majority of organizations fit in – they mainly fall into two categories: the open sources, which are free and can be retrieved by anyone, and the closed sources, which are commercial or have restricted access. The latter often require a paid subscription or membership in a closed circle of trust. Either way, they fall under tactical threat intel when the data is delivered to the consumer through feeds of indicators of compromise, or under strategic threat intel when the deliverable is a report about the capabilities and intents of malicious actors.

From a producer perspective the sources are even broader, drawing on different disciplines. Normally, if you are a service provider there is an incentive to produce intelligence using the widest variety of sources, methods and disciplines, mainly because service providers do it for profit. iSight Partners, Dell SecureWorks, Mandiant and CrowdStrike are good examples of service providers that create strategic and tactical threat intelligence combined. They have dedicated teams of researchers that perform all kinds of activities, some of which might almost be considered to fall under the intelligence agency or law enforcement umbrella. Examples of sources used across producers are honeypots and spam traps, which gather intelligence through the knowledge and information obtained by observing, analyzing and investigating the attackers lured to them. Another source could be the output of static and dynamic malware analysis.

Back in 2011, market research companies like IDC, Forrester and Frost & Sullivan were producing market analyses about the growth of cyber threat intelligence services and the like. Their analyses stated a double-digit growth year over year. Their projections seem reasonable and their current estimations continue on this trajectory. Nowadays, cyber threat intelligence continues to gain a lot of traction and hype across IT security. However, as in many other cases in IT security, the industry is adopting the jargon used across government agencies and military forces. That being said, I wanted to write about cyber threat intelligence, but I thought it would be good to first read and understand what intelligence means in the intelligence agency and military domains in order to have a good foundation before applying it to cyber. Below is a short summary I made of what intelligence is and what the 5 steps of the intelligence cycle are.

What is Intelligence?

Intelligence is the product that results from a set of actions performed on information. Traditionally it has been used across governmental organizations for the purpose of national security. The actions are: collect, analyze, integrate, interpret and disseminate. The final product of intelligence is value-added, tailored information that gives an organization, or its adversary, the ability to make conclusions. The enterprise might seek information about the threat actors' means, motives and capabilities. On the other hand, the adversary might seek information about intellectual property (patents, copyrights, trademarks, trade secrets, etc.) from your company in order to gain an economic advantage or to subvert its interests. In either case, the information produced gives an edge, a competitive advantage, to you or to your adversary.

The information produced contains facts, findings and forecasts that support your goals or the adversary's. There are two categories of intelligence: one is strategic and the other is operational. Strategic intelligence means information produced to support well-informed decisions of long-lasting importance. Strategic intelligence is broader and often requires information concerning different fields. Operational intelligence has a limited life span, is meant to be used rapidly, and is concerned with current events and capabilities.

What are the 5 steps of the Intelligence cycle?

Planning and direction – This is the first step. It's here where the requirements and priorities are set. The capability to produce intel is limited, like any other resource, which means we want to maximize its production with a constant number of resources. Among others, a methodology to define the requirements might be the “Five W's”. It's also in this step that we define the areas where the intelligence produced will have the most impact and make the most contribution. During planning it is fundamental to specify which categories of intelligence will be gathered, e.g. OSINT (Open Source Intelligence). In addition, the processes, people and technology to support the different steps in the cycle need to be established with clear roles and responsibilities.

Collection – The second step includes all the different activities, mainly research, that involve the collection of data to satisfy the requirements that were defined. The collection can be done via technical or human means and involves gathering data from a variety of sources. In the military and intelligence community the sources normally used are people, objects, emanations and records. These sources span the different collection disciplines known as HUMINT, IMINT, MASINT, SIGINT, OSINT and others. Once collected, information is correlated and forwarded for processing and production.

Processing and exploitation – In the third step, the collected raw data starts to be interpreted, translated and converted into a form suitable for the consumers of the intelligence. The raw data becomes information.

Analysis and production – The refinement of the information produced in the previous step and the fusion of the information processed from the different intelligence disciplines are the key tasks performed during this step. The analysis consists of facts, findings and forecasts that describe the element of study and allow the estimation and anticipation of events and outcomes. The analysis should be objective, timely and, most importantly, accurate. To produce intelligence objectively, analysts apply four basic types of reasoning: induction, deduction, abduction and the scientific method. Furthermore, because bias and misperceptions can influence the analysis, the analyst should be aware of the different analytical pitfalls. The outcome is value-added, actionable information tailored to a specific need. For example, in the United States, creating finished intelligence for national and military purposes is the role of the CIA.

Dissemination and integration – Essentially, this step consists of delivering the finished product to the consumers who requested the information. This can be done using a wide range of formats, in a manual or automated manner.

20 days have passed since my last post about how to do a live memory acquisition of a Windows system for malware hunting and forensics purposes. In that article, I explained the details of how to create a collector, collect the data, and import the data into Mandiant Redline. The second part will be about the investigation and how to look for threats using indicators of compromise (IOCs). However, before part II, I would like to give a brief introduction to IOCs.

For those who have never heard of indicators of compromise: they are pieces of information that can be used to search for and identify compromised systems. These pieces of information have been around for ages, but the security industry is now using them in a more structured and consistent fashion. All types of companies are moving away from the traditional way of handling security incidents – wait for an alert to come in and then respond to it. The novel approach is to take proactive steps by hunting evil in order to defend their networks. In this new strategy the IOCs have a key role. When someone compromises a system they leave evidence behind. That evidence, artifact or remnant piece of information left by an intrusion can be used to identify the threat or the malicious actor. Examples of IOCs are IP addresses, domain names, URLs, email addresses, file hashes, HTTP user agents, registry keys, a service configuration change, a deleted file, etc. With this information one can sweep the network and endpoints and look for indications that a system might have been compromised. For more background you can read Lenny Zeltser's summary. Will Gragido from RSA explained it well in his 3-part blog here, here and here. Mandiant also has this and this nice articles about it.

Now, different frameworks and taxonomies exist in the security industry to deal with IOCs. These frameworks are important in order to share information in a consistent, scalable, automated and repeatable way across different organizations. One initiative is OpenIOC, sponsored by Mandiant. OpenIOC uses an extensible XML schema that allows one to describe the technical characteristics of an intrusion or malicious actor. Another initiative comes from the IETF working group that defined two standards: the Incident Object Description Exchange Format (IODEF), described in RFC 5070, for describing the observables of security incidents, and the Real-time Inter-network Defense (RID), described in RFC 6545, used to transport and exchange the IODEF information. Yet another initiative comes from MITRE, which developed CybOX, STIX and TAXII, all free for the community and highly granular. To read more about these initiatives, Chris Harrington from the EMC Critical Incident Response Center has a nice presentation about them. Another resource is a very interesting study released last October by ENISA, named “Detect, SHARE, Protect – Solutions for Improving Threat Data Exchange among CERTs”.
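To give a feel for the OpenIOC format, a stripped-down definition (the identifier, hash value and IP address below are invented for illustration, and several mandatory metadata elements are omitted) looks roughly like this:

```xml
<ioc xmlns="http://schemas.mandiant.com/2010/ioc" id="example-0001">
  <short_description>Example trojan indicators</short_description>
  <definition>
    <Indicator operator="OR">
      <IndicatorItem condition="is">
        <Context document="FileItem" search="FileItem/Md5sum" type="mir"/>
        <Content type="md5">0123456789abcdef0123456789abcdef</Content>
      </IndicatorItem>
      <IndicatorItem condition="is">
        <Context document="PortItem" search="PortItem/remoteIP" type="mir"/>
        <Content type="IP">203.0.113.10</Content>
      </IndicatorItem>
    </Indicator>
  </definition>
</ioc>
```

Tools that understand OpenIOC, such as Redline, evaluate the boolean tree inside the definition element against the data collected from a host.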

That being said, we can now start using these IOCs to defend our networks. One way is by gathering information from investigations made by security researchers or vendors that contain actionable intelligence. For example, back in September 2013 the report on the “ICEFOG: A tale of cloak and three daggers” campaign was released by Kaspersky. This report contains great technical detail and a significant amount of actionable information. Another example was the NetTraveler campaign, which was disclosed in June 2013. That report describes a piece of malware that was used to successfully compromise more than 350 high-profile victims across 40 countries. The report is well written and contains great technical detail. In chapter 5 it presents a huge list of IOCs to help detect and eradicate this threat. Following that, Will Gibb from Mandiant converted the information from the NetTraveler report into the OpenIOC format. With these IOCs one can import them into Redline. Of course this was an effort made by a vendor to incentivize the usage of its format, but others could use any other standard or framework to collect these observables and turn them into actionable information.

On my next post I will show how to import IOCs in OpenIOC format into Redline and find Evil on my wife’s laptop!

The CVE November Awareness Bulletin is an initiative that aims to provide further intelligence and analysis concerning the latest vulnerabilities published by the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD), and the IDS vendors' coverage for these vulnerabilities.

Common Vulnerabilities and Exposures (CVE) is a public list of common names made available by MITRE Corporation for vulnerabilities and exposures that are publicly known.

This is the most popular list of vulnerabilities and is used as a reference across the whole security industry. It should not be considered absolute but due to the nature of its mission and the current sponsors – Department of Homeland Security (DHS), National Cybersecurity and Communications Integration Center (NCCIC) – it is widely adopted across the industry.

Based on this public information, I decided to take a look at what was released during the month of November. There were 389 vulnerabilities published, of which 56 were issued with a Common Vulnerability Scoring System (CVSS) score of 8 or higher – CVSS provides a standardized method for rating vulnerabilities, using a scoring system from 1 to 10 based on their different properties. For these security vulnerabilities, I compared the latest signature updates available from products that have a significant market share, i.e., Check Point, TippingPoint, SourceFire, Juniper, Cisco and Palo Alto. The result is that SourceFire has the best coverage with 23%. TippingPoint, Check Point and Juniper rank second with 16%. Cisco ranks third with 12%, followed by Palo Alto with 0%.

The following graph illustrates the mapping between the CVEs published in November with a CVSS score equal to or higher than 8, by vulnerability type, and the vendor coverage:

In addition to looking at all the vulnerabilities released, it is also essential to look in detail at specific coverage, such as for Microsoft product vulnerabilities. On the 12th of November the Microsoft Security Bulletin (a.k.a. Patch Tuesday) announced 25 vulnerabilities. Of these, 12 have a CVSS score equal to or higher than 8. Their vendor coverage is shown in the following table:

The vendors analyzed provided signatures on the same date (12 November) or a few days later. The mentioned signatures and patches should be applied as soon as possible, but you should also fully evaluate them (when possible) before applying them to production systems.

In addition, following signature update deployment, you should always check which signatures have been enabled by default. You should also evaluate the impact on your environment of the CVEs that don't have coverage.

Bottom line: the vendors analyzed respond quickly, but the coverage should be broader. In November we saw 56 vulnerabilities with a CVSS score of 8 or higher, but only 23% of them have coverage in the best case (SourceFire). This means 77% of the published vulnerabilities don't have coverage. Regarding the vendor response to the Microsoft Security Bulletin Summary for November 2013, the coverage is better and goes up to 100% in the best case (SourceFire). It is interesting to note that some of these vulnerabilities are related to software that doesn't have a significant market share, so even if the vendors had 100% coverage it would not apply to all environments. Furthermore, the likelihood of these vulnerabilities being successfully exploited should also be considered, since some of them could be very hard to pull off. So it's key that you know your infrastructure, your assets and, above all, where your business crown jewels are. Then you will be better able to protect your intellectual property and determine what the impact will be if it gets disclosed, altered or destroyed.

The CVE October Awareness Bulletin is an initiative that aims to provide further intelligence and analysis concerning the latest vulnerabilities published by the National Institute of Standards and Technology (NIST) National Vulnerability Database (NVD), and the IDS vendors' coverage for these vulnerabilities.

Common Vulnerabilities and Exposures (CVE) is a public list of common names made available by MITRE Corporation for vulnerabilities and exposures that are publicly known.

This is the most popular list of vulnerabilities and is used as a reference across the whole security industry. It should not be considered absolute but due to the nature of its mission and the current sponsors – Department of Homeland Security (DHS), National Cybersecurity and Communications Integration Center (NCCIC) – it is widely adopted across the industry.

Based on this public information, I decided to take a look at what was released during the month of October. There were 582 vulnerabilities published, of which 78 were issued with a Common Vulnerability Scoring System (CVSS) score of 8 or higher – CVSS provides a standardized method for rating vulnerabilities, using a scoring system from 1 to 10 based on their different properties. For these security vulnerabilities, I compared the latest signature updates available from products that have a significant market share, i.e., Check Point, TippingPoint, SourceFire, Juniper, Cisco and Palo Alto. The result is that Juniper, SourceFire and TippingPoint have the best coverage with 13%. Check Point and Cisco rank second with 12%, while Palo Alto comes last with 1% coverage.

The following graph illustrates the mapping between the CVEs published in October with a CVSS score equal to or higher than 8, by vulnerability type, and the vendor coverage:

In addition to looking at all the vulnerabilities released, it is also essential to look in detail at specific coverage, such as for Microsoft product vulnerabilities. On the 8th of October the Microsoft Security Bulletin (a.k.a. Patch Tuesday) announced 27 vulnerabilities. Of these, 14 have a CVSS score equal to or higher than 8. Their vendor coverage is shown in the following table:

The vendors analyzed provided signatures on the same date (8 October) or a few days later. The mentioned signatures and patches should be applied as soon as possible, but you should also fully evaluate them (when possible) before applying them to production systems.

In addition, following signature update deployment, you should always check which signatures have been enabled by default. You should also evaluate the impact on your environment of the CVEs that don't have coverage.

Bottom line: the vendors analyzed respond quickly, but the coverage should be broader. In October we saw 78 vulnerabilities with a CVSS score of 8 or higher, but only 13% of them have coverage in the best case (SourceFire, Juniper and TippingPoint). This means 87% of the published vulnerabilities don't have coverage. Regarding the vendor response to the Microsoft Security Bulletin Summary for October 2013, the coverage is better and goes up to 30% in the best case (Juniper, SourceFire and TippingPoint). It is interesting to note that some of these vulnerabilities are related to software that doesn't have a significant market share, so even if the vendors had 100% coverage it would not apply to all environments. Furthermore, the likelihood of these vulnerabilities being successfully exploited should also be considered, since some of them could be very hard to pull off. So it's key that you know your infrastructure, your assets and, above all, where your business crown jewels are. Then you will be better able to protect your intellectual property and determine what the impact will be if it gets disclosed, altered or destroyed.