Prioritizing Vulnerability Prioritization

For many victims of an information security breach, vulnerability prioritization becomes an exercise forced upon them after the fact. “Oops, that one was serious, we should have prioritized it!” is the lament heard afterwards.

This little evil has many roots. FUD, willful ignorance, or sometimes just plain old gross negligence number among the more mundane explanations, and for these and related causes there is no technical or strategic remedy or prescription, social problems that they are.

But for one of the remaining primary reasons – not knowing what or how to prioritize – there is always the good, tried and tested remedy known colloquially to us moderns as “education”.

The first step towards understanding and prioritizing vulnerability prioritization, as confusing and counterintuitive as it sounds, is the realization that this is not an engineering problem.

I can already hear the loud and booming voices of unreason: “Poppycock! Of course it’s an engineering problem.” So please let me elaborate. Naturally, the process of analyzing and remediating vulnerabilities is an engineering task, and the methods, approaches and tools required are of course also of the engineering type. But the actual underlying driver for vulnerability management is not an engineering one. The driver is risk.

The reason why vulnerabilities have to be patched at all is that they are actively exploited and abused by parties with malicious intentions. This driver is external – a factor that is unknown and hard to quantify – and it is not just a theoretical one. It is based on developments outside of your own control, with a pace and path that are unpredictable. The only aspects you can control are how and what you patch, and what failsafe mechanisms you provide for the vulnerabilities that remain, including zero-days.

The risk is that while you have an unpatched vulnerability, someone will actively exploit it. Veterans from yesteryear who still believe in security through obscurity – i.e., who do not think themselves important enough to present a target – have entirely failed to take note of, learn from and act on the developments in the threat landscape over the last decade, and should consider changing careers from security engineering to something more suitable for that sort of reasoning – like astrology. Malware, script kiddies, automated scanning and a whole plethora of other threats are blind and frustratingly deaf to that argument.

The average company looks at vulnerability remediation prioritization as a task whereby X assets need Y patches applied in Z amount of time, where Z is a period denoted in months: we have to patch 200 systems with 1,976 patches in six months. The period is usually based either on a compliance requirement (a sure sign that the underlying reason for the compliance requirement is only poorly understood) or on a calculation of what is deemed manageable.

When we consider the actual driver, the mental mistakes inherent in the approach just outlined are almost too numerous to poke fun at in detail, but I will do my utmost to dismantle the most glaring ones.

The first fallacy is that by approaching the topic with a date schedule, every single vulnerability is treated as equal. They are not. One can have no practical impact at all; another can cost you the company’s reputation, intellectual property, customer data, or your job.

Prioritizing vulnerability remediation should be based on the severity and impact of the vulnerability and – this next part is important – specifically on how it relates to and can affect you in your environment. There are very few solutions that can do that for you. They can assist you with this process by providing tools and mechanisms to identify and manage your assets and associated risks, but they are not telepathic or magical. Whether the affected assets are on the perimeter and publicly accessible, whether they are located in the research and development facility, how trivial it is to execute the attack, and whether successful exploitation results in administrator privileges: these are the criteria that determine the true risk and provide a clear picture of what needs to be done first and with urgency.
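As a rough illustration of weighting a generic severity score by these environmental criteria, consider the following sketch. The `Vulnerability` fields and the multiplier values are illustrative placeholders I have chosen for this example, not a standard or a product feature; any real scheme would tune them to the organization.

```python
from dataclasses import dataclass

@dataclass
class Vulnerability:
    cve_id: str
    base_severity: float       # generic score, e.g. CVSS base, 0.0-10.0
    internet_facing: bool      # asset on the perimeter, publicly accessible?
    sensitive_asset: bool      # e.g. R&D facility, customer data store
    trivial_to_exploit: bool   # attack is trivial to execute?
    yields_admin: bool         # exploitation grants administrator privileges?

def contextual_risk(v: Vulnerability) -> float:
    """Weight the generic severity by environment-specific factors.

    Multipliers are illustrative placeholders, not calibrated values.
    """
    score = v.base_severity
    if v.internet_facing:
        score *= 1.5
    if v.sensitive_asset:
        score *= 1.3
    if v.trivial_to_exploit:
        score *= 1.2
    if v.yields_admin:
        score *= 1.2
    return min(score, 25.0)  # cap so rankings remain comparable
```

The point of the sketch is the ordering it produces: a medium-severity flaw on a public-facing, trivially exploitable asset can outrank a nominally higher-severity flaw buried deep inside the network.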

This should be combined with situational awareness – sadly a misused and abused buzzword and a misunderstood tool, but one that cannot be disregarded. When it comes to vulnerability management, two situational awareness factors are of the highest importance, especially when attempting to prioritize. The first is the question of whether a vulnerability is a zero-day, i.e., whether the vendor has released a patch or mitigating workaround.

This may confuse some people, but the data required to determine whether you are affected should still be found in your vulnerability management solution or framework. A good vulnerability management solution should inform you about zero-days even if it cannot actively check for them. If a given solution only adds a detection once a patch or fix has been released, you will need another source to inform you about zero-days, whether a commercial feed subscription or advisory mailing lists.

The second is exploit availability. It has to be said at this point that even though the security industry has become very good at obtaining this type of information, many vulnerabilities are exploited long before an exploit becomes publicly available, if on a far smaller scale, so there should also be general approaches for dealing with zero-days, such as log analysis. But exploit availability should trigger the highest level of risk mitigation. From that point onward, it is no longer a hypothetical scenario; it is a potential one.

Note that you only need to know that an exploit is available – you do not need to exploit it yourself. Realistically, you do not want to add even more overhead to a time-critical process, particularly overhead that requires a very specialized skillset and adds little actual value. It is also nigh on impossible to cover every vulnerability with a proof-of-concept exploit, especially in a single solution. That budget is better spent elsewhere to greater effect. A penetration test has its time and place, but that’s not here. You just need to be aware that a working exploit is available for a vulnerability.
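In mechanical terms, "be aware an exploit exists" reduces to an intersection between your open vulnerabilities and whatever set of CVE identifiers your exploit-intelligence source reports. The sketch below assumes such a feed has already been parsed into a plain set of IDs; the function and data are hypothetical, not any vendor's actual API.

```python
def escalate(schedule, exploited_feed):
    """Pull vulnerabilities with a known public exploit out of the regular
    patch cycle and into an immediate-action queue.

    schedule:        dict mapping CVE ID -> planned remediation bucket
                     (e.g. "Q3") -- a placeholder structure.
    exploited_feed:  set of CVE IDs your feed or advisory subscription
                     reports as having a working public exploit.
    """
    immediate = {cve for cve in schedule if cve in exploited_feed}
    regular = {cve: bucket for cve, bucket in schedule.items()
               if cve not in immediate}
    return sorted(immediate), regular
```

Run against the feed on every refresh, this turns exploit availability from a news item into a concrete scheduling event.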

Now to the second great fallacy in the standard approach as it is sadly often practiced. Whatever periods are chosen to assess vulnerabilities and apply mitigation, they need to be as short as possible. The longer it takes, the larger the window of opportunity for an attacker to strike. This is anecdotal, but doing this quarterly, bi-annually or even annually is common. I know of at least one large IT services company that only patches its Linux systems quarterly, for example. Not only does this open a risk window of potentially two months and 30 days, it is also deducible externally: an attacker can work out the cadence by careful scanning. This approach is based entirely on cost, not on risk. It entirely misses the point and frames the problem incorrectly.
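The "two months and 30 days" figure is simple worst-case arithmetic, sketched below under the assumption of a fixed patch cycle: a vulnerability that is disclosed just after a patch run waits almost a full cycle, plus however long the patch itself takes to roll out.

```python
def worst_case_exposure_days(cycle_days, rollout_lag_days=0):
    """Worst-case exposure window under a fixed patch cadence.

    A flaw disclosed the day after a patch run waits cycle_days - 1 days
    for the next run, plus any lag actually rolling the patch out.
    """
    return cycle_days - 1 + rollout_lag_days
```

A quarterly cycle (~90 days) yields an 89-day worst case – the roughly "two months and 30 days" cited above – while a weekly cycle shrinks it to six days before rollout lag.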

It is not a question of how many patches you can apply in a given period. That is not really up to you – determining how many patches there are, and how many you must be able to apply, is precisely what you do vulnerability management for.

If this exceeds your available resources, then you need to prioritize actively and ensure you have a living process to address the really critical vulnerabilities in the interim, in a manner that reflects the actual risk and urgency. There are different variations on how to do this, but all rely on a classification and analysis of the involved assets and of the impact of the vulnerability itself – how it manifests, how it can be used and what the consequences are. But if you do not prioritize the prioritization itself, you are not really prioritizing at all.
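One minimal way to express that living process, assuming you already have a risk score per vulnerability (however derived): rank everything, patch what capacity allows, and explicitly track the remainder for interim compensating controls rather than silently deferring it. The structure and names here are my own illustration, not a prescribed method.

```python
def triage(vulns_by_risk, capacity):
    """Split a scored vulnerability list into this cycle's patch queue
    and an explicit interim-mitigation list.

    vulns_by_risk: list of (cve_id, risk_score) pairs.
    capacity:      number of patches the team can realistically apply.
    """
    ranked = sorted(vulns_by_risk, key=lambda pair: pair[1], reverse=True)
    patch_now = [cve for cve, _ in ranked[:capacity]]
    mitigate_interim = [cve for cve, _ in ranked[capacity:]]
    return patch_now, mitigate_interim
```

The design point is the second list: what cannot be patched this cycle is still a tracked obligation (workarounds, monitoring, log analysis), not an invisible backlog.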

The end result is what I like to term “badly secure”. On paper, everything looks as though it should be secure and all best practices are being followed, but in practice the gates are wide open. The budget has been spent and resources allocated, yet the end result is as though nothing was done at all. People forget that security processes are intended to provide security and manage risk.

Oliver Rochford is the Vice President of Security Evangelism at DFLabs. Oliver is a recognized expert on threat and vulnerability management as well as cyber security monitoring and operations management. He previously worked as research director at Gartner. He has worked as a security practitioner and white hat hacker for Tenable Network Security®, HP Enterprise Security Services, Verizon Business, Secunia® (now Flexera Software), Qualys®, and Integralis (now part of NTT Com Security).