Open source software security

If a Vulnerability Falls in the Forest

If a vulnerability is discovered in your codebase, but it's not exploitable, does it make a sound? I recently ran into this dilemma while investigating an add-on module to a popular open source package. The module appeared to have a vulnerability, but on closer investigation I discovered that the vulnerability actually lived in the package's core codebase, not in the module. The module code could be adjusted to mitigate the issue, but the underlying vulnerability would still remain in the core. The question then became: should this vulnerability be fixed?

The arguments against fixing the vulnerability center on the risk of disturbing the core codebase: doing so could introduce bugs, disrupt existing modules, or have other unintended consequences. Remediating the vulnerability in the core would obviously require more resources and carry potentially wider-ranging consequences. Remediating it in the module would alleviate the apparent problem and spare existing code from refactoring, debugging, or re-architecting.
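To make the tradeoff concrete, here is a minimal hypothetical sketch, not the actual package in question (the function names `core_render` and `module_render` are invented for illustration): a core API that fails to escape caller-supplied input, and a module-level wrapper that mitigates the flaw without touching the core.

```python
import html

# Hypothetical "core" API with the underlying flaw: it interpolates
# caller input into markup without escaping it (an XSS-style bug).
def core_render(fragment: str) -> str:
    return f"<div class='widget'>{fragment}</div>"

# Module-level mitigation: sanitize the input before invoking the
# vulnerable core API. Note the core flaw is untouched -- any other
# caller that skips this step is still exposed.
def module_render(user_input: str) -> str:
    return core_render(html.escape(user_input))
```

The wrapper makes this one call path safe, but the dangerous primitive stays in the core for every other present and future caller, which is exactly the dilemma described above.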

This situation led me to a realization. Fixing only the module addresses the immediate danger, but the core vulnerability remains intact, lying in wait for the next developer who carelessly invokes the code. Because the vast majority of the codebase is solid, a developer would have little reason to suspect that calling this particular API would introduce a vulnerability. And now that I know this API call can introduce the vulnerability, I can apply that knowledge to every future code review. Black hat attackers can adopt exactly the same tactic when hunting for similar vulnerabilities: once vulnerable code has been revealed but left unfixed, it becomes a target for anyone investigating the other modules that use it.
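The review tactic above can be sketched as a trivial sweep. Once an API call is known to be dangerous (here the invented name `core_render` stands in for the real one), anyone, defender or attacker alike, can enumerate every module that invokes it:

```python
import re
from pathlib import Path

# Pattern for the known-dangerous call; "core_render" is a
# hypothetical placeholder, not a real package API.
VULNERABLE_CALL = re.compile(r"\bcore_render\s*\(")

def find_risky_callers(root: str) -> list[str]:
    """Scan a source tree and report every line that invokes the call."""
    hits = []
    for path in Path(root).rglob("*.py"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), start=1):
            if VULNERABLE_CALL.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")
    return hits
```

A real audit would use proper static analysis rather than a regex, but even this crude sweep shows how cheaply an unfixed core flaw converts into a target list.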

But what if the code remains, yet isn't exposed by any other code? Finding the vulnerable code in an audit is nearly impossible, and it only introduces a vulnerability in specific scenarios. Would leaving the code alone be the safer option from a logistical standpoint? In the long run, what are the chances the code becomes exploitable? If the vulnerability was never made public, would it ever see the light of day? These are all questions to consider when weighing remediation strategies for discovered vulnerabilities. Responses will vary with circumstance, but one lesson has held constant in information security: attackers continue to grow in sophistication. Leaving a vulnerability in core code could prove disastrous. Then again, it could just get paved over in the next release...