No one doubts that artificial intelligence (AI) and machine learning (ML) will transform cybersecurity. We just don't know how, or when. While the literature generally focuses on the different uses of AI by attackers and defenders, and the resultant arms race between the two, I want to talk about software vulnerabilities.

All software contains bugs. The reason is basically economic: the market doesn't want to pay for quality software. With a few exceptions, such as the space shuttle, the market prioritizes fast and cheap over good. The result is that any large modern software package contains hundreds or thousands of bugs.

Some percentage of bugs are also vulnerabilities, and a percentage of those are exploitable vulnerabilities, meaning an attacker who knows about them can attack the underlying system in some way. And some percentage of those are discovered and used. This is why your computer and smartphone software is constantly being patched: software vendors are fixing bugs that are also vulnerabilities, that have been discovered, and that are being used.

Everything would be better if software vendors found and fixed all bugs during the design and development process, but, as I said, the market doesn't reward that kind of delay and expense. AI, and machine learning in particular, has the potential to forever change this trade-off.

The problem of finding software vulnerabilities seems well-suited for ML systems. Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic, and research is continuing. There's every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it's not a fair fight. When an attacker's ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender's ML system finds the same vulnerability, he or she can try to patch the system, or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools, and will automatically find and fix vulnerabilities while the code is still in development.

Fast-forward a decade or so into the future. We might say to each other: "Remember those years when software vulnerabilities were a thing, before ML vulnerability finders were built into every compiler and fixed them before the software was ever released? Wow, those were crazy years." Not only is this future possible, but I would bet on it.

Getting from here to there will be a dangerous ride, though. Those vulnerability finders will first be unleashed on existing software, giving attackers hundreds if not thousands of vulnerabilities to exploit in real-world attacks. Sure, defenders can use the same systems, but many of today's Internet of Things systems have no engineering teams to write patches and no ability to download and install patches. The result will be hundreds of vulnerabilities that attackers can find and use.

But if we look far enough into the horizon, we can see a future where software vulnerabilities are a thing of the past. Then we'll just have to worry about whatever new and more advanced attack techniques those AI systems come up with.

This essay previously appeared on SecurityIntelligence.com.