His Milwaukee-area information security firm, Hold Security LLC, has identified some of the largest cyber-security breaches of the past couple years.

In October 2013, Hold Security worked with a journalist to discover that a hacker group had put on an unprotected server the source code for key Adobe Systems software products such as Acrobat Reader, Acrobat Publisher and ColdFusion.

Hold Security’s grand slam home run, though, came in August 2014. That’s when it revealed that a Russian crime ring had obtained 1.2 billion combinations of user names and passwords, along with more than 500 million email addresses, according to The New York Times.

So who better to talk about what’s ahead for cybercrime than Holden, the company’s Chief Information Security Officer?

In the first part of my two-part email interview, Holden discusses where hackers look for vulnerabilities, along with what the testing industry can do better to help prevent bugs and hacks in the first place.

Service Virtualization.com: Several mega-bugs from 2014 came via open-source software, notably the Heartbleed vulnerability in OpenSSL. What, if anything, can be done to better test open-source software for problems such as Heartbleed?

Holden: We are in an age of mass exploitation of vulnerabilities.

Hackers are picking specific targets less and less, focusing instead on industries, major technologies, and other components that can be exploited in bulk. Given this evolution in hacker behavior, “mega-bugs” have a more devastating effect across the board.

It is improbable that all major bugs, like Heartbleed or Shellshock, have already been found. But the open-source community has been working diligently to find and minimize the impact of many vulnerabilities.

Continuous testing based on best practices, along with facilitating responsible vulnerability disclosure, should improve overall vulnerability discovery and help shorten the patch cycle.
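Heartbleed is a concrete example of the kind of bug such continuous testing aims to catch: the TLS heartbeat handler trusted a client-supplied payload length and read past the data it had actually received. The sketch below is illustrative, not OpenSSL's actual code; the simplified record format and the `parse_heartbeat` function are assumptions made for the example, and the loop at the bottom is a toy version of the random-input harnesses that continuous fuzzing runs at scale.

```python
import os
import random


def parse_heartbeat(packet: bytes) -> bytes:
    """Parse a simplified heartbeat-style record: a 2-byte big-endian
    declared payload length, followed by the payload itself.

    Heartbleed-class bugs arise when code trusts the declared length
    and echoes back more bytes than it actually received.
    """
    if len(packet) < 2:
        raise ValueError("truncated record")
    declared_len = int.from_bytes(packet[:2], "big")
    payload = packet[2:]
    # The bounds check the vulnerable handler omitted: never return
    # more bytes than are actually present in the payload.
    if declared_len > len(payload):
        raise ValueError("declared length exceeds actual payload")
    return payload[:declared_len]


# A toy fuzzing harness: hammer the parser with random inputs and
# confirm it either parses cleanly or rejects malformed records,
# rather than over-reading.
for _ in range(10_000):
    blob = os.urandom(random.randint(0, 64))
    try:
        result = parse_heartbeat(blob)
        assert len(result) <= max(len(blob) - 2, 0)
    except ValueError:
        pass  # rejecting malformed input is the correct behavior
```

Projects such as OpenSSL now run under exactly this style of automated, continuous fuzzing, which is one practical answer to the question of how open-source code gets tested at scale.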

What types of devices or software do you see the mega-bugs of 2015 emanating from, and why? Do you think they will come, say, via open-source software, embedded software, accessories or mobile devices? Or someplace else entirely?

In general, hackers tend to go for the “lowest-hanging fruit.” They are more likely to abuse a researcher-discovered vulnerability than to identify their own. However, we are seeing a rapid rise in mobile malware. As we use our phones more and more in place of our computers, even as our digital “wallets,” mobile devices become increasingly likely vectors of attack.

If you could change anything in how software is currently tested for bugs and vulnerabilities, what would that be, and why would you choose it?

In some cases where vulnerabilities arise, we see a lack of realistic testing of the applications.

Sometimes developers just test the framework without all the modules, components, or data being populated. In other cases, the testing happens around the perimeter of the application, but not for authenticated users, administrators, or auxiliary users.

Just as in quality-assurance testing, where each component needs to be tested from the user’s perspective, security testing needs to be done the same way.
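The gap Holden describes, testing the perimeter but not authenticated users or administrators, can be sketched as a role-aware access audit. Everything below is a hypothetical stand-in: the endpoints, the role names, and the `check_access` function stand in for HTTP requests a real test suite would make with each role's credentials.

```python
# Roles and endpoints are illustrative; a real audit would be driven
# by the application's actual routing table and access-control policy.
ROLES = ["anonymous", "user", "admin"]

# Expected access policy: which roles may reach each endpoint.
ENDPOINTS = {
    "/login":       {"anonymous", "user", "admin"},
    "/account":     {"user", "admin"},
    "/admin/users": {"admin"},
}


def check_access(role: str, endpoint: str) -> bool:
    """Stand-in for an authenticated request made as the given role.

    In a real suite this would issue an HTTP request with that role's
    session and report whether access was granted.
    """
    return role in ENDPOINTS[endpoint]


def audit() -> list:
    """Exercise every endpoint under every role -- not just the
    unauthenticated perimeter -- and collect policy mismatches."""
    failures = []
    for endpoint, allowed in ENDPOINTS.items():
        for role in ROLES:
            granted = check_access(role, endpoint)
            if granted != (role in allowed):
                failures.append((role, endpoint))
    return failures
```

The point of the structure is the double loop: coverage is roles times endpoints, so a privilege-escalation bug in an admin-only page is caught even though the perimeter looks clean.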

It is a foregone conclusion that the hackers will have access to advanced functions of an application with elevated privileges — if not within your infrastructure, then within somebody else’s.

Given that no software is perfect, and any software can potentially be exploited, we need to recognize the importance of building the right tools: tools that detect exploitation based on quantitative analysis.
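One simple form the quantitative analysis Holden calls for can take is statistical anomaly detection on security telemetry. The sketch below is an assumption-laden illustration, not a description of Hold Security's tooling: the baseline metric (failed logins per minute) and the three-standard-deviation threshold are choices made for the example.

```python
import statistics


def is_anomalous(baseline: list, observed: float, k: float = 3.0) -> bool:
    """Flag an observation more than k standard deviations above the
    baseline mean -- a simple quantitative signal that something
    (e.g., credential stuffing) may be underway."""
    mean = statistics.mean(baseline)
    stdev = statistics.pstdev(baseline)
    return observed > mean + k * stdev


# Illustrative baseline: typical failed-login counts per minute.
baseline = [2, 3, 1, 4, 2, 3, 2, 1, 3, 2]

# A burst of 50 failed logins in a minute stands far outside the
# baseline and would be flagged; 4 in a minute would not.
print(is_anomalous(baseline, 50))  # True
print(is_anomalous(baseline, 4))   # False
```

Production systems use far richer models, but the principle is the same: measure normal behavior numerically, then alert on deviations rather than waiting for a known signature.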