A Q&A with the authors of Data-Driven Security: Analysis, Visualization and Dashboards

George V. Hulme | April 1, 2014

If you want to know what the state of the art is when it comes to using data to help secure systems, no analysis would be complete without speaking with both Bob Rudis and Jay Jacobs, co-authors of Data-Driven Security: Analysis, Visualization and Dashboards. The book is about improving security visibility into the enterprise.

We were interested in hearing their thoughts on how enterprises gain and improve visibility into the risks associated with third parties. So we recently caught up with the duo for a talk on the subject.

George: Bob, can you tell us how you’ve seen large organizations manage third-party risk?

Bob: Many organizations have formalized a third-party risk management program under a central risk management function and developed a strategy for coordinating third-party risk management across the enterprise rather than it being managed individually by specific strategic business units.

There are many different aspects to look at when assessing third-party risk. For example, one of the third-party risks organizations face is associated with providers they outsource business processes to or use in a software-as-a-service capacity. Whenever an organization entrusts others with its data, it needs to look at that arrangement from both an application security and a "traditional" data security perspective. Resultant risks could range anywhere from large infrastructure interconnectivity issues to "just" the PII a provider may touch.

A good third-party risk management program tries to get a handle on what services are provided to an organization and then works to build and maintain a catalog of those services. All of the "ins and outs" associated with those services must be enumerated; that, in turn, helps prioritize how to manage each risk based on business tier (e.g., dollar amount of business transacted) or the level of perceived threat or vulnerability.

George: For the high priority third-party providers, do you encourage organizations to perform annual or periodic reassessments?

Bob: In a large, distributed organization, different strategic business units could have their own set of priorities as it pertains to third-party risk, and that might cause each business unit to overlook the need for regular assessment reviews in certain circumstances. By centralizing and standardizing on an approach (as I indicated earlier) it's possible to set criteria that define and prioritize third parties into tiers, considering all factors of risk that are important across the enterprise. For the most important (i.e., "tier one") third parties, an annual review should be a non-negotiable requirement.
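To make the tiering idea concrete, here is a minimal sketch of how such criteria might be expressed in code. This is our illustration, not a method from the book: the thresholds, field names, and sensitivity labels are all hypothetical assumptions.

```python
# Illustrative sketch only: classify third parties into review tiers
# using two assumed factors -- annual spend and data sensitivity.
# All thresholds and labels here are hypothetical examples.

def assign_tier(annual_spend, data_sensitivity):
    """Return a review tier (1 = highest priority).

    data_sensitivity: "high" (e.g., PII or system access),
                      "medium", or "low".
    """
    if data_sensitivity == "high" or annual_spend >= 1_000_000:
        return 1  # annual on-site review, non-negotiable
    if data_sensitivity == "medium" or annual_spend >= 100_000:
        return 2  # annual questionnaire plus periodic testing
    return 3      # questionnaire on contract change

# Hypothetical vendor catalog entries
vendors = [
    {"name": "PayrollCo", "spend": 250_000, "sensitivity": "high"},
    {"name": "CateringCo", "spend": 40_000, "sensitivity": "low"},
]
for v in vendors:
    v["tier"] = assign_tier(v["spend"], v["sensitivity"])
```

In practice the criteria would come from the enterprise's own risk factors, but the point is that once the catalog of services exists, tier assignment can be a small, repeatable rule rather than an ad hoc judgment per business unit.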

Reviews should also not be limited to questionnaires. To be truly effective at managing risk, organizations should conduct application-level testing and go onsite at least annually (though, application testing is a bit of a contentious industry issue at the moment). It’s also important to perform reviews whenever there are large changes to master service agreement contracts that might cause either a lower-tier organization to move up or introduce new risks that were not part of previous assessments.

George: Jay, what was your experience with third-party risks, and do you see trends along these lines in the Verizon DBIR report?

Jay: For third-party risks, five years ago, we didn't even really try to capture attackers going through a third party. What we focused on initially were third parties as the attacker. Someone at that company coming into our network and doing bad things. I think that was the mentality back then.

Now what we're seeing is the attackers are looking for the weakest link in the chain and that could be a third-party. And if you're talking about a financially motivated attacker, they really aren't even targeting any specific victim; they're targeting anybody with money. What we're seeing there is those attackers are conducting opportunistic attacks across the Internet trying to infect systems, trying to get information, and once they get some information, they're actually starting to use it. They’ll get into an organization and say, "What can we get from this organization? Do they have any money? No. Well, whom do they talk to that has money?"

Then if we talk about espionage, I think as far back as we've been looking at attackers motivated by espionage they've been using the third-party trust for phishing and similar attacks. They try to establish where the trust is and they’re going directly after third-party websites.

One of the big takeaways from the DBIR last year was the focus on small organizations falling victim to espionage. The state-affiliated espionage. The best we could guess is that those are third parties that provide access to the big fish. The small engineering companies, even less than ten people, some of the manufacturing companies, they have the plans for some of this equipment and they're implementing it on these machines, and yet they are a much softer target. They're not 2,000 employees with a dedicated security staff, they're ten people in a garage.

George: That's very interesting. Bob, how would one account for the security of smaller organizations that may not be staffed as thoroughly as a large enterprise?

Bob: In this “as a service” world, it’s very likely organizations will be doing business with many smaller players. If the level of access to systems or data is sensitive enough, those smaller orgs should be made into “tier-one” providers in terms of security classification and that means treating them no differently than a much larger organization since they are working with your data or your systems.

George: That's the danger today. With that kind of interconnection, how about response and coordination with third parties? How does that come into play for a mutual-party response, or is it mostly a notification when you're talking about response with third parties?

Bob: It’s vital to capture contact information and define breach response protocols for both sides in contract terms and conditions (i.e. when establishing a master services agreement) when procurement engages in a new third party relationship. Organizations should work to ensure there are well-defined requirements for third parties to notify them within certain periods of time (and vice-versa). Obviously, there will be unique/customized language and conditions to each contract.

Whether third-party providers live up to those notification requirements is difficult for an organization to actually ensure, but if one looks at state, regional, or other public disclosure notices, sees a breach happen at company X, and knows that X is on their third-party roster but hasn't informed them, they can then work to validate that their data was not involved. In a perfect world, a delay in notification could simply mean that a third party hasn't gotten down to your organization's letter in the alphabet yet on the "to notify" list.

George: Speaking of which, what are the indicators an organization might look for to vet the security of their partners?

Bob: That can be highly dependent on the organization. For “tier one” third-parties, those are the ones who really should be assessed on site to see what their security program and security posture looks like.

If readers have ever been on the receiving end of one of those audits, they know the drill. There's a checklist of things that one looks for and questions that will need to be answered in person. Since no organization is perfect, it's also important to look for increasing capabilities in different areas over time as deficiencies are identified.

For both virtual and on-site inquiries, the Shared Assessment Group has a great framework that can be used, and there are many, many security consulting organizations that can perform these assessments if organizations do not have the time or internal talent to perform them on their own.

However, the reality is this whole “point-in-time” or “once-a-year” review is really not going to cut it. It's obvious it doesn't work for the payments industry. And, frankly, there are a whole bunch of other compliance initiatives it doesn't work for as well.

We think that there will ultimately have to be some type of continuous monitoring framework developed that each organization can use to continually assess risk. It could be something as high-level as seeing outbound traffic to known malicious command and control servers, how external assets are configured or other known risky behaviors that would indicate something problematic is going on (i.e. something that a company like BitSight provides). It could also be something as involved as regular infrastructure and application inspection via hardware/software on premises. We’re not suggesting that every enterprise needs to be perfect to actually do business with, but one should be working towards a consistent and continuous data-driven approach to gauge the risks associated with third party relationships.
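One of the simplest continuous indicators mentioned above is outbound traffic to known malicious command-and-control servers. As a rough illustration of the idea (our addition, not a description of any vendor's product), a monitoring job might check observed flow records against a threat-intelligence blocklist; the flow records and blocklist entries below are hypothetical, using reserved documentation IP ranges.

```python
# Illustrative sketch only: flag outbound flows whose destination
# appears on a known command-and-control (C2) blocklist.
# The blocklist and flow records are hypothetical examples.

KNOWN_C2 = {"203.0.113.7", "198.51.100.23"}  # documentation-range IPs

def flag_suspect_flows(flows, blocklist=KNOWN_C2):
    """Return the subset of flow records destined for a blocklisted IP."""
    return [f for f in flows if f["dst_ip"] in blocklist]

# Hypothetical observed outbound flows
flows = [
    {"src_ip": "10.0.0.5", "dst_ip": "203.0.113.7", "bytes": 9120},
    {"src_ip": "10.0.0.9", "dst_ip": "93.184.216.34", "bytes": 512},
]
hits = flag_suspect_flows(flows)  # the 203.0.113.7 flow is flagged
```

A real deployment would feed in continuously updated threat intelligence and far richer telemetry, but even this toy check shows how "point-in-time" questionnaires can be supplemented with ongoing, data-driven signals.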

# # #

Biographies

Jay Jacobs

Jay Jacobs has over 15 years of experience within IT and information security with a focus on cryptography, risk, and data analysis. As a Senior Data Analyst on the Verizon RISK team, he is a co-author of their annual Data Breach Investigations Report (DBIR) and spends much of his time analyzing and visualizing security-related data. Jay is a co-founder of the Society of Information Risk Analysts and currently serves on the organization's board of directors. He is an active blogger, a frequent speaker, a co-host of the Risk Science podcast, and was co-chair of the 2014 Metricon security metrics/analytics conference. Jay can be found on Twitter as @jayjacobs. He holds a bachelor's degree in technology and management from Concordia University in Saint Paul, Minnesota, and a graduate certificate in Applied Statistics from Penn State.

Bob Rudis

Bob Rudis has over 20 years of experience using data to help defend global Fortune 100 companies. Bob is a serial tweeter (@hrbrmstr), avid blogger (rud.is), author, speaker, and regular contributor to the open source community (github.com/hrbrmstr). He currently serves on the board of directors for the Society of Information Risk Analysts, is on the editorial advisory board of SANS Securing The Human program and was co-chair (with Jay) of the 2014 Metricon security metrics/analytics conference co-located with the RSA Conference. He holds a bachelor's degree in computer science from the University of Scranton.
