Introducing Threat Operations: Accelerating the Human

In the first post of our Introducing Threat Operations series, we explored the need for much stronger operational discipline around handling threats. With all the internal and external security data available, and the increasing sophistication of analytics, organizations should be doing a better job of handling threats. If what you are doing isn’t working, it’s time to start thinking differently about the problem and addressing the root causes underlying the inability to handle threats. It comes down to _accelerating the human_: making your practitioners better through training, process, and technology.

With all the focus on orchestration and automation in security circles, it’s easy to conclude that carbon-based entities (yes, people!) are on the way out for executing security programs. That couldn’t be further from reality. If anything, as the technology infrastructure continues to get more complicated and adversaries continue to improve, humans are increasing in importance. Your best investments are going to be in making your security team more effective and efficient at handling ever-increasing tasks and complexity.

One of the keys we discussed in our Security Analytics Team of Rivals series is the need to use the right tool for the job. That goes for humans too. Our security functions need to be delivered via both technology and personnel, letting each do what it does best. The focus of our operational discipline is finding the proper mix to address threats.

Let’s flesh out Threat Operations with more detail.

Harnessing Threat Intelligence: Enterprises no longer have the luxury of time to learn from attacks they’ve seen and adapt defenses accordingly. You need to learn from attacks on others by using external threat intelligence to make sure you can detect those attacks, regardless of whether you’ve seen them previously. Of course you can easily be overwhelmed with external threat data, so the key to harnessing threat intel is to focus only on relevant attacks.

Enriching Alerts: Once you have an alert, you need to add more information to eliminate the busy work many analysts must perform just to figure out whether it is legitimate and critical. The data to enrich alerts exists within your systems – it’s just a matter of centralizing it in a place analysts can use it.

Building Trustable Automation: A set of attacks can be handled without human intervention. Admittedly that set of attacks is pretty limited right now, but opportunities for automation will increase dramatically in the near term. As we have stated for quite a while, the key to automation is trust – making sure operations people have confidence that any changes you make won’t crater the environment.

Workflow/Process Acceleration: Finally, moving from threat management to threat operations requires you to streamline the process and apply structure where sensible to provide leverage and consistency for staff members. It’s about finding a balance between letting skilled practitioners do their thing and providing the structure necessary to lead a less sophisticated practitioner through a security process.

All these functions focus on one result: providing more context to each analyst to accelerate their efforts to detect and address threats in the organization – Accelerating the Human.

Harnessing Threat Intelligence

We have long believed threat intel can be a great equalizer in restoring some balance to the struggle between defender and attacker. For years the table has been slanted toward attackers, who target a largely unbounded attack surface with increasingly sophisticated tools. But sharing data about these attacks and allowing organizations to preemptively look for new attacks before they have been seen by an individual organization can alleviate this asymmetry.

But threat intelligence is an unwieldy beast, involving hundreds of potential data sources (some free and others paid) in a variety of data formats, which need to be aggregated and processed to be useful. Leveraging this data requires several steps:

Integrate: First you need to centralize all your data, starting with external feeds. If you don’t eliminate duplicates and verify accuracy and relevance, your analysts will waste even more time spinning their wheels on false positives and useless alerts.

Reduce Overlap and Normalize: With all this data there is bound to be overlap in the attacks and adversaries tracked by different providers. Efficiency demands that you address this duplication before putting your analysts to work. You need to clean up the threat base by finding indicator commonalities and normalizing differences in data provided by various threat feeds.

Prioritize: Once you have all your threat intel in a central place you’ll see you have way too much data to address it all in any reasonable timeframe. This is where prioritization comes in – you need to address the most likely threats, which you can filter based on your industry and the types of data you are protecting. You need to make some assumptions, which are likely to be wrong, so a functional tuning and feedback loop is essential.

Drill Down: Sometimes your analysts need to pull on threads within an attack report to find something useful for your environment. This is where human skills come into play. An analyst should be able to drill into intelligence about a specific adversary and threat, to have the best opportunity to spot connections.
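To make the aggregation steps above concrete, here is a minimal sketch of merging indicators from multiple feeds, normalizing their field names, and collapsing duplicates. The feed formats and field names are illustrative assumptions, not any particular vendor’s schema.

```python
# Hypothetical sketch: aggregate indicators from multiple threat feeds,
# normalize field names onto a common schema, and collapse duplicates.

def normalize(record, field_map):
    """Map a feed-specific record onto common field names."""
    return {common: record.get(feed_key)
            for feed_key, common in field_map.items()
            if record.get(feed_key) is not None}

def aggregate(feeds):
    """Merge feeds, keyed on the indicator value to remove overlap."""
    merged = {}
    for feed_name, records, field_map in feeds:
        for rec in records:
            norm = normalize(rec, field_map)
            key = norm.get("indicator")
            if key is None:
                continue
            entry = merged.setdefault(key, {"sources": set(), **norm})
            entry["sources"].add(feed_name)  # track which feeds agree
    return merged

# Two feeds reporting overlapping indicators under different field names
feed_a = [{"ioc": "198.51.100.7", "kind": "ip"}]
feed_b = [{"indicator_value": "198.51.100.7", "type": "ip"},
          {"indicator_value": "203.0.113.9", "type": "ip"}]

merged = aggregate([
    ("feed_a", feed_a, {"ioc": "indicator", "kind": "type"}),
    ("feed_b", feed_b, {"indicator_value": "indicator", "type": "type"}),
])
print(len(merged))                                # 2 unique indicators
print(sorted(merged["198.51.100.7"]["sources"]))  # both feeds tracked it
```

Tracking which feeds reported each indicator also feeds prioritization: an indicator confirmed by multiple sources is a stronger signal than one seen once.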

Threat intel should ultimately, when fed into your security monitors and controls, provide an increasing number of the alerts your team handles. But an alert is only the beginning of the response process, and making each alert as detailed as possible saves analyst time. This is where enrichment enters the discussion.

Enriching Alerts

So you have an alert, generated either by seeing an attack you haven’t personally experienced yet but were watching for thanks to threat intel, or something you were specifically looking for via traditional security controls. Either way, an analyst now needs to take the alert, validate its legitimacy, and assess its criticality in your environment. They need more context for these tasks.

So what would streamline the analyst’s process of validating and assessing the threat? As they drill into the alert, the most useful tool is more information. It’s useful for prioritization to have a list of attacks associated with unsophisticated attackers, for instance. It’s also convenient to recognize a set of attacks which target an application stack you don’t use – you can point all those indicators at the circular bin.

On the other hand, it matters a lot to an analyst if while investigating an alert they recognize indicators linked to financial fraud, and connect active command and control traffic patterns to a financial fraud botnet. If your organization is concerned about financial fraud (who isn’t?), this enrichment should bubble relevant alerts to the top for further investigation.

But that’s just the automated stuff. If the alert is enriched with, say, a list of devices on your network which connected to botnet-associated IP addresses, you have a handy list of devices which might have been impacted by that attack. Or if you can extract from your change management system a list of devices which have added a specific executable, and you learn later that change was part of an attack, you are well ahead of the game in terms of figuring out how an attack proliferated within your environment.

Alert enrichment requires foresight. You need to anticipate the kinds of questions an analyst will ask based on what they find in an alert. Obviously you run the risk of adding too much data, so you’ll want instrumentation to determine which data is useful and which isn’t. It helps to have a feedback loop with analysts as well, to continuously track which data is helpful.
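A small sketch of what enrichment can look like in practice, assuming in-memory dicts standing in for a threat intel library and an asset inventory (the lookups and field names are hypothetical):

```python
# Hypothetical sketch: enrich a raw alert with context an analyst would
# otherwise look up by hand. INTEL and ASSET_INVENTORY are stubs standing
# in for a threat intel library and a CMDB/asset inventory.

INTEL = {  # indicator -> what we know about it
    "203.0.113.9": {"campaign": "fraud-botnet", "confidence": "high"},
}
ASSET_INVENTORY = {  # device -> business context
    "fin-ws-042": {"owner": "finance", "criticality": "high"},
}

def enrich(alert):
    enriched = dict(alert)
    enriched["intel"] = INTEL.get(alert["indicator"], {})
    enriched["asset"] = ASSET_INVENTORY.get(alert["device"], {})
    # Simple prioritization: a known campaign hitting a critical asset
    # bubbles to the top of the analyst's queue
    enriched["priority"] = (
        "high"
        if enriched["intel"] and enriched["asset"].get("criticality") == "high"
        else "normal")
    return enriched

alert = {"device": "fin-ws-042", "indicator": "203.0.113.9"}
print(enrich(alert)["priority"])  # high
```

The feedback loop mentioned above would adjust which lookups run and how priority is computed, based on which context analysts actually use.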

With additional context about the severity of each threat in your environment, and types of attack, you can figure out which can be remediated easily and which can’t. The easy stuff is ripe for automation.

Building Trustable Automation

Automation remains a fairly divisive topic in security. Many operational folks are justifiably concerned about machines making changes which could result in downtime. But what choice do you have? You can’t get all the work done with the resources you have, and bringing very expensive resources to bear on activities which don’t add value is silly. To bridge the gap we focus on building trust in the automation process.

How can you establish trust? Start small. Select a set of very manageable use cases which make an operational difference but have limited downside. A good place to look is outbound network connections. If you learn of a new phishing site, typically via threat intel, you can have egress filters block connections. Simple, right? But very effective because many phishing sites are only up for a few hours, so having a staffer update the egress filter doesn’t work – things move too fast for humans. But it’s a perfect task for a machine, with limited downside because you’ll hear about any false positives quickly from impacted employees. And it would likely be a handful of folks – not the entire Finance department.

Implementing this kind of automation requires determining the trigger, designing the action, and deploying the controls. First you figure out the use case to look for. Maybe the trigger comes from your IDS or SIEM. Or perhaps it comes from a threat operations platform aggregating and analyzing external data. Once you know the data to look for, you set an alert. Once it fires, what happens? You have to design a set of actions, which usually involves changing an active control. Once that is designed you deploy the change to a security control.

Returning to the importance of trust to any kind of automation, you should be able to quickly walk back any change you make in case of unintended consequences. Operations will push back against things which cannot be undone, so make sure your automation is designed with resilience in mind. Don’t make a series of changes if they cannot all be rolled back gracefully and quickly.
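The trigger/action/rollback pattern can be sketched simply. This is a toy model, not a real firewall API: `EgressFilter` is a stand-in for whatever control you actually manage, and the point is that every automated change is recorded alongside its undo.

```python
# Hypothetical sketch of trustable automation: block newly reported
# phishing destinations on an egress filter, pairing each change with a
# rollback action so the whole batch can be walked back quickly.

class EgressFilter:
    """Stand-in for a real firewall/egress control API."""
    def __init__(self):
        self.blocked = set()
    def block(self, dest):
        self.blocked.add(dest)
    def unblock(self, dest):
        self.blocked.discard(dest)

class AutomationLog:
    """Every automated change is recorded with its rollback."""
    def __init__(self):
        self.undo_stack = []
    def apply(self, action, rollback):
        action()
        self.undo_stack.append(rollback)
    def roll_back_all(self):
        while self.undo_stack:
            self.undo_stack.pop()()  # undo in reverse order

fw = EgressFilter()
log = AutomationLog()

# Trigger: threat intel reports short-lived phishing destinations
for dest in ["phish.example.net", "198.51.100.77"]:
    log.apply(lambda d=dest: fw.block(d),
              lambda d=dest: fw.unblock(d))

print(fw.blocked)     # both destinations blocked
log.roll_back_all()   # false positive? walk it back gracefully
print(fw.blocked)     # empty again
```

Keeping the rollback adjacent to the action is what earns operational trust: nothing the automation does is a one-way door.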

Easy, right? The concept is simple enough, especially for no-brainer use cases. A new set of orchestration and automation tools is emerging to provide a platform for this critical step. But as with most innovative security technology, this function is destined for eventual integration into broader security platforms.

With contextual threat intelligence making alerts more valuable, and then making changes that make sense, where do humans come into play? Not everything fits into a nice clean set of circumstances you can model to build tidy automation, so you need to structure operational process to be both flexible and consistent.

Structuring Activity

We understand process management is not nearly as sexy as hunting adversaries. But it’s more important. Security teams face an unprecedented lack of talent to implement and run security programs. So the secret to success is to rapidly improve the effectiveness of less sophisticated practitioners. That’s a politically correct way of saying you need to make n00bs a bit less n00by.

This entails looking at how a responder does things and implementing the process in a tool. Whether it’s an assembly line or a bank branch processing a deposit, mature businesses have documented processes which represent best practices. Start by learning how your best folks do their jobs. They have likely learned in the school of hard knocks, but they know what they are doing. Document how they work and use that as the basis for your process.

But you might not have rock stars on your team, or not know where to start. The good news is that many service organizations, as well as some product vendors, already have documented playbooks for many security processes and functions. As with automation, you start small with a fixed set of playbooks. The objective is to develop more, and make them more sophisticated, over time.

Look for opportunities to leverage other aspects of threat operations while implementing playbooks. For example you could develop a quick response process which starts with an alert from your endpoint security suite, enriched with information from your threat intel library, including a set of known malicious networks which can be automatically blocked by your outbound firewall. Then an analyst can dig into the compromised device to figure out exactly what happened, and if any other devices have been impacted.
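One way to think about a playbook like the one above is as an ordered list of steps, each marked automated or analyst-driven, so a junior responder is walked through the same sequence a senior analyst would follow. The step names and handlers below are illustrative assumptions, not a standard playbook format.

```python
# Hypothetical sketch: a playbook as ordered steps, mixing automated
# actions with analyst tasks that carry guidance for less experienced
# responders.

ENDPOINT_PLAYBOOK = [
    ("enrich_alert",   "auto",    "attach intel and asset context"),
    ("block_known_c2", "auto",    "push egress blocks for known bad networks"),
    ("triage_device",  "analyst", "examine the compromised endpoint"),
    ("scope_spread",   "analyst", "check other devices for the same indicators"),
]

def run_playbook(playbook, automations):
    """Execute automated steps in order; queue analyst steps as tasks."""
    analyst_tasks = []
    for name, mode, description in playbook:
        if mode == "auto":
            automations[name]()  # fire the automated handler
        else:
            analyst_tasks.append((name, description))
    return analyst_tasks

done = []
tasks = run_playbook(ENDPOINT_PLAYBOOK, {
    "enrich_alert":   lambda: done.append("enrich_alert"),
    "block_known_c2": lambda: done.append("block_known_c2"),
})
print(done)   # automated steps ran in order
print(tasks)  # analyst steps queued with guidance attached
```

Starting with a small fixed set of playbooks like this, then adding steps and sophistication over time, mirrors the incremental trust-building approach described for automation.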

On the bright side, a very specific set of activities structured this way can be very helpful to practitioners who aren’t quite sure what to do next. You already instrumented the platform with the ability to perform these functions, so they can follow along to benefit from the best practices research you have performed.

On the downside, you need to define which playbooks to implement, and they don’t help when you encounter something which they don’t cover. Fortunately innovation marches on, so soon you will be hearing about “cognitive security analytics”, which can make some connections automatically using very advanced analytics. It’s still quite early for cognitive technology, but it’s very promising, so that’s something to keep an eye on.

The first goal for deep thinking about threat operations is to stop reacting so tactically to just one attack at a time – instead focus on threats to your organization and see the bigger picture. Second, structure consistent operational processes to make less sophisticated practitioners more effective. We will bring these concepts together in our next post, with a scenario to put them into practice.

About

Securosis is an information security research and advisory firm dedicated to transparency, objectivity, and quality. We are totally obsessed with improving the practice of information security. Our job is to save you money and help you do your job better and faster by helping you cut through the noise and providing clear, actionable, pragmatic advice on securing your organization.