Cyber Triage – Practical Endpoint Response
https://www.cybertriage.com

Intro to IR Triage (Part 2): Analysis Categories
Mon, 05 Jun 2017

In the second post in our Intro to IR Triage series, we're going to take a big-picture view. I want to give you the roadmap of how we will approach this series before diving into the technical details.

In the last post, we talked about the goal of triage and what to consider when looking at tools. Now we are going to look at how to answer the triage questions: was a computer compromised, and if so, do you care? Future posts will dive into the specific areas mentioned in this post.

What Are We Looking For?

Host triage wants to answer the investigative questions of “Is this computer compromised?” and “If so, how badly and have I seen it before?”. It’s not trying to get to root cause analysis.

So, how do you answer those questions? Well, it depends.

Evidence could be anywhere on the system. The bad guy is trying to blend in so that he isn't detected. To do this, he'll use different and evolving techniques to gain and maintain access. He will also probably try to clean up his tracks and leave little evidence behind.

Example evidence may include malware starting from one of dozens of registry keys, malware that took over a normal process to make it do bad things, or no malware at all because a stolen password was used instead. The exact evidence will depend on who attacked you, what he was looking for, and your environment.

Types of Indicators

We all learn that to solve a big problem, you should break it into smaller problems. So, I break the problem of answering "Is this computer compromised?" into three smaller questions (that we'll break down more later):

Are there malicious system changes (such as relaxed security settings)?

Are there malicious programs (such as malware)?

Is there suspicious user activity (such as sensitive files being accessed)?

Now, these categories have a bit of overlap because there could be remote user activity from malware or malware can make system changes, but I still think it is a useful breakdown to make it more manageable. If you have an alternative breakdown, let me know.

I’ll also note that these three categories will rely on the same data sources. For example, the Windows registry is going to be used to look for system configuration, malware persistence, and what programs a user ran. So, with these categories we aren’t going to think about registry (or file system) forensics. Instead, we are going to think about how we answer specific questions, such as “were malicious programs executed”.

As we’ll see as this series progresses, one of the challenges to answering these questions is knowing what is normal. We’re looking for suspicious and malicious things. To know that, you need to know what is good (or normal) on those systems. This is often a challenge for incident responders.

Let’s look at each of these medium-sized questions in a bit more detail so that we can talk about smaller-sized questions.

System Configuration Changes

Your default build may not be what the bad guy needs to get his work done. He may not like the fact that you have security software on the system and may not like that he can’t remotely log into the host.

Instead of installing malicious programs on the host, it is often better for the bad guy to use what is there (which may mean reconfiguring what is there). Some refer to this as “living off of the land”.

So, as part of the host triage, you should look for changes. We can break the question of “are there malicious system changes” into smaller questions of:

Were settings changed to reduce detection or attribution? Such as:

Disabled firewalls or antivirus programs

Decreased audit levels

Were settings changed to preserve their foothold? Such as:

Creation of local accounts with administrator privileges

Enabling of remote access

Enabling of file sharing to copy data to or from this system

Patching of vulnerable software so that someone else can’t get in

Disabling OS updates so that they can continue to exploit it

Were settings changed to prevent recovery? Such as:

Disabling and deleting of Volume Shadow Copies so that you can't restore files after a ransomware attack.

To investigate these changes, we’re going to need the registry and other configuration files. You can either manually poke around the control panel on the system or run some tools that collect the data. We’ll discuss that later in the series.
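Once the configuration data is collected, these checks can be automated by diffing it against a known-good baseline. Here is a minimal Python sketch of that idea; the setting names and baseline values are illustrative assumptions, not a real collection format:

```python
# Compare collected endpoint settings against a known-good baseline.
# The setting names and values here are illustrative, not a real schema.

BASELINE = {
    "firewall_enabled": True,
    "antivirus_enabled": True,
    "audit_level": "full",
    "remote_desktop_enabled": False,
    "os_updates_enabled": True,
}

def find_deviations(collected: dict) -> list:
    """Return (setting, expected, actual) for every setting that drifted."""
    deviations = []
    for setting, expected in BASELINE.items():
        actual = collected.get(setting)
        if actual != expected:
            deviations.append((setting, expected, actual))
    return deviations

# Example: a host where the attacker disabled the firewall and enabled RDP.
host = {
    "firewall_enabled": False,
    "antivirus_enabled": True,
    "audit_level": "full",
    "remote_desktop_enabled": True,
    "os_updates_enabled": True,
}
for setting, expected, actual in find_deviations(host):
    print(f"{setting}: expected {expected}, found {actual}")
```

The hard part in practice is not the diff but maintaining an accurate baseline, which is the "knowing what is normal" challenge mentioned above.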

Malicious Programs

Malicious programs (or malware, viruses, APT, etc.) play a role in a lot of incidents because they allow the attacker to exploit the system, maintain access, and extract data from it. So, you should always look for it as part of host triage.

In this category, we are going to be looking for evidence of their current, future, or past execution. We're going to break the question of "Are there malicious programs on the system?" into smaller questions of:

Are there malicious startup / autoruns programs?

Unless it is a one-time shot, malware is going to want to run on a periodic basis and therefore will have some trigger. We’ll need to look at the places that can start malware.

Are there malicious running processes?

Because so many programs look at startup locations, malware can be fairly tricky and it is sometimes easiest to see it when it is running. So, we’ll want to look at running processes to see if a legit process has been taken over by a malicious one. We’ll look at memory, files, network connections, etc.

Are there remnants of malicious processes?

It could be that the malware isn’t running when you are there and you don’t see how it could have started. But, you may be able to find evidence that it ran in the past. So, we’ll look for files, DNS cache, and execution history.

Looking for malicious program indicators means you need to look in a lot of places and includes a lot of volatile data. We’ll cover what we are looking for and how to find it later in the series.
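One simple heuristic for the startup-item question is to flag autostart entries whose executable lives in a user-writable location such as a Temp or Downloads folder, where legitimate startup items rarely sit. A Python sketch (the entry names and paths are made-up examples):

```python
# Flag autostart entries whose executable lives in a user-writable
# location such as Temp or Downloads -- legitimate startup items rarely do.
# The directory list and sample entries are illustrative.

SUSPICIOUS_DIRS = ("\\appdata\\local\\temp\\", "\\temp\\", "\\downloads\\")

def flag_startup_items(entries):
    """entries: list of (name, image_path); returns the suspicious ones."""
    flagged = []
    for name, path in entries:
        lowered = path.lower()
        if any(d in lowered for d in SUSPICIOUS_DIRS):
            flagged.append((name, path))
    return flagged

entries = [
    ("OneDrive", r"C:\Program Files\Microsoft OneDrive\OneDrive.exe"),
    ("MSUpdater", r"C:\Users\bob\AppData\Local\Temp\MSUpdater.exe"),
]
print(flag_startup_items(entries))
```

Path-based heuristics like this produce false positives (some installers do run from Temp), so they point at items to review rather than prove maliciousness.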

User Activity

Lastly, we need to look at user activity. An employee or an attacker with compromised credentials can do a lot of damage without malware or changing the system configuration. They may already have all of the access that they need.

We are going to break the question of “Is there suspicious user activity?” into smaller questions of:

Is there suspicious login activity? Such as:

Users logging in at abnormal times

Users logging in from abnormal locations

Is the user doing suspicious things? Such as:

Users running abnormal programs

Users accessing folders they do not need to

Users creating encrypted ZIP files

One of the major challenges with looking at user activity is that a full record of user actions is unlikely to exist. You may find registry keys that refer to programs that were run, but without time stamps. Windows can log when programs are launched, but that auditing is not turned on by default.
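The "abnormal times" check on login activity can be sketched in a few lines of Python. The business-hours window and the logon records are assumptions for illustration; real logon data would come from the Windows event logs:

```python
# Flag logons that fall outside an expected window.
# The business-hours window and logon records are illustrative assumptions.
from datetime import datetime

BUSINESS_HOURS = range(8, 19)  # 08:00-18:59, an assumed norm for this org

def abnormal_logons(logons):
    """logons: list of (user, datetime); return logons outside business hours."""
    return [(u, t) for u, t in logons if t.hour not in BUSINESS_HOURS]

logons = [
    ("alice", datetime(2017, 6, 5, 9, 30)),
    ("bob",   datetime(2017, 6, 5, 3, 12)),   # a 3 AM logon stands out
]
print(abnormal_logons(logons))
```

A real implementation would also need per-user baselines, since "abnormal" differs between a night-shift admin and a 9-to-5 accountant.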

Workflow

We’ve broken the bigger problem into three medium-sized problems and further broken them into smaller problems. But, then the question is where to start.

I’d recommend that you start with the medium-sized problem that you suspect the most. If you saw some suspicious network traffic to a Command and Control (C&C) server coming from the system, then I’d start with looking for malicious programs because those are what usually contact C&C servers. If you saw suspicious network traffic based on remote desktop, then start with system configuration and user activity.

A complete triage investigation should cover all of these categories, but you might as well start with the category where it will be easiest to find the initial thread of evidence to pull on.

Conclusion

Host triage is a big problem that is best approached as a set of smaller problems. I look at it as three medium-sized problems with further smaller problems that can be tackled one-by-one. If you think I missed a smaller question, then let me know.

The next set of posts in this series will look at identifying suspicious user activity. Then we’ll do malware and system configuration. As we’ll see, one of the common challenges with these small problems is knowing what is normal.

The series is going to use Cyber Triage as an example tool (but the techniques and ideas are tool agnostic). If you want to try the eval version of Cyber Triage before then, fill out the form.

Intro to IR Triage: Buyer's Guide

I often encounter companies that are starting to think more formally about incident response and how to properly deal with incidents. To help them with that process, I wanted to create a series of blog posts about responding to an endpoint and what to look for.

This is the first article in that series and it focuses on tools and things to consider when picking them. There are lots of tools out there, so I wanted to give some criteria that you should think about when evaluating them. At the end, we’ll look at a handful of tools and evaluate them against the criteria.

Host Triage

To start the series, let’s be clear on what I mean by host triage. I use that term to refer to the analysis of an endpoint or server to answer some specific investigative questions. Namely:

Is this computer compromised?

If so, how badly and have I seen it before?

You will often be asking these questions in response to a SIEM alert or some other indicator of strange behavior.

Host triage does not answer all investigative questions, but it covers what companies care about most. Many of them will use these triage questions to either wipe the system or call in outside experts.

To answer the triage questions, you’re going to need to collect data from the remote endpoint and analyze it. That sounds easy, but there are a lot of different ways to do it. Some companies are constantly collecting from the endpoint and others do it only after an alert. Some tools require lots of manual analysis to interpret the data and others do some of the work for you.

In the rest of this article, we are going to talk about considerations about picking the collection and analysis tools. In future articles, we’ll focus on the specific data that we are collecting and what to look for during the analysis.

My evaluation criteria break into three dimensions:

How the tools evolve based on new threats

Considerations based on your security team

Considerations based on your IT environment

Let’s go over these criteria and look at some examples.

Evolution Based on Threats

Attackers are constantly changing their techniques and looking for different ways of getting in, staying in, and hiding. So, your triage techniques must evolve too.

Consider how your tools support this. If they are largely manual, then you may be responsible for bringing in new tools and techniques. If the tools are automated, then find out how they stay up to date and if you can customize them if you need to look for something that it doesn’t know about.

Security Team Considerations

Let’s first talk about your team. There is no point buying tools your team doesn’t have the experience or resources to use. As a physical world analogy, your office probably has some basic first aid kit supplies, but isn’t stocking scalpels, syringes, and other things that only EMTs, nurses, and doctors should be using. That’s because you’re going to call an ambulance if anything really bad happens.

Your cyber incident response capability should be similar. Have tools that your team can use and don’t bother with things that your outside help or expert teams will bring in.

Collaboration

If you have more than one person on your IR team, then it is important for your software to support collaboration. This prevents information and knowledge from being in only one person’s head.

Some host triage tools will produce only reports, and it is up to you to distribute the results among your team. Other tools store the data in a central place so that multiple users can access it. Others allow for real-time collaboration.

But, tools that provide collaboration will likely require more hardware resources, since they will probably need a server or other appliance. If you have a single-person team today, consider a solution that can scale and grow with you.

Automation

When you take family pictures, do you manually choose the shutter speed or do you let the camera decide? The average person takes better pictures when the camera makes some of the decisions, but an expert photographer can use the manual settings to make amazing photos.

Incident response tools are similar. An IR expert can do amazing things with manual investigation tools, but your average company won’t have an IR expert and should instead get help from automation.

An automated solution will perform a series of steps for you and will point out suspicious things that should be investigated further. 100% automation is not possible, but the tools can certainly help. This makes you faster and more efficient. Even if your team is experienced, automation helps them to make fewer mistakes and helps train new junior responders.

Investigation Depth

Incident response-based investigations tend to have two levels of depth: triage, which answers whether the computer is compromised, and deep dive, which gets to root cause.

Know how deep your team is expected to go. Many companies will hire outside consultants to do the deep dive, in which case you need to focus only on triage. Other companies want to be able to handle everything.

Either approach can work, but make sure your tool choice is consistent. Just like how your company’s first aid kit doesn’t have scalpels, your incident response kit may not need deep dive features.

Integration

If you have a SIEM or ticketing system, consider if it is important that it integrates with your triage tools. Possible integrations include:

Automatically start collection (or a more in-depth collection if you have continuous collection) based on an alert or ticket.

Feed the analysis results back into the SIEM or ticketing system so that other tools can leverage them.

Typically, the more automated investigation solutions offer more integration options.
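The second integration point amounts to packaging triage findings in a format the SIEM or ticketing system can ingest. A sketch of that idea is below; the field names are invented for illustration, since every SIEM defines its own event schema:

```python
# Package a triage finding as a JSON event for SIEM ingestion.
# The schema (field names) is an illustrative assumption, not a standard.
import json

def to_siem_event(hostname, finding, severity):
    """Serialize one triage finding as a JSON event string."""
    return json.dumps({
        "source": "host-triage",
        "host": hostname,
        "finding": finding,
        "severity": severity,
    })

event = to_siem_event("ws-042", "startup item running from Temp folder", "high")
print(event)
```

Feeding events back this way lets correlation rules in the SIEM act on triage conclusions rather than just raw alerts.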

Preservation

It is often important for you to save the data that you collect during the triage.

For regulated industries, preserved data can help during an audit if there are questions about the thoroughness of the investigation.

Future incidents could be related, so having as much information as possible about previous incidents helps provide context.

This incident could be the tip of an iceberg and turn into a large investigation with legal impacts. Having preserved the initial evidence could be important if the bad guy starts to clean up and delete evidence.

Some tools are only a graphical interface that runs on the target computer and do not produce preservable results. We recommend having a process that allows you to preserve the results, which may mean that you need to do extra steps to copy the results if the tools don’t do it for you.

IT Endpoint Considerations

The previous section was about tool buying considerations based on your security team. Now, let’s talk about considerations based on your IT infrastructure. I separate these because, in many companies, the security team does not have direct control over the endpoints. So, make sure your incident response tools are consistent with the culture of your organization.

Administrator Access

Some collection tools require an administrator password at run time, while others rely on agents that were installed in advance.

Some companies don’t want to install agents because they can introduce security risks or there are concerns about impacting endpoint stability.

If the security team does not have administrator-level access to hosts and there are not agents running, then make sure that your collection solution is easy to use by the local IT person who will likely need to do the collection. In my experience, this means:

Easy to email or send to the local IT person

Easy for them to run

Easy to get data back to the security team either via file sharing or by automatically sending results over the network.

Remote Connectivity

If you have remote offices, then also take into account what connectivity you have between the security team computers and the remote hosts. Perhaps there are firewalls in the way that will prevent direct connections or maybe the network connections are not fast enough for large data transfers.

If connectivity is a challenge, consider asynchronous approaches whereby a local IT or security team member can collect data and then upload it via file share to a place that the main office can copy it from. While not as fast as a system that immediately sends data back to be analyzed, a file share approach can be more reliable.

Examples

Let's look at some example Windows-based tools to make these ideas more concrete. This is by no means a comprehensive list, but it highlights some of the concepts above.

SysInternals Command Line Tools

Among the classic incident response tools are the SysInternals command line utilities from Microsoft. These tools provide many pieces of information that can be useful during a response. A typical use is to run 5 or 6 (or more) of them from a USB drive or network share, save the results to text files, and then manually review them for suspicious data.
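That "run several tools, save each output to a text file" workflow is easy to wrap in a small script. Here is a Python sketch; the tool list is a placeholder (a real kit would list pslist, autorunsc, and so on with their arguments):

```python
# Run a list of collection commands and save each tool's stdout to a
# text file for later review -- the manual SysInternals workflow in
# script form. The command list below is a placeholder example.
import subprocess
from pathlib import Path

def collect(commands, outdir):
    """commands: list of (name, command string); writes <name>.txt per tool."""
    outdir = Path(outdir)
    outdir.mkdir(exist_ok=True)
    for name, cmd in commands:
        result = subprocess.run(cmd, shell=True, capture_output=True, text=True)
        (outdir / f"{name}.txt").write_text(result.stdout)

# On a real response this list would name the actual collection tools.
collect([("example", "echo collected data")], "triage-output")
```

Scripting the runs also makes the collection repeatable, which matters when different responders handle different endpoints.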

Here is an example of running the ‘netstat’ tool to look at active connections:

Let’s review these tools based on some of the criteria from above.

Automated

No. These tools are not automated, though they can be the foundation of other automated solutions. If you run them directly, you need to remember the command line arguments.

Updates

Because it is not automated, it is up to you to know which tools to run and what to look for.

Collaboration

No. These tools produce text file outputs and need to be fed into another system to support collaboration.

Depth

Triage. These tools do not go deep enough to allow for a root cause analysis of an endpoint.

Integration

No. Additional automation is needed to integrate these tools.

Preservation

Yes. The output of these tools can be preserved for future reference, but you need to develop an archival system.

Administrator Access

Run Time. These tools will require you to have administrator access when you run them to collect all data.

Remote Connectivity

Because you will often need to copy several of the SysInternals tools to the endpoint, these tools are difficult to run remotely. But, if you do get them running, they work well over slow links because they transfer only small amounts of data.

SysInternals Graphical Interface Tools

Microsoft offers some of their SysInternals features as graphical interfaces. These were built for IT administrators, but can also be used for incident response. They have limitations, though: they don't allow you to preserve all of the results, and they require you to manually interact with the remote system for the analysis (and a full investigation will require multiple tools).

Here is an example of running Process Explorer to look at active processes.

Automated

No. You'll need to run several tools and interpret many of the results. Some automation exists, such as uploading files to VirusTotal and verifying signatures.

Updates

Because it is not automated, it is up to you to know which tools to run and what to look for.

Collaboration

No. The primary output of these tools is a graphical interface, so it is not easy to share.

Depth

Triage. These tools do not go deep enough to allow for a root cause analysis of an endpoint. Similar to the command line tools.

Integration

No. It would be hard to automate these tools so that they could be integrated.

Preservation

Partial. Some of the data displayed by these graphical tools can be saved to text files, but you need to develop an archival system.

Administrator Access

Run Time. If you want to collect all information, you will need administrator access. If they are run without administrator credentials, they will show a smaller amount of data.

Remote Connectivity

Not really. These tools require user interaction and therefore remote connectivity would consist of making a remote desktop connection to the target system.

Cyber Triage

Our Cyber Triage tool is agentless and automates the collection and analysis of data. The collection tool can be remotely deployed and the results are sent back to a central database for analysis. The software analyzes the collected data for malware and suspicious data.

Here is a screen shot of Cyber Triage showing programs that were run on the system (from various registry keys and files); it flags certain files based on malware scan results and heuristics.

Automated

Yes. Both collection and analysis are.

Updates

Malware scanning uses several engines that are updated (via OPSWAT) and heuristics are updated with each release.

Collaboration

Yes. The Team version allows multiple users to access the same data and see results. Users can also see how common or rare data is based on previous investigations.

Depth

Triage. This tool focuses on Triage, but the data can be integrated with our Autopsy forensics tool for a deep dive analysis.

Integration

Yes. The Team version has a REST API that can be called by SIEMs and ticketing systems.

Preservation

Yes. The data is stored in a central database.

Administrator Access

Run Time. These tools will require you to have administrator access when you run them to collect data. Local IT administrators can run the tools from a USB drive and send data back to the security team.

Remote Connectivity

The collection tool can be either sent to the remote host over the network (via PsExec) or can be run from a USB drive. The results can either be sent over the network or saved back to the USB. It was designed to work in a variety of environments.

Live Response Collection

The Live Response Collection from BriMor Labs automates the collection of data. On a Windows system, it wraps the previously described SysInternals command line tools (and other tools) to provide a more automated collection experience. It does not offer additional analytics on top of the collection, though.

Automated

Partially. Collection is automated, but analysis is not.

Updates

Additional collection tools can be added with each release, but you will need to know what to look for.

Collaboration

No. These tools produce text file outputs and need to be fed into another system to support collaboration.

Depth

Triage. These tools do not typically go deep enough to allow for a root cause analysis of an endpoint.

Integration

No. Additional automation is needed to integrate these tools.

Preservation

Yes. The output of these tools can be preserved for future reference.

Administrator Access

Run Time. These tools will require you to have administrator access when you run them to collect data.

Remote Connectivity

This script runs a variety of command line tools, so it is really a collection of tools. Because of this, it is difficult to remotely copy all of the files and run them. But, if you do get them running, they work well in remote offices because they transfer only small amounts of data.

Kansa

There are several PowerShell-based IR tools out there. Let's look at one of them, Kansa. It is a framework that takes advantage of the Windows PowerShell infrastructure to copy scripts around and collect data. Its main focus is on automating the collection of data, and it also provides some analytics on the data to help with frequency analysis.

Automated

Partial. Collection is automated, analysis has some automation.

Updates

New plug-in modules can be contributed to keep up to date. But, you will need to know what to look for.

Collaboration

No. The tools produce output that must be added to another system to enable collaboration.

Depth

Triage. The existing modules focus on triage, but PowerShell could be used to do a deep dive analysis as well.

Integration

No. Additional automation is needed to integrate these tools.

Preservation

Yes. The output of these tools can be preserved for future reference.

Administrator Access

Run Time. These tools will require you to have administrator access when you run them to collect data.

Remote Connectivity

The framework requires that PowerShell remoting be enabled in your environment so that the scripts can be copied to remote systems.
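The frequency analysis ("stacking") that Kansa supports is worth illustrating, since it is one answer to the "what is normal?" problem: an autorun entry seen on only one host out of a fleet deserves a closer look. A Python sketch with made-up hostnames and entries:

```python
# Frequency analysis ("stacking"): an autorun entry present on only one
# host out of many is worth a closer look. Hosts/entries are illustrative.
from collections import Counter

def rare_entries(per_host_entries, max_hosts=1):
    """per_host_entries: {host: [entry, ...]}; return entries seen on
    at most max_hosts distinct hosts."""
    counts = Counter()
    for entries in per_host_entries.values():
        counts.update(set(entries))     # count each entry once per host
    return [e for e, n in counts.items() if n <= max_hosts]

fleet = {
    "ws-01": ["OneDrive.exe", "igfxtray.exe"],
    "ws-02": ["OneDrive.exe", "igfxtray.exe"],
    "ws-03": ["OneDrive.exe", "igfxtray.exe", "MSUpdater.exe"],
}
print(rare_entries(fleet))   # the one-off entry stands out
```

Stacking works best on homogeneous fleets built from a standard image; the more varied the endpoints, the more legitimate one-off entries it will surface.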

Volatility

There are several tools that focus just on memory forensics, so let’s talk about the most popular one, Volatility. Volatility is an analysis tool, which means that you need to use another tool to collect memory. But, many of the memory collection tools are fairly automated and require only a single command. Volatility allows the user to perform a lot of techniques to view processes and data in memory.

Automated

Partial. Some of the plug-ins will simply display data and others will highlight suspicious content.

Updates

There is an active community making plug-in modules, but you need to know which to install and what to look for.

Collaboration

No. The tools produce output that must be added to another system to enable collaboration.

Depth

Triage and Deep Dive. A subset of the modules can be used for a triage investigation, but others help answer other questions.

Integration

No. Additional automation is needed to integrate these tools.

Preservation

Yes. The output of these tools can be preserved for future reference and you have a memory image that can be used as evidence.

Administrator Access

None. The memory collection tools will require administrator access, but Volatility itself needs access to only the memory image.

Remote Connectivity

None. Volatility is not responsible for getting the memory image from the remote system to the analysis system.

General EDR

Endpoint Detection and Response (EDR) tools come in many forms, and because they have "response" in their names I wanted to include them here. EDRs are also rapidly changing, so I'll evaluate general EDR concepts against the previously described criteria.

Automated

Depends. Some of the tools have automated collection and analysis and others offer remote access to the system and you need to do manual analysis.

Updates

Depends on the automation of the solution.

Collaboration

Often. Many of the tools store results in a central location, which makes it easier to collaborate. But, not all tools do and not all offer methods for teammates to work together.

Depth

Depends. Some EDR tools are higher-level and collect the data needed for their detection, but do not go lower. Other tools allow the user to do a deep dive investigation.

Integration

Often. Many of these tools can integrate with other systems.

Preservation

Often. Many of these tools save data so that you can later retrieve it.

Administrator Access

Agents. Many of these tools are agent-based and therefore require administrator access when installing the agent.

Remote Connectivity

Yes. EDR tools are designed to run on the remote target system. Not all EDR tools allow the user to interact with the remote endpoint during an investigation, though.

Conclusion

When building an incident response capability, you need to take the needs of the IT and security teams into consideration. Many of the tools out there are fairly manual and are used by power users. If your company needs a basic host triage capability, then make sure it has the right levels of automation that you need. Are there other criteria that you look at? Let us know.

In future posts in this series, we’ll cover how to analyze the data collected by these tools. In the meantime, if you want to try our Cyber Triage tool, then get a free evaluation copy here.

Get Free Incident Response Software
Thu, 06 Apr 2017

Organizations need to be able to respond to alerts and investigate their computers, but not every organization has an incident response budget or dedicated personnel. The newly released Cyber Triage Lite helps these companies by providing a free way to collect and view data from endpoints.

This post covers what is included in Cyber Triage Lite and a brief comparison with some SysInternals tools.

What’s Inside

The quick answer is that the Lite version of Cyber Triage does not include the automated analytics that the normal version has. For example, it doesn't include malware scanning, flagging of suspicious items, or incident-level grouping. The lists below summarize the differences.

Both the Lite and standard versions include:

Pivot between data types to understand the full scope of the incident.

Threat timeline to show when bad events occurred on this endpoint.

HTML report that shows the bad and suspicious items.

Only the standard version includes:

Push collection tool to remote hosts as needed.

Group and correlate endpoints by incident.

Automated analysis to identify bad and suspicious items.

Correlation of items with previous sessions to determine how common or rare an item is and whether it was previously marked as bad.

Blacklist to identify known bad items and whitelist to ignore known good items.

Typical Workflows

With these features, a typical company would use Cyber Triage Lite in the following way:

Run the collection tool from a USB drive on the endpoint and choose to either send data over the network or to the USB drive.

In the Cyber Triage interface, review the collected data and mark processes, ports, remote desktop connections, network shares, etc. as good or bad based on their knowledge of the target endpoint. Use the threat timeline as a reference to remember when other bad events occurred.

Generate a HTML report with the items that were marked as bad.

Comparison

Let’s look at an example of using some of the free SysInternals tools to respond to an incident versus the free Cyber Triage Lite.

Cyber Triage Scenario

Let’s assume we have an alert about an endpoint and want to investigate it. When using Cyber Triage Lite, we copy the UI-based collection tool to a USB drive and send the data to a security-team laptop running Cyber Triage. The UI-based collection tool is shown here (there is also a command line-interface version):

As the data is being sent back to Cyber Triage, we select the Processes tab and identify a suspicious MSUpdater.exe process based on its “AppData\Temp” path.

We want to know more, such as whether it was automatically started at boot or whether a user may have started it. We go to the bottom of the window, choose "Startup Items," and see that it is started from a registry key (we'll be updating this table to list the specific key).

This now becomes much more suspicious because startup items aren’t typically in Temp folders. To see if it has network connections we change the bottom tab to go to Active Connections and see that it has an outbound connection.

We also have access to the file’s 8 NTFS time stamps, strings, and other metadata to help make a conclusion about the process and file. If we conclude that this process is bad, then we mark it as bad in the Cyber Triage UI.

When we mark it as bad, it will get added to the timeline and included in the final report, an example of which is given here:

SysInternals Scenario

Let’s look at the same scenario with SysInternals, which are a collection of command line and graphical interface tools from Microsoft. During an incident response, you would use several of the tools to collect the data you need.

If we wanted to repeat the above scenario, we have to choose between two approaches. We can run command line tools to collect data and then analyze it elsewhere or run the Process Explorer graphical program and interact with it on the target system.

Let’s first look at the command line approach. We can run ‘pslist’ to see the processes. In this view, though, we do not see the full path that would tell us the MSUpdater.exe process is suspicious.

But, let’s say that we do know that it is suspicious. If we want to see if it was automatically started, we need to then run the ‘autoruns’ program, which produces over 8,000 lines of text output, and search for the string “MSUpdater.exe”. We can see that it was started from the Run key.

If we want to profile the process more, we can run ‘netstat -ano’ and search its text output for the process ID of MSUpdater.exe to see if it has active network connections.

So, we can get similar data as we got with Cyber Triage, but it involves going to different tools with different command line arguments and searching the output for IDs and names. It requires the user to remember a lot of things.
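To show how much manual matching is involved, here is a rough Python sketch of that cross-referencing. The parsing functions and sample output lines are fabricated for illustration and are not part of either toolset:

```python
import re

def parse_pslist(text):
    """Map process name -> PID from pslist-style output (Name Pid ... columns)."""
    procs = {}
    for line in text.splitlines():
        m = re.match(r"(\S+)\s+(\d+)", line.strip())
        if m:
            procs[m.group(1)] = int(m.group(2))
    return procs

def connections_for_pid(netstat_text, pid):
    """Return (local, remote) address pairs from netstat -ano lines ending in the PID."""
    conns = []
    for line in netstat_text.splitlines():
        parts = line.split()
        if len(parts) >= 5 and parts[0] == "TCP" and parts[-1] == str(pid):
            conns.append((parts[1], parts[2]))
    return conns

# Fabricated sample rows standing in for real tool output.
pslist_out = "MSUpdater.exe 4242 8 12 180"
netstat_out = "TCP 10.0.0.5:49233 203.0.113.9:80 ESTABLISHED 4242"

pid = parse_pslist(pslist_out)["MSUpdater.exe"]
print(connections_for_pid(netstat_out, pid))   # [('10.0.0.5:49233', '203.0.113.9:80')]
```

Even this toy version shows the busywork: the PID is the only join key, and every tool formats it differently.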

Alternatively, we could launch the Process Explorer program on the target computer and do the analysis by interacting with the endpoint. When we run it, we see this:

From there, we can see the suspicious process MSUpdater.exe. From the process, we can see process details and see the active connections, but not if it was a startup item.

But, once we identify the process as being suspicious we have no way of recording that in the software or making a final report. This program also doesn’t show us anything about remote logins, DNS cache, network shares, and other data types that Cyber Triage collects and views.

In summary, while the SysInternals tools can be very useful, the command line tools take a lot of effort to match outputs across the various tools, and the graphical tools do not have the same breadth or reporting features.

Try it Out

Cyber Triage Lite gives companies a basic collection and review capability for free. If you want to use it, fill out the form and you’ll get a download link. For a limited time, you’ll also get 15 days of the paid features after you install it. If you want to see some data that you don’t think is being collected, add a comment here or send an email to support <AT> cybertriage <DOT> com.

Cyber Triage Has a New Look
https://www.cybertriage.com/2017/cyber-triage-has-a-new-look/
Thu, 23 Mar 2017

Cyber Triage 2.0 has been released with a new user interface and can be used for free (with a reduced feature set). The new UI helps you make better decisions and prevents missed evidence by automating the incident response triage workflow and giving more context about events. The free version allows companies to collect and view data from live systems, but does not include automated analytics.

This post covers how the new user interface makes it even easier for companies to perform a mini-forensic investigation of endpoints without needing agents to be deployed. We’ll cover the details of the free version in a future post.

New User Interface

As our users know, we are eager to listen to your requests, and add requested features as quickly as we can. This means that we periodically need to rethink the user interface to make it easier to find and use all that Cyber Triage has to offer. For the 2.0 release, we decided it was time for a new look.

The previous 1.7 release:

And the new release:

You’ll notice the following key differences:

All data types that we collect are now shown on the left-hand navigation and you can jump between the data types to focus on what is relevant for your investigation. In 1.7, all of this data was available, but it was scattered in different places.

Some of the data types are more fine grained to make it easier for you to focus on common sources of evidence. For example, there are now separate tables for remote logins, network shares, and DNS cache items because each is important for different reasons and involves different data. In 1.7, they were all in the same “Remote Hosts” table and you had to use filtering to focus on the different data types.

An entirely new timeline feature shows dates associated with bad events and can be used as a quick reference when looking at other suspicious items. For example, imagine one of the malware scanners marked a startup program as bad but 15 others didn’t. You aren’t sure if it is a false positive or new malware that isn’t being widely detected. You can look at the 8 NTFS times on the file and compare them to the mini-timeline to see if any are close to the dates of other known bad events.
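As an illustration of that comparison, here is a minimal Python sketch; the function name and the 24-hour window are hypothetical choices, not Cyber Triage’s actual logic:

```python
from datetime import datetime, timedelta

def near_known_bad(file_times, bad_event_times, window_hours=24):
    """Return the file timestamps that fall within window_hours of any known-bad event."""
    window = timedelta(hours=window_hours)
    return [t for t in file_times
            if any(abs(t - b) <= window for b in bad_event_times)]

# Fabricated example: one of the file's NTFS times lands near a flagged event.
bad = [datetime(2017, 3, 1, 14, 0)]
times = [datetime(2016, 6, 10, 9, 0), datetime(2017, 3, 1, 15, 30)]
print(near_known_bad(times, bad))   # [datetime.datetime(2017, 3, 1, 15, 30)]
```

A hit does not prove the file is malicious, but temporal proximity to known-bad events is exactly the signal the mini-timeline surfaces.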

In short, the new UI makes it easier for users to quickly find the most relevant data and make informed decisions.

More Automated Workflow

Not only does the new UI make it easier to find data, it also makes it easier for the user to understand the Cyber Triage workflow. Automating the workflow reduces training and errors. The workflow is as follows:

Cyber Triage begins automated analysis in the background as data is collected, and the various techniques will classify each item as:

Bad: The item matches evidence from a previous incident or an analysis technique with low false positive rates flags the item.

Suspicious: The item is similar to evidence from previous incidents.

Good: The item was found in the NIST NSRL or a user-created whitelist.

Unknown: None of the above are true.

You review the items marked as “Bad” to verify they are not false positives. You can find them in the “Bad Items” left-hand menu item.

You use the menus to locate the suspicious items.

For each item, you can mark it as Good or Bad based on what you know about the system and its users.

After reviewing the suspicious items, you can then review the other “unknown” items if you are still looking for evidence. This step depends on how deep your investigation is and what evidence you have already found.

You can generate a final HTML report that outlines the findings along with CybOX objects.

As you use Cyber Triage and mark the Suspicious items as Good or Bad, the backend database records this and will help to make future decisions easier.
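The Bad/Suspicious/Good/Unknown cascade described above could be sketched like this in Python; the field names are hypothetical, and the product’s real rules are certainly richer:

```python
def classify(item):
    """Return a triage verdict for an item, mimicking the rule order described above."""
    if item.get("matches_prior_incident") or item.get("low_fp_rule_hit"):
        return "Bad"
    if item.get("similar_to_prior_incident"):
        return "Suspicious"
    if item.get("in_nsrl") or item.get("whitelisted"):
        return "Good"
    return "Unknown"

print(classify({"low_fp_rule_hit": True}))   # Bad
print(classify({"in_nsrl": True}))           # Good
print(classify({}))                          # Unknown
```

The ordering matters: a known-bad match should win even if the file also appears in a whitelist, which is why "Bad" is tested first.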

Try It Yourself

The new Cyber Triage interface makes it easier to find evidence and to train your security team. It automates the workflow so that you don’t miss steps and gives you more context so that you can make better decisions.

If you want to try Cyber Triage 2.0, fill out our form and we’ll send you an evaluation copy.

Exposing More Data to Save Time
https://www.cybertriage.com/2016/exposing-more-data-to-save-time/
Thu, 15 Dec 2016

The new Cyber Triage release allows you to better understand the impact of a threat. Now, you can automatically see which registry keys reference a file with malware, which processes are using the file, and which remote hosts have active connections to those processes.

This new feature in 1.7 allows incident responders to more easily pivot in the data that Cyber Triage collects with its agentless system. This gives a faster and more thorough response than using ad-hoc techniques.

Context is King

When responding to an endpoint, you often want to confirm an incident and determine how bad it is so that you can quickly make some important remediation decisions. To determine how bad something is, you often need to look at what data was accessed and what other hosts and user accounts were involved. You can now do this in Cyber Triage with a new set of tabs at the bottom of the UI. This gives you information about items that are related to the item you have selected.

The new Cyber Triage feature makes it easier to find related data. Let’s look at two examples.

Malware File

Let’s say that Cyber Triage flags a file as being malware based on data from OPSWAT Metadefender. You want to know more about this file on this system.

First, let’s see if it had a persistence mechanism on the system to start at every boot or as part of a scheduled task. We can see in the new interface that there is a registry key that causes it to start each time.

We can also see if the file was associated with a currently running process and, if so, what network connectivity it had. We can see in this example that there was a running process and it had an active network connection to an external address on port 80.

We can also use the User panel to see more information about the user account associated with the file. What were their login patterns, and do they have administrator privileges?

We can also see the 8 NTFS timestamps to see when the file was created on the system to know how long the system had been infected.

With this new feature, we were able to go from a single executable and quickly determine if it was running, how it was started, and what hosts it communicated with. We can now block network traffic to that address and use the dates as a starting point for further investigation.
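The registry pivot from this example can be approximated as a simple lookup over collected startup items. This Python sketch uses fabricated Run-key data and a hypothetical helper name:

```python
import ntpath

def startup_entries_for(file_path, startup_items):
    """Return the startup entries whose command line references the given file."""
    name = ntpath.basename(file_path).lower()
    return [(key, cmd) for key, cmd in startup_items.items()
            if name in cmd.lower()]

# Fabricated Run-key data for illustration.
items = {
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Run\Updater":
        r"C:\Users\bob\AppData\Local\Temp\MSUpdater.exe",
    r"HKLM\Software\Microsoft\Windows\CurrentVersion\Run\OneDrive":
        r"C:\Program Files\OneDrive\OneDrive.exe",
}
print(startup_entries_for(r"C:\Users\bob\AppData\Local\Temp\MSUpdater.exe", items))
```

The call returns only the Run-key entry that references the flagged executable, which is the same file-to-persistence pivot the UI performs.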

Suspicious Startup Item

In another example, Cyber Triage identifies a startup item as suspicious because it is in a non-standard path, for example it could be in a temp folder. However, the malware scanning does not identify it as known malware. It could be a new type of malware that is not yet flagged, so some manual review seems appropriate if you have the skills to do so.

First, we pivot to the file itself in the new interface and look at the 8 NTFS timestamps to look for evidence of time stomping (changing the times after the file was created) and to see how long ago the file was created. In this case, we can see that the dates in the Standard Information attributes are before the dates in the File Name attribute, which typically indicates time stomping. The exclamation marks in the Cyber Triage UI show us this. This file now becomes more suspicious.
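That timestamp check is mechanical enough to script. A minimal Python heuristic, assuming the SI and FN times are paired in the same order (e.g., created, modified, accessed, MFT-changed):

```python
from datetime import datetime

def looks_time_stomped(si_times, fn_times):
    """Flag when any $STANDARD_INFORMATION time predates the matching $FILE_NAME time."""
    return any(si < fn for si, fn in zip(si_times, fn_times))

# Fabricated timestamps: SI claims 2010, but FN shows the file really appeared in 2016.
si = [datetime(2010, 1, 1)]
fn = [datetime(2016, 12, 1)]
print(looks_time_stomped(si, fn))   # True
```

This is a heuristic, not proof: some legitimate operations also produce SI times earlier than FN times, so a hit just raises suspicion, as the post describes.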

We can also go to the file content itself and see if it looks suspicious. Many types of malware are packed and therefore have few visible strings in them. We can look at the entropy value that Cyber Triage computes for the file to see that it is highly compressed, and the interface suggests that it is packed. This file becomes more suspicious.
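If you want to check entropy yourself outside the tool, a standard Shannon-entropy calculation is short; the near-8 threshold for “packed” is a common rule of thumb, not Cyber Triage’s documented cutoff:

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte; values near 8 suggest packed or encrypted content."""
    if not data:
        return 0.0
    n = len(data)
    return sum(-(c / n) * math.log2(c / n) for c in Counter(data).values())

print(shannon_entropy(b"A" * 64))                      # 0.0 -- one repeated byte
print(round(shannon_entropy(bytes(range(256))), 1))    # 8.0 -- every byte value once
```

Plain-text and ordinary executables usually score well below 7 bits per byte, which is why a high value is a useful packing indicator.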

With this new user interface, we were able to quickly review some other indicators of the file and then decide to perhaps upload the file to VirusTotal or some other malware scanning system for a 2nd opinion.

There is More Coming

There are a series of releases planned that focus on interface improvements to make responses faster. We are updating the interface to automate the workflow more and give more context about when events in the incident occurred.

If you would like an evaluation copy of Cyber Triage, please contact us.

Finding Suspicious Program Activity
https://www.cybertriage.com/2016/finding-suspicious-program-activity/
Wed, 14 Sep 2016

The 1.6.1 release of Cyber Triage added a new automated analysis technique to make the life of an incident responder easier and more efficient. The new technique focuses on the programs that were run on the target system.

The motivation for analyzing these programs is to identify malware and tools that could have collected data from the system. With the increase of attacks that involve compromised credentials (versus malware), this information can be critical. The incident responder’s challenge is that hundreds of programs are run on computers and manually reviewing them is time consuming and error prone.

With the 1.6.1 release, Cyber Triage performs both fully and partially automated analysis on the programs that were run. Full automation comes from malware analysis using the OPSWAT Metadefender service. The new partially automated techniques help the responder identify the suspicious programs and ignore the rest.

Partial versus Full Automation

First, let’s define what we mean by fully versus partially automated. I call something fully automated when it requires no human interaction and partially automated when some human interaction is required. See my previous blog post on incident response automation for more details.

Fully automated is great from a time and resource perspective, but it is more likely to generate false positives unless the software knows everything (which is hard when responding to a computer that was not being monitored). Partially automated puts some pressure on the user to make some decisions, but hopefully the software is doing most of the analysis and the user is just approving the results.

We use both fully and partially automated techniques when analyzing what programs were run.

Finding Suspicious Programs That Were Run

Cyber Triage collects information about what programs were run by analyzing Prefetch, registry keys (using our open source rejistry++ C++ port of Willi Balenthin’s rejistry library), and other places. We then collect the executables using The Sleuth Kit to copy them even if they are locked (and not modify access times).

In previous versions of Cyber Triage, we applied an automated analysis technique on these programs to flag them if they were run out of an AppData or Temp folder (because malware often runs out of those places and we wanted to draw your attention to them). This worked great on many systems, but other systems had lots of false positives. In some rare cases, hundreds of them. In general though, it was only a few programs that were causing the false positives.

Many of the items had similar names and many of the files were deleted (so we couldn’t apply malware analysis techniques to the executable). For example, one system had five entries in AppCompatCache for g2mui.exe being run, such as:

Of the five files mentioned in the registry, only three of the corresponding files still existed. The files that still existed could be run through OPSWAT and considered safe if none of the 40+ engines thought it was malware, but what do we do with the other two entries?

Other entries, such as those for dismhost.exe, are from Windows Defender running; in this case, none of them had file content anymore. We needed a solution so that users could quickly review these programs and focus on the relevant items (which may have only one entry).

Helping the User

The latest Cyber Triage release continues to run the EXEs through OPSWAT and it also presents the first responder with suspicious programs to review. The interface allows the user to filter the types of programs and groups them based on similar names. The backend database provides information on other computers that also had that program. These features allow the responder to quickly get a high-level perspective.

For example, the previous examples about g2mui.exe and dismhost.exe are grouped so that the responder can see that 3 of the 5 g2mui.exe files came back clean from malware analysis and that none of the dismhost.exe files existed. Decisions can be made faster because they are grouped together.
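The grouping itself is a simple reduction over the collected records. Here is a sketch in Python with fabricated AppCompatCache-style data:

```python
from collections import defaultdict
import ntpath

def group_by_name(entries):
    """Group (path, file_exists) records by executable name for faster review."""
    groups = defaultdict(list)
    for path, exists in entries:
        groups[ntpath.basename(path).lower()].append((path, exists))
    return groups

# Fabricated records: two g2mui.exe entries (one deleted) and one deleted dismhost.exe.
records = [
    (r"C:\Users\a\AppData\Local\Temp\1\g2mui.exe", True),
    (r"C:\Users\a\AppData\Local\Temp\2\g2mui.exe", False),
    (r"C:\Windows\Temp\dismhost.exe", False),
]
for name, entries in sorted(group_by_name(records).items()):
    existing = sum(1 for _, e in entries if e)
    print(f"{name}: {len(entries)} entries, {existing} still on disk")
```

Collapsing dozens of per-path rows into a handful of per-name groups is most of the review speedup described above.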

You can also use the filters to focus only on executables that were not deleted, which are obviously easier to draw conclusions about. These changes make it much easier and faster to review what programs were run on a suspected system during a triage.

Better Review Your Programs

Get a free eval copy of Cyber Triage so that you can more efficiently review programs and user activity in your environment. Our agentless approach makes it easy to try Cyber Triage and deploy it in your environment. Contact us to get a free evaluation copy.

Dig Deeper: Find More IOCs and Fast Flux Domains
https://www.cybertriage.com/2016/dig-deeper-find-more-iocs-and-fast-flux-domains/
Thu, 21 Jul 2016

Find more evidence on an endpoint with the latest Cyber Triage release. Last week’s 1.6.0 release expands on Cyber Triage’s thoroughness and ease of use. We’ll talk about two new analysis techniques in this post: collecting all file metadata and detecting fast flux domains. Both of these allow any company to perform a more in-depth triage than you’d get with ad-hoc approaches.

It’s All About the Metadata

When using the latest Cyber Triage, you’ll be able to find more indicators of compromise (IOCs) because blacklist rules are now applied to all files on the target system. This allows you to better use threat intelligence to triage systems.

Cyber Triage has always collected content and metadata about files that were referenced in the registry, startup folders, and scheduled tasks. Now, it collects metadata (name, times, size, etc.) about all files as it scans the file system looking for suspicious files.

What this means for you is that if you have threat intelligence about a suspicious path, such as “C:\Temp\BadStuff” and you add that path as a blacklist item, then Cyber Triage will always flag it. When you get updated intelligence about paths, then you can search previous collections for them because the collected data is always stored in the backend database.
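Conceptually, the blacklist matching works like the following Python sketch; the glob-style rules and function name are illustrative assumptions, not the product’s rule syntax:

```python
import fnmatch

def blacklist_hits(paths, rules):
    """Return collected file paths matching any blacklist rule (case-insensitive globs)."""
    hits = []
    for p in paths:
        if any(fnmatch.fnmatch(p.lower(), r.lower()) for r in rules):
            hits.append(p)
    return hits

# Threat-intel path rule from the example above, plus fabricated collected paths.
rules = [r"c:\temp\badstuff*"]
collected = [r"C:\Temp\BadStuff\drop.exe", r"C:\Windows\notepad.exe"]
print(blacklist_hits(collected, rules))   # ['C:\\Temp\\BadStuff\\drop.exe']
```

Because the metadata for every file is retained in the backend database, re-running a rule set like this against old collections is cheap.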

The Cyber Triage collection tool uses The Sleuth Kit to analyze file systems and therefore we can pull all 8 NTFS timestamps and not modify the access times as we traverse the file system. In a future release, we’ll use these time stamps to make timelines so that you have context about what happened before and after each threat.

Another use of this additional metadata is to help a responder know what normal is. Cyber Triage uses the backend database to show you what other hosts have the same file, which helps the responder know how rare or common a file is in the enterprise.

Fast Flux Domains

To avoid detection, malware and malicious programs use a technique called fast flux to quickly change what IP address a domain name maps to when they reach out to their command and control servers. This approach makes it more difficult for companies to block known bad IP addresses at the network level.

When investigating an endpoint, you want to look for dynamic domains that the endpoint connected to because it could be a sign that malware is on the host. You’d find references to these domains in the DNS cache or maybe web browser history. The problem is that there are tens of thousands of dynamic DNS domains. FreeDNS, for example, has over 86,000 domains that a user can pick from.

Cyber Triage now identifies the dynamic domains for you when it displays the hostnames found from DNS cache, active connections, registry, and event log. This makes it easier for you to quickly identify if the endpoint was connecting to a dynamic DNS domain.

Cyber Triage identifies the dynamic domains based on name servers. For example, all of the 86,000 domains from FreeDNS use NS1.AFRAID.ORG as their name server. Cyber Triage comes with a list of common dynamic DNS providers, and users can add their own services to it.
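The name-server approach can be sketched in a few lines of Python; the NS data below is fabricated, and a real tool would use resolved NS records or collected DNS data rather than a hard-coded dictionary:

```python
# Seed provider list; the FreeDNS name server comes from the example above.
DYNAMIC_NS = {"ns1.afraid.org"}

def is_dynamic_domain(domain, ns_lookup):
    """Flag a domain when any of its name servers belongs to a dynamic-DNS provider."""
    return any(ns.lower() in DYNAMIC_NS for ns in ns_lookup.get(domain, []))

# Fabricated NS data for illustration.
ns_records = {
    "evil.example.chickenkiller.com": ["NS1.AFRAID.ORG"],
    "www.example.com": ["a.iana-servers.net"],
}
print(is_dynamic_domain("evil.example.chickenkiller.com", ns_records))   # True
print(is_dynamic_domain("www.example.com", ns_records))                  # False
```

Keying on the provider’s name servers means one list entry covers all 86,000 FreeDNS domains, instead of enumerating each domain.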

More is Better

To properly triage an endpoint, you need to quickly look in the places where evidence could be. The latest Cyber Triage release lets you look at more places for evidence while not taking more time. Contact us to get a Cyber Triage demo and free evaluation copy.

Automating Incident Response: Setting the Stage
https://www.cybertriage.com/2016/automating-incident-response-setting-the-stage/

Many companies want to improve their incident response capabilities and make them more efficient. Automation is often touted as a way to improve response times, but what does automation (or orchestration) mean in DFIR? Can the entire process be automated? Do we want it to be?

To answer those questions, we need to think about incident response differently, and this post is the first in a series that dives into what can be automated in DFIR and how to prioritize its implementation.

This content came out of a talk that I’m giving at the 2016 SANS DFIR summit. While I love the idea of short 30-minute talks (which we also do at OSDFCon), I realized that there was way too much content to cover in that short period. So, these blog postings will have the more complete discussion.

In this posting, we’re going to talk about why we’d automate, how other industries think about automation, and a framework for thinking about automation in IR. It should help you to start thinking about your process and where you should be focusing on automating it.

Why Automate DFIR?

Automation can, in theory, address common issues that security teams have:

The time between alerting and remediation is too long.

The alert backlog is too big.

The responder doesn’t always know what is normal or anomalous on each host.

Responders are bored of doing the same thing every day.

There is no budget to hire more people.

Automation can make the response faster, which means you can get through the backlog faster. Automated systems can store lots of data, which helps to know what is normal. At the end of the series, we’ll review each of these to see if the dreams come true or if automation just moves the problems.

Previous Work

There is a lot of previous work on the topic of automation, so we should have some awareness of that before we dive into IR-specific topics. Previous work ranges from manufacturing to flying an aircraft. There is a fantastic blog post from John Allspaw at Etsy about the basic concepts of automation. He is covering it from the perspective of automating IT and web sites, but the basics apply to IR as well. It’s a must read if you care about this topic. You can think of this section as the parts of his blog that I think are most relevant to automating IR.

What is Automation?

If you search around for a definition of automation, you’ll get a lot of things that are hard to directly apply to incident response. In Allspaw’s article, he lists some fairly academic definitions, such as:

“Automation is defined as the technology concerned with the application of complex mechanical, electronic, and computer based systems in the operations and control of production.” – Raouf (1988)

While I understand the intent of this and the others, they were hard to map to incident response techniques. So, I made a simpler (and less rigorous) definition:

Automation is when the computer does the next step without human intervention.

Levels of Automation

Automation can occur at various levels. Allspaw mentions an article by Sheridan and Verplank that lists 10 levels of automation with various approaches of human interaction.

While that many levels are useful for an advanced discussion of automation and for evaluating specific implementations, I’m using three levels for the basic discussion in this series:

Manual: No computer assistance

Partially automated: Some human interaction

Fully automated: No human interaction

Trust

There are plenty of examples where automation has done the wrong thing. Allspaw gives the example of autocorrect on our phones and the mistakes it makes. I’m sure you have your own bad experiences.

Because of these bad experiences, it is important for people to trust that an automated system is going to do the right thing. The Allspaw blog provides suggestions from Lee and See to help people trust the software. Here are the items I think are most relevant for IR:

Show the past performance of the automation.

Show the process and algorithms of the automation by revealing intermediate results in a way that is comprehensible to the operators.

Simplify the algorithms and operation of the automation to make it more understandable.

Train operators regarding its expected reliability, the mechanisms governing its behavior, and its intended use.

I can attest to the need to be transparent with the automation from our work with Cyber Triage. We automate collection and analysis of endpoint-based data and our initial releases showed very little about what steps were going on behind the scenes because we didn’t want to complicate things for the user. However, people wanted to know what we were doing so that they could trust that we were doing at least all of the same things that they would do. So, we now show much more of that.

The main takeaway from this list is to think about which of these your IR automation solution provides.

Risks of Automation

Allspaw talks about some well known “ironies of automation” from Lisanne Bainbridge, such as wanting to automate things because humans aren’t reliable, yet we can’t automate complex things and end up depending on humans for the most complex things (but yet they supposedly aren’t reliable even for simple things…).

The part that is most relevant for the IR discussion is this from James Reason:

“Skills need to be practiced continuously in order to preserve them. Yet an automatic system that fails only very occasionally denies the human operator the opportunity to practice the skills that will be called upon in an emergency.”

The impact is that when your DFIR automation fails in the face of a new advanced threat, you need to make sure that you have someone who can do the response work manually.

When Should You Automate?

When thinking about automating IR, we need to decide at each step what the appropriate automation level is. I couldn’t find a concise approach to making this decision, so I made up my own.

You should choose the highest level of automation (fully, partially, or manual) where the benefit of automatically performing the next step (instead of doing it manually) is greater than:

The impact and likelihood of a mistake (wrong next step is performed)

The cost of implementing and maintaining the automation (the cost is often directly associated with the complexity of the solution).

The obvious challenge with this approach is that it is hard to have a metric that quantitatively compares the benefit of the automation with its expense and risk of errors. So, it’s more subjective than objective. But those are the three criteria to consider when thinking about the level of automation to choose.
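To make the trade-off concrete, here is a toy Python version of that decision rule using unitless scores; as the text notes, real scoring is subjective, so treat this purely as an illustration:

```python
def pick_automation_level(benefit, mistake_risk, build_cost):
    """Toy scoring for the rule above: go fully automated only when the benefit
    clearly outweighs both the expected cost of mistakes and the build cost."""
    if benefit > mistake_risk + build_cost:
        return "fully automated"
    if benefit > mistake_risk:
        return "partially automated"
    return "manual"

# Illustrative unitless scores, not a rigorous metric.
print(pick_automation_level(benefit=10, mistake_risk=2, build_cost=3))   # fully automated
print(pick_automation_level(benefit=4,  mistake_risk=3, build_cost=5))   # partially automated
print(pick_automation_level(benefit=1,  mistake_risk=6, build_cost=2))   # manual
```

Even with made-up numbers, the structure captures the argument: high-risk or expensive-to-build steps should default to keeping a human in the loop.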

Automation in Incident Response

Now that we have some of the basics, let’s get back to incident response. First, let’s talk about how we think of incident response. Most people think of it as a process with basically these steps:

Identification

Containment

Investigation

Eradication

Recovery

While these are great steps for thinking about the process, I don’t think they are as useful for thinking about the technology involved because the phases have a lot of technology overlap. Instead, I group the response technology into two types of activities:

Investigation: Collecting and analyzing data to answer questions about what happened (primarily the identification phase, though it occurs throughout)

Mitigation: Taking actions to reduce further damage (the containment, eradication, and recovery phases)

Let’s dive into each of them a bit more, but we’ll save the details for the follow-on posts.

Investigation Activities

When responding to an incident, there are often many questions to answer at different times and the investigation work answers them. Here are some common questions:

Triage: Is the computer compromised? How badly?

Deep Dive / Forensics: Who did it, when did it happen, what were they going after?

Hunting / Scoping: Which other computers have this file?

The investigation process can be broken down into three general steps:

Data Collection: Get some data to analyze

Data Analysis: Analyze the data and get some results

Inference: Answer the questions based on the results

In the next posting, we’ll break these three steps down further to really evaluate where we should automate, but a fully automated investigation process would look something like this:

Computer identifies what questions need to be answered based on the incident type.

Computer knows what data types (such as volatile data, registry keys, etc.) are needed to answer the question and what computers have the needed data.

Computer analyzes the collected data to answer the questions, with techniques that range from comparing to IOCs to looking for user behavior that is anomalous for their role.

Computer continues to collect and analyze data until it can make a conclusion about the answer based on the analysis results.

I’d propose that this is currently possible for only a very limited set of questions and scenarios. For example, the question “does this file exist on other computers?” can be fully automated. Other questions fit into the partially automated category. Triage questions are easier to automate than deep-dive questions because they are better understood. We’ll dive into this in the next posting about automating collection and analysis.

Mitigation Activities

When we aren’t investigating during a response, we’re making mitigation-based changes to reduce damage and risk. For example, a host could be removed from the network, a user account could be disabled, or the system could be wiped.

We can break mitigation into two steps for the topic of automation:

Picking the mitigation approach: Use knowledge of the attack and the corporate network to decide what changes to make to best reduce the risk.

Implementing it: Make the changes.

For the topic of automation, the idea is that the software could automatically decide what changes to make for a given incident and could automatically carry them out.

The obvious risk with automating mitigation is that mistakes can be expensive. If you incorrectly collect too much data, it wastes time, but that is not as bad as shutting off network access to some key people in the company by accident.

Like with investigation work, the practical solution for many companies is to have partial automation for mitigation. For example, let the human approve most decisions before it happens except for well known situations where it can be automatically done. We’ll cover this more in a future posting.
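That approval gate might look like the following Python sketch; the policy table and function names are hypothetical:

```python
# Hypothetical policy table of (action, asset class) pairs safe to run unattended.
PRE_APPROVED = {("isolate_host", "workstation")}

def execute_mitigation(action, asset_class, approved_by_human=False):
    """Run a mitigation only if a human approved it or policy pre-approves the combination."""
    if approved_by_human or (action, asset_class) in PRE_APPROVED:
        return f"executing {action} on {asset_class}"
    return f"queued {action} on {asset_class} for human approval"

print(execute_mitigation("isolate_host", "workstation"))        # runs unattended
print(execute_mitigation("disable_account", "domain_admin"))    # waits for a human
```

The policy table is where the risk judgment lives: isolating one workstation is cheap to undo, while disabling a privileged account is exactly the kind of expensive mistake described above.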

Conclusion

Automation is needed to quickly resolve incidents, but we need to critically think about where to apply the automation. We need to understand what types of questions can be automatically answered and what types of mitigations can be automatically applied. In the next article, we’ll break the investigation work into smaller steps to evaluate which can and should be automated and which are better to stay manual given the current technology.

Maturing towards Team-Based Incident Response
https://www.cybertriage.com/2016/maturing-towards-team-based-incident-response/
Fri, 11 Mar 2016

In our last blog post, we talked about how, as an organization’s security posture matures (often along with the organization itself), its strategy starts to move beyond prevention to focus on detection and response. In general, the larger or more valuable the company, the more security incidents it must respond to, and the more complex those incidents can be.

In turn, effective incident response demands more responders with a wider spectrum of skills, who can work together to identify and scope incidents. To make the new IR team as efficient as possible, more than one person needs to be able to access the others’ work at any given time.

Filling communication gaps

Each incident offers an opportunity for a responder to build knowledge. What’s normal for their network? How does this change over time? What data or behavior can the responder consider suspicious within these parameters?

Responders working as part of a team, whether as part of an internal security operations center (SOC) or a consulting team, have to be able to communicate these pieces of information in order to make effective decisions, and to provide effective counsel to key stakeholders.

Because junior responders are still building those types of skills, they’re in an ideal position to perform basic triage and other tasks. However, senior responders need a way to review their work to ensure they didn’t miss anything, are effectively prioritizing systems, and are otherwise sticking to the response plan.

Team visibility streamlines incident response

Just as an individual responder needs a combination of automation and human review to avoid missing critical indicators, team responders need full visibility into what each person has seen and is seeing simultaneously. This helps to identify any gaps as well as the incident’s scope, and improves informed decision-making around which systems to prioritize for containment and remediation.

Cyber Triage 1.5 offers a server-based version that allows multiple clients to leverage the same database and multiple simultaneous collections. While individual responders can start and pause work according to their own workflow and schedule needs, any data they collect is aggregated and accessible to other responders at any time.

The server storing the data is local to the organization’s own network, so there’s no risk of leakage of sensitive data. Responders can tag their own and one another’s work, improving correlation results.

To see Cyber Triage 1.5 in action, visit our exhibit at the SANS Threat Hunting & Incident Response Summit April 11-12 in New Orleans, LA, or contact us for a free demo.

Make Better Use of IDS Alerts for Incident Response
https://www.cybertriage.com/2016/make-better-use-of-ids-alerts-for-incident-response/
Thu, 18 Feb 2016 14:16:13 +0000

If your organization’s security posture is maturing beyond prevention and beginning to focus on detection, you may find yourself evaluating a host of new security technologies.

Among the most attractive for many organizations are network intrusion detection systems (NIDS) or intrusion prevention systems (IPS). These “starter” platforms are easier to roll out, because they don’t involve configuring each endpoint; they require no training, rendering them transparent to end users; and you don’t need to test them on lots of platforms.

You’ve got alerts. Now what?

When you first start out with your NIDS or IPS, the influx of alerts where there were none before can be confusing. Without a systematic approach, you may find yourself unsure of what to do with all of them.

However, it’s important to find a way to examine each and every incoming alert. Even if you aren’t part of a regulated industry — where compliance and audit rules may force you to look into each alert — the inability to clearly delineate false positives from real threats is too great a risk to take.

How so? NIDS alerts about a host system’s communication with a blacklisted IP address provide little context, so an outsider’s attempt to steal Facebook credentials won’t look any different from an attempt to steal admin credentials.

Therefore, while you may investigate an alert one week — only to find that it’s a “tip of the ice cube,” random, small-scale adware infection that doesn’t require a lot of time and effort to contain and remediate — the alert you investigate next week could be a “tip of the iceberg,” widespread compromise of your network, which you were lucky enough to happen upon.

In short, when a NIDS flags command-and-control traffic, it isn’t giving you the whole story. How can you contextualize the parts of the story you are getting?

Put alerts in perspective with triage

A NIDS alert in itself doesn’t need to trigger a full-scale incident response. However, there are basic incident response practices you can put into play yourself to improve your context and help you make better decisions. These include:

Review the system for malware and system configuration changes. Common locations for hidden malware include startup programs, running processes, and scheduled tasks.

Review user activity. Logins, remote desktop activity, network shares, and recently run programs can all provide important indicators of whether account credentials were compromised and whether attackers have been using them to steal information.
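As a rough illustration of how these two reviews can be automated, the sketch below compares collected artifacts against a known-good baseline and flags anything unexpected for analyst review. This is not how Cyber Triage itself works internally; all names, baselines, and artifact data here are hypothetical, and a real collection would come from the live endpoint or a disk image:

```python
# Hypothetical known-good baseline for one endpoint (in practice this
# would come from a gold image or prior collections).
KNOWN_GOOD = {
    "startup_programs": {"OneDrive.exe", "SecurityHealthSystray.exe"},
    "scheduled_tasks": {"GoogleUpdateTaskMachine", "Defender Scheduled Scan"},
}

def flag_persistence(collected):
    """Return startup/scheduled-task entries not present in the baseline."""
    suspicious = {}
    for category, baseline in KNOWN_GOOD.items():
        unexpected = set(collected.get(category, [])) - baseline
        if unexpected:
            suspicious[category] = sorted(unexpected)
    return suspicious

def flag_logins(logins, expected_users):
    """Return login records from accounts outside the expected set."""
    return [rec for rec in logins if rec["user"] not in expected_users]

# Example artifacts gathered from one endpoint (hypothetical).
collected = {
    "startup_programs": ["OneDrive.exe", "svch0st.exe"],  # note the zero
    "scheduled_tasks": ["GoogleUpdateTaskMachine"],
}
logins = [
    {"user": "alice", "type": "rdp"},
    {"user": "support_tmp", "type": "rdp"},  # account nobody recognizes
]

print(flag_persistence(collected))           # {'startup_programs': ['svch0st.exe']}
print(flag_logins(logins, {"alice", "bob"}))  # [{'user': 'support_tmp', 'type': 'rdp'}]
```

The point of the baseline comparison is that the analyst only reviews the deltas, which is what keeps the per-alert time low.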

These activities provide the needed visibility into the endpoint. When they are automated, they can be performed with little user interaction. This way, you need only spend time reviewing the data to make a determination. This saves time and allows you to get through alerts more quickly, keeping your alert backlog smaller and more manageable.

An automated, easy-to-use process also reduces the need for a “deep dive” incident responder or forensic expert, saving additional time and money at a time when you may not even be sure you need that expert.

Cyber Triage automates these and other triage processes so that anyone who is responsible for the initial stages of incident response can assess alerts quickly and use the information to decide what comes next. Contact us for a demo of how you can put it to work for your organization.