From the Field: Sprinting Through An "APT" Casefile - Part II

In Part I, I sprinted through an APT case file detailing a sophisticated, targeted attack on Company's executive management. In this article, I'll discuss why the attacker succeeded, and what Company might have done to better protect itself.

You're probably expecting this discussion to start with the executive who opened the malicious PDF containing an Adobe zero-day exploit. That's not where this story begins.

DISCOVERIES THROUGH THE POST MORTEM PROCESS

Fast-forward to the tail end of the investigation - the post-mortem process.

Through the post-mortem process, the security team went back several months through Intrusion Detection System (IDS) and other security system logs and determined that the attackers had been probing Company's network resources for months. Most of the activity was basic network enumeration, followed by the use of common testing tools like Metasploit and Nessus to probe for vulnerable targets. Note: there was no evidence that the attacker managed to compromise any machines remotely (this may have forced the attacker's hand, pushing them to try additional tools from their APT toolkit). When asked how this activity went unnoticed, the security team investigated and found several startling holes in their security program.

The first hole was that they had no mechanism in place to detect and correlate "slow drip" attacks. Put simply, only barrages of attacks within a small timeframe tripped IDS alarms. Since the attacker probed slowly and methodically, their reconnaissance went undetected.
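Catching a "slow drip" doesn't require exotic tooling - it requires aggregating over a long enough window. Here's a minimal sketch (not Company's actual tooling; the event format, window, and threshold are my own assumptions) that counts distinct ports probed per source over a long window, surfacing scanners whose daily rate stays below rate-based alarm thresholds:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_slow_scanners(events, now, window=timedelta(days=90), min_ports=50):
    """Flag source IPs that touched many distinct ports over a long window,
    even if any single day's activity was too quiet to trip a rate-based rule.
    events: iterable of (timestamp, src_ip, dest_port) tuples from IDS logs."""
    cutoff = now - window
    ports_by_src = defaultdict(set)
    for ts, src, dport in events:
        if ts >= cutoff:
            ports_by_src[src].add(dport)
    return sorted(src for src, ports in ports_by_src.items()
                  if len(ports) >= min_ports)

# A scanner probing two ports a day for two months never spikes,
# but the 90-day aggregate gives it away.
now = datetime(2010, 6, 1)
events = [(now - timedelta(days=i), "203.0.113.7", 1000 + i) for i in range(60)]
events += [(now - timedelta(days=i), "198.51.100.2", 80) for i in range(10)]
print(find_slow_scanners(events, now))
```

The same idea expressed as a SIEM or Splunk search (distinct-count of destination ports by source over 90 days) is a one-liner once the logs are actually being collected.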

The second hole was that very little manual inspection and correlation was done using log analysis tools like a Security Information and Event Management (SIEM) application or Splunk - which has become my de-facto tool for log analysis.

The third hole was discovered in an unlikely location - help desk service tickets. Company didn't flag service tickets for their executives as "persons of interest", and no periodic review of those tickets was performed. The security team manually reviewed service tickets for the entire executive staff over a three-month period and discovered numerous attempts to socially engineer credentials out of service desk staff. These attempts were never reported because of an inconsistency in how the service desk staff were trained: full-time service desk employees received a very in-depth security program; contract employees did not. Company's service desk was 70% contracted staff, so the odds were in the attacker's favor. However, because Company required employees to select at least one password reset question that only they knew the answer to, these attacks were thwarted. Chalk one up for Company.
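Flagging "person of interest" tickets for periodic review is mechanical once you decide to do it. A toy sketch - the watch list, field names, and red-flag phrases below are hypothetical illustrations, not Company's data:

```python
# Hypothetical watch list of executive accounts and credential red flags.
WATCH_LIST = {"cfo", "ceo", "cio"}
RED_FLAGS = ("password reset", "forgot my password", "unlock my account")

def flag_tickets(tickets):
    """Return tickets that concern a watch-listed user AND contain a
    credential-related phrase, queued for periodic security review.
    tickets: list of dicts with 'subject_user' and 'body' keys."""
    flagged = []
    for t in tickets:
        body = t["body"].lower()
        if t["subject_user"].lower() in WATCH_LIST and \
           any(phrase in body for phrase in RED_FLAGS):
            flagged.append(t)
    return flagged

tickets = [
    {"subject_user": "CFO",
     "body": "Caller asked for a password reset over the phone"},
    {"subject_user": "jdoe", "body": "Printer jam on floor 3"},
]
print(flag_tickets(tickets))
```

Even a nightly report like this, dropped in a security analyst's inbox, would have surfaced the social engineering attempts months earlier.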

With these findings discussed, we can walk step by step through how the attacker gained a foothold on the CFO's laptop, and how they managed to conduct their attack over such a long period without being noticed.

DETECTING THE UNDETECTABLE

We have established that Company's perimeter watchdog services were a little lacking, but no direct system compromises were detected prior to the CFO opening the dangerous PDF document. We've also established that social engineering attacks against the service desk were thwarted, yet not reported. The battle now moves to the target: the executive and their laptop.

The attacker spent time before the direct attack on the CFO analyzing Company's press releases and style of writing, and locating contact names at Company who might e-mail executives on a routine basis. The end product of this homework was a finely crafted e-mail to the executive staff, apparently from a trusted individual, with a malicious PDF document attached. This was no ordinary malicious PDF. It was carefully crafted to exploit a vulnerability in Adobe Reader that was unknown to the vendor and to security software vendors - it was known only to the "underground": people who find vulnerabilities in common systems, write exploits, and sell them on numerous websites around the Internet. Company's technical controls were correctly configured, but failed to detect this attack because it was unknown to the security software. Company might as well not have been running any security software at all.

The attacker sent the malicious attachment in an e-mail to the entire executive staff, hoping that at least one recipient would read the e-mail and open the attachment. The CFO took the bait, read the e-mail, and opened the attachment. At this point a number of things occurred:

The exploit was successful, and executed embedded code within the document that installed a malicious application on the laptop.

The application downloaded a toolkit from the Internet, executed it, and provided an encrypted remote shell for the attacker to connect to.

The attacker connected to the laptop through the remote shell.

The attacker set up a Windows Scheduler job to run a hash-dumping tool and dumped the output to a file. The job ran as the SYSTEM user.

The output was analyzed, and the attacker located users with admin privileges.

The attacker attempted to delete the hash tool and its output, but couldn't get their secure-delete application to work properly. They fell back to the 'del' command, which allowed the files to be recovered forensically.

The Windows Scheduler jobs were recorded in the system security log.

The attacker used net.exe to connect to other systems, but didn't spend much time on them to avoid generating IDS or HIDS alerts.

The attacker quietly disappeared, then reappeared several times over the next few months to grab more mail from the CFO's account.
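Note that the scheduled-task artifacts above were sitting in the security log the whole time. A minimal sketch of surfacing them - the event IDs are the real Windows "scheduled task created" security events (602 on the 2000/XP generation, 4698 on Vista and later), but the record format and the expected-account list are assumptions for illustration:

```python
# Windows "scheduled task created" security events:
# 602 on the 2000/XP generation, 4698 on Vista and later.
TASK_CREATED_IDS = {602, 4698}

def unexpected_task_creations(records, expected_accounts=frozenset({"backup-svc"})):
    """Return task-creation events attributed to any account outside the
    expected set - exactly the artifact the attacker's hash-dump job left.
    records: list of dicts with 'event_id', 'account', and 'task' keys."""
    return [r for r in records
            if r["event_id"] in TASK_CREATED_IDS
            and r["account"] not in expected_accounts]

records = [
    {"event_id": 602, "account": "CFO-LAPTOP\\SYSTEM", "task": "w.exe"},
    {"event_id": 4698, "account": "backup-svc", "task": "nightly-backup"},
    {"event_id": 528, "account": "jdoe", "task": ""},  # ordinary logon event
]
print(unexpected_task_creations(records))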
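Note that the scheduled-task artifacts above were sitting in the security log the whole time. A minimal sketch of surfacing them - the event IDs are the real Windows "scheduled task created" security events (602 on the 2000/XP generation, 4698 on Vista and later), but the record format and the expected-account list are assumptions for illustration:

```python
# Windows "scheduled task created" security events:
# 602 on the 2000/XP generation, 4698 on Vista and later.
TASK_CREATED_IDS = {602, 4698}

def unexpected_task_creations(records, expected_accounts=frozenset({"backup-svc"})):
    """Return task-creation events attributed to any account outside the
    expected set - exactly the artifact the attacker's hash-dump job left.
    records: list of dicts with 'event_id', 'account', and 'task' keys."""
    return [r for r in records
            if r["event_id"] in TASK_CREATED_IDS
            and r["account"] not in expected_accounts]

records = [
    {"event_id": 602, "account": "CFO-LAPTOP\\SYSTEM", "task": "w.exe"},
    {"event_id": 4698, "account": "backup-svc", "task": "nightly-backup"},
    {"event_id": 528, "account": "jdoe", "task": ""},  # ordinary logon event
]
print(unexpected_task_creations(records))
```

On most networks the list of accounts legitimately creating scheduled tasks is tiny, which makes this one of the cheapest high-signal alerts available.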

WE'VE BEEN PWN3D

Back at the post-mortem meeting, the events above were examined and discussed in great depth. What were the take-aways from these discussions?

Protecting users from zero-day attacks is hard. Technical tools are not enough. These attacks are targeted, so we need to educate the targets better on the methods of the attackers, and give them a clear communication process if they think they are being attacked, or have fallen victim to an attacker. Frequent awareness training and communication for all staff is mandatory, period.

Company was using an outdated web proxy with very few filtering rules in place. It was determined that upgrading the proxy environment and enforcing better content filtering rules was a must. All proxy logs would be fed into log analysis and correlation software.

Company had deployed Host Intrusion Prevention Software (HIPS) to all laptops, but had it running in detect-only mode. These alerts were configured to report back to the HIPS server. Unfortunately, nobody was watching the logs on the HIPS server. These logs would now be fed into log analysis and correlation software.

Windows event logs (including events exposed via WMI) were not being monitored adequately, and Windows Scheduler logs were completely ignored. Both were reconfigured to report to log analysis and correlation software.

Security staff training was insufficient. Consulting experts were brought in to educate staff, implement better processes, and help with the technical integration pieces.

Service desk training would incorporate several hours for full time and contracted staff on social engineering and the proper reporting procedure for suspicious activities.

An immediate investment was made into log analysis and correlation software, including training.

Company's CIRT process was re-designed to incorporate the increase in the number of reporting entities.
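Most of these take-aways boil down to one idea: get the logs into one place and join them. A toy correlation - pairing HIPS alerts with scheduled-task creations on the same host within a short window - shows the kind of join nobody was doing manually (field names and the window are assumptions):

```python
from datetime import datetime, timedelta

def correlate(hips_alerts, task_events, window=timedelta(minutes=30)):
    """Pair HIPS alerts with task creations on the same host within `window`.
    Either signal alone sat ignored; together they tell a story.
    hips_alerts: dicts with 'host', 'time', 'signature'.
    task_events: dicts with 'host', 'time', 'task'."""
    pairs = []
    for a in hips_alerts:
        for t in task_events:
            if a["host"] == t["host"] and abs(a["time"] - t["time"]) <= window:
                pairs.append((a["signature"], t["task"]))
    return pairs

t0 = datetime(2010, 3, 1, 2, 0)
hips = [{"host": "cfo-laptop", "time": t0, "signature": "credential-dump tool"}]
tasks = [
    {"host": "cfo-laptop", "time": t0 + timedelta(minutes=5), "task": "w.exe"},
    {"host": "file-srv", "time": t0, "task": "defrag"},  # unrelated host
]
print(correlate(hips, tasks))
```

Any SIEM does this join far better, of course - the point is that the raw material for detection already existed in Company's logs; no one had connected it.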

JUST ANOTHER CASE

I could say this was just another case, but the truth of the matter is that these cases are becoming all too frequent and similar in nature. Attackers have found that by attacking a victim from multiple fronts, with a variety of tools, and by being patient, the payoffs are astounding.

This is purely my opinion, but these attacks are successful because the victims:

Have an unbalanced security program that doesn't incorporate technology, people and underpinning processes

Implement controls poorly

Don't test their controls themselves to validate third party testing

Don't have third parties test their controls to validate their own testing

Deploy tanks and bombs at the front door, and leave the windows and back door wide open

Don't monitor the activities of trusted users

Don't properly classify their assets, including their people - they don't know who the most targeted people in their organization are.

Have a security strategist who isn't technical enough to understand the problem or recommend valid solutions - or who is solely focused on technical controls and can't integrate the people and process components due to constraints in the organization

Don't understand n-tier architecture, don't understand zones of trust, and don't support those that do understand it.

Have poor risk management and assessment skills overall.

There are many others of course. :-)

I could say "prepare yourself! There be dragons here! Beware the APT!", but I'm not going to. I'm going to recommend that you get back to the basics of information security, and continue to fight the good fight. Just be aware that targeted attacks are occurring at a frequency that I haven't seen in years, and everyone is a potential target.

I have been seeing stories like this popping up more frequently lately. As I have just recently "truly" entered the security world (working on that good 'ole CEH right now), there have been some real eye openers for me.

The problem I have is that when I start talking about "training" and "teaching", I see the eyes glaze over and everyone is done listening. They just see it as IT wanting to spend more of their hard-earned money. It's a sad realization when you're fighting the fight and no one is willing to help you fight it.

Chief, have you ever had to talk someone off the ledge about using the term APT? :) Do you get many clients who inflate an attacker to cover their own butts, whether overtly or subconsciously?

Day 1: This is a great illustration of the sad state of "detection" in so many orgs. Something weird is happening, and something weird is seen in the logs, by sheer luck.

There are so many ways a curious analyst could find something like w.exe being executed, or mail header anomalies, or any scheduled tasks at all. Heck, even in this case automated tools should be able to send out cries for help (poison ivy!). Or, I don't know, the CFO telling someone about unsolicited attachments? (I admit, I imagine almost every exec is way too busy to bother, since security will just steal away more time....)

@Lonervamp: Yes, APT is the default go-to term any time people are attacked nowadays. And yes, there are some folks out there that scream "the sky is falling" to get an investigator/responder in the door - only to find us asking them a lot of uncomfortable questions. Questions like "You weren't hacked - someone fat-fingered a maintenance script. Do you want to tell your boss or do we?" :-)

Very accurate according to what I have seen in the past on my APT cases, except that I am very skeptical the mail came from servers in Russia and was exfilled to servers in Brazil. Almost 80 percent of the time it comes from servers in the US if the US is the victim.

Another great case from the Security Monkey, but I must take exception to one point: "Splunk - which has become the de-facto industry tool for log analysis." The market for SIEM is pretty big right now, and Splunk is certainly one of the major players, but it's by no means the only one or the market leader for security event correlation. Statements like that make me wonder if Splunk is buying advertising on Toolbox.

@Larry - Okay, okay - I guess I should have said "which has become MY de-facto tool for log analysis.". I love Splunk, and anyone that follows me on Twitter knows that I talk about it all the time. And no, they don't buy advertising AFAIK. Not that I would see any of the money. ;0)

@pand0ra: I think the whole industry is tired of APT - except those that are using it to sell the same old security solutions to the same old customers that "don't get it". APT simply exploits age-old problems that victims still don't understand, don't want to fix, or blindly accept risk for. Security professionals need to step up and educate at a level that the people with the purse strings understand - cost/benefit. Spend $$$ now, or spend $$$$$$$ later when customers leave/sue.

How do functional users deal with situations like this where 'security' already has the IT system so locked down that they haven't done anything other than 'keep the lights on' for the last ten years?

Every time there is an 'OMG the sky is falling' type article or 'hack' in the news, security 'audits' things - which really just means going in and taking away access to whatever they don't think users need. The head of their group has actually said, "Take it away; if the users don't ask for it back then obviously they didn't need it." Some of the things taken away are used once a year, at year end. Know how annoying it is to search for something that you know should be there, and was there when you used it last year, but is now gone thanks to a security 'audit'?

I'm just venting, I understand the need for security as much if not more than our IT security group, but we get 'security theater' based on media reactions, while after working here for 10 years I'm still asking, "Can I have a sandbox of the system functionality and tools that we own but haven't turned on in the last 10 years to see if there is anything useful in there." With the answer being, "No. There is stuff in there that 'might' be a problem." Keep in mind I'm asking for a copy of the Vendor provided publicly available DEMO environment, and being told, "No, we don't know what we need to secure, but we know we can't give you access to a DEMO environment".

When the security group knows they don't know enough to stop bad things, and that the only truly 'secure' system is one that isn't connected to the Internet and is turned off... you can end up with some messed-up 'security policies'. I'm sure the TSA would be proud if they could see us now. Security theater (without actual security) at its finest.

@chrish47: sounds like your security and business teams need to meet and talk about risk appetite and operating in the warm gooey middle of security. Polar security organizations do not contribute to business growth and success.



Interested in information security? Like a good mystery? Addicted to shows like CSI? Want to see real-life challenges posed to an investigator with over 18 years of experience? You'll find the entire casefile library here for your reading pleasure. Not only are they educational and entertaining, but highly addictive. You've been warned!


Copyright 1998-2015 Ziff Davis, LLC (Toolbox.com). All rights reserved. All product names are trademarks of their respective companies. Toolbox.com is not
affiliated with or endorsed by any company listed at this site.