Friday, October 30, 2015

Tool testing is a big deal, particularly in forensics where a malfunctioning tool can lead investigators to seriously wrong conclusions. In the area of memory forensics, we need to acquire memory using a tool that has a very minimal footprint. Memory can't be acquired without displacing other memory. We want to displace the minimal amount possible to retain as much unallocated memory as possible.

WinPmem has been a "go to" tool for many investigators performing memory acquisition. It's free, open source, lightweight, and command line. I love open source tools for forensics. Gives me confidence that I can always understand what the tool is doing. WinPmem is everything I'm looking for in a memory acquisition tool and I teach it in the SANS FOR526 memory forensics course (I co-author that course with Alissa Torres). I also do a lot of memory forensics in investigations for clients. I'll highlight all of the reasons another time, but for now just let me say that it's a real game changer.

Upgrade Immediately? Not here...
When WinPmem recently updated to version 2.x, we didn't immediately update at Rendition Infosec. I was excited about the compression that it offered, but it also output in AFF4 and the only tool that really reads that is rekall. It's not that I don't love rekall - I do (in fact, I'll be publishing a new rekall plugin this week or early next week, so stay tuned). But sometimes I need other tools and we just didn't have the bandwidth to update all of our scripts to account for the AFF4 difference. Also (and very critically), we don't deploy forensics tools without testing. This has saved our butts more than once, but in this case the implications could have been huge.

Testing Thy Tools
After reviewing a presentation by Brent Muir on Windows 10 forensic artifacts yesterday, we noted that Brent made some pretty lofty claims about memory usage in WinPmem. Specifically, he noted the difference in memory usage between WinPmem 1.6.2 and WinPmem 2.x was pretty substantial. This wasn't really the theme of the presentation, more of a footnote. But it caused us to step up our game and prioritize WinPmem testing. I engaged Brandon McCrillis, an awesome Senior Infosec Analyst who works with me, to help out with some testing.

Wow. With WinPmem 1.6.2 we see memory use of 1.8MB. With WinPmem 2.0.1 we see memory use of 94.81MB. These came from the same machine. That's 93MB of memory you'll lose just by using a newer version of WinPmem for acquisition. It turns out that newer isn't always better.

Some testing with ELF output format (thanks to Alissa Torres for the idea and initial testing) shows that when the output format is set to ELF, the memory usage is substantially lower. But lower doesn't mean good. Even with ELF, we're still looking at pretty substantial memory usage compared to the 1.6.2 build (22.6 MB vs 1.8MB). Note that we're not comparing apples to apples here. WinPmem 1.6.2 allowed for raw output formats but that ability was removed in WinPmem 2.x. It appears that the formatted output impacts the memory usage (and pretty dramatically at that).

But WinPmem 1.6.2 supports output in ELF format too. Does using ELF output format in 1.6.2 cause increased memory usage? Testing showed that this condition was unique to WinPmem 2.0.1.
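For those who want to reproduce this kind of measurement, one approach is to poll the tool's peak resident memory while it runs. The sketch below is a minimal Linux version that reads the VmHWM field from /proc (on Windows you would query the working set instead, for example via psutil or GetProcessMemoryInfo); the memory-hungry child process here is just a stand-in for the acquisition tool, not WinPmem itself:

```python
import subprocess
import sys
import time

def peak_rss_kib(pid):
    # VmHWM is the process's peak resident set size ("high water mark").
    try:
        with open(f"/proc/{pid}/status") as f:
            for line in f:
                if line.startswith("VmHWM:"):
                    return int(line.split()[1])  # value is in KiB
    except FileNotFoundError:
        pass  # process already exited
    return None

# Stand-in for the acquisition tool: touches ~50MB of memory, then exits.
proc = subprocess.Popen(
    [sys.executable, "-c", "import time; buf = b'x' * 50_000_000; time.sleep(0.5)"]
)
peak = 0
while proc.poll() is None:
    sample = peak_rss_kib(proc.pid)
    if sample:
        peak = max(peak, sample)
    time.sleep(0.05)
print(f"peak RSS: {peak} KiB")
```

The same polling loop works for any command line tool; swap the stand-in for the real binary and its arguments.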

Conclusion
We were pretty amazed at what we found. WinPmem 2.0.1 uses a LOT more memory than 1.6.2. Although we began investigating this thinking it was a Windows 10 issue, we quickly discovered that it was present on all OS versions. For the time being, we recommend using 1.6.2. There's plenty more information in the whitepaper we put together, but this is enough to get the gist.

Blatant Disclaimer
I am in no way indicating that you should not use WinPmem. I personally love the tool and thank Michael Cohen for making it available. He's been made aware of the issue and will hopefully patch in the very near future.

Sunday, October 25, 2015

I was pretty openly critical on social media about the silly "cyber rifle" publicity stunt at the AUSA convention a few weeks ago. CPT Brent Chapman spoke with Popular Mechanics and described the build of the cyber rifle as "an idea we call tactical making, or expeditionary making." Additionally, Chapman goes on to say:

In the future, when targets are guarded by drones and bunkers are vulnerable to exploits, soldiers could easily cobble together practical cyberweapons that cater to their specific needs on the spot, without having to radio back to home base for equipment. "If the Army supports and funds the ability for that infantry platoon leader on the ground to rapidly fabricate a solution with his organic elements (in this case, the 'cyber capability rifle'), then we can save lots of time and money."

All of the tech was placed onto the rifle frame, making it easier for senior military leaders to appreciate.

Of course, a maker site claiming that the Army should be supplied with maker equipment (on a wide scale, no less) is no more surprising than Boeing lobbying for continued support of the KC-135 and KC-X projects. While I'm happy AdaFruit supports the idea of Army makers, you can hardly call this endorsement unbiased.

Pretending to "shoot down" a drone with the "cyber rifle"

I'm all about enabling technology in military operations. And I think we all know that if you include the word "cyber" in the name of your project, you automatically get funding. But when you "place the tech onto a rifle frame" you don't "make it easier for senior military leaders to appreciate" - you make your leaders stupid. They equate the capability with the rifle. At the range the drone was "shot down" from, the same could have been done from the soldier's iPhone. Yeah, the antenna does provide some extra range, but having it mounted on the rifle frame didn't do anyone any favors.

Also, it took some digging to determine that the original capability didn't use jamming and instead took advantage of a known vulnerability in the Parrot drone. This is where the rifle analogy really breaks down. When you build a rifle that shoots a bullet, it inflicts damage against all targets equally. But cyber capabilities (or God help me, "cyber bullets") are only effective against a particular technology. While the "cyber rifle" grounded an unpatched Parrot AR drone, an entire battalion of soldiers with cyber rifles would be completely ineffective against my Hubsan X4.

I'm not sure what agenda CPT Chapman has going in trying to convince the Army that "expeditionary making" is feasible. My company, Rendition Infosec, does what he is talking about. We research vulnerabilities to create new 0-days for attacking software in the customer environment, and we regularly use Raspberry Pi hardware with customized software to get the job done. We even own a 3D printer for building device enclosures. But there's a pretty significant difference between Rendition and CPT Chapman's proposed "expeditionary makers." Maybe one in 1,000 infantry platoon commanders has the skills to build field expedient cyber weapons for exploiting known vulnerabilities. But even then, are we to believe that forward operating bases will have the equipment available to build tactical cyber devices? I think this is a pipe dream that is more than a decade away in the best case.

I've had a number of disconcerting in-person exchanges with military leaders concerning cyber operations (the two recounted here occurred in unclassified environments). In one exchange, a one star general asked if we could make the DVD drives of enemy computers explode since "I saw that on Mythbusters." In another exchange, a two star general talking to CND operators demonstrated his mastery of the cyber domain by explaining that "data packets are like bullets and your walls of fire are like the armor that repels them."

I'll assume that by "walls of fire" he meant firewalls, but that's not the point. His understanding of that which he was commanding was laughable. It's on par with an Air Force commander who thinks his squadron's pilots are flying dragons. And he's making decisions about something he has no clue about. I don't need to know how to fly a bomber to understand its capabilities, but I should understand some key points like crew rest, flight ranges, station time, etc. if I'm making decisions about its use. Cyber should be no different.

We need to be educating our leaders, not trying to explain things in terms they already understand - especially if those are inappropriate analogies. Military officers, especially those at the field grade and general level, are not stupid (well, most of them). Tell them the truth so they can make good decisions. The alternative is that leaders become increasingly confused about their own capabilities. And let's be fair, we don't want leaders confused about the capabilities of their tanks or bombers. Why should they be confused about their cyber capabilities?

Saturday, October 24, 2015

Last week the new hashtag #6wordcyber appeared and it spawned some seriously awesome tweets in the infosec community. I think @thegrugq originally started it, but whoever did has my gratitude. I've collected a few of the best tweets here - many of which had CISSPs and CEHs as the butt of jokes. Without further ado, here's the collection.

I've railed in the past on completely unqualified "infosec degree" holders. Apparently I'm not the only one who has negative experiences there.

Friday, October 23, 2015

The US DoJ has released information about a case in which a suspect recently took a plea deal. The individual, Chris Woods, was let go from his workplace in January 2014. The victim company (unnamed in the press release) smartly terminated his access. However, when firing developers (or anyone who works in IT) you have to take extra precautions. In this case, the fired employee was a web developer. He may have been able to gain access to the credentials of others with little effort, particularly if the organization was using federated authentication.

Mr. Woods used the credentials of another employee without their knowledge or consent and caused more than $61,000 worth of damage to the victim organization. I was previously unaware that you could be prosecuted without the victim organization being named, but apparently that is true here. The press release specifically does not name the victim. However, it probably provides enough information to determine who the victim is.

MFA could have helped
There are a few interesting things I'd like to point out about the case. First, insider threats are very serious. It's good that the victim company had a policy to terminate access when the employee was terminated. That's a good place to start. But they didn't count on the employee having other credentials. These credentials may have belonged to another employee, or may have been a shared account to which Mr. Woods had access. If it were a shared account, it should not have been remotely accessible. However, if it were the account of another employee, multi-factor authentication could have prevented the entire incident. Of course, with IT employees, even MFA can't eliminate all risks. IT employees have the technical chops to plant beaconing backdoors or even register their own MFA tokens on the accounts of others. In short, while MFA provides good defense in depth, don't assume that MFA is a silver bullet.

Terminating Access
At Rendition Infosec, we advise that clients should have two procedures for terminating access - one for general employees and another for those who work in IT or have other elevated permissions. For the latter group, the risks are increased and the response should be as well. Anything less is inappropriate for the circumstances. In organizations where the IT user had access to group accounts (stop using group accounts please) or service account credentials, plans for those to be changed should be part of the employee termination process when possible.

OSINT Exposure
DoJ shouldn't assume that just because they release redacted information that others can't follow the leads. First off, the target area isn't very large. The press release says Winchester, VA. Second, the perpetrator's name and profession are proudly listed in the press release. A disproportionate number of information technology professionals use LinkedIn and other social media sites (and the terminated employee is part of this demographic). We do social media exposure analysis for companies all the time, but this one was ridiculous.

A single LinkedIn search looking for "Chris Woods" who was employed in Winchester, VA as a web developer but terminated employment on or about January 2014 turned up the victim organization in minutes. I called the victim organization's media relations department to ask if they would confirm or deny their involvement, but I haven't received a response. This is the part where my lawyer would probably advise not to name them since I have nothing conclusive. And I'll take his hypothetical advice by not naming them. You can probably figure out the victim as well, but I'll leave that to you in case you care. I will say that if I'm right, there may have been regulatory reporting requirements on the victim's part - depending on what information the developer accessed illicitly.

Thursday, October 22, 2015

The Problem
Three researchers identified serious vulnerabilities in Western Digital hard drives that use hardware encryption. Many of our clients, particularly those in health care, enforce the use of hardware-encrypted drives to ensure the protection of regulated data such as PHI. At Rendition Infosec, we think this research has HIPAA, PCI, and other regulated data implications.

If you are interested in the full gory details you should read the paper (probably with the help of a mathematician). But some of the encryption faux pas are laughable. The paper is available here and slides are available here.

Holy 40 bit encryption, Batman!
In one case, the developers seed the encryption algorithm with a hardware random number generator. But it turns out the random number generator isn't so random after all. It just cycles through 255 random 32 bit values. Forget about worrying whether you have 512, 1024, or 2048 bit encryption. Try 8 bit on for size. Other variations are added into the algorithm so we really have a 40 bit key, but now we're in the area of WEP and less than DES, both critically broken given today's computing power. Note that even if this method had worked perfectly, the output would have been a 64 bit key, which seems arbitrarily small.

Someone tell Jimmy the date of manufacture isn't random at all
Another model uses the manufacture date and time as a seed to generate the key. This is nowhere near random and many models have the manufacture date printed on the case of the hard drive. Super fail.

Several additional attacks are presented in the paper - read up on them if you are interested. The point is that Mickey Mouse apparently built the encryption. But what's the impact to your business?
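To make the scale of the failure concrete, here's a toy model in Python. The seed pool and the key derivation function are hypothetical stand-ins (the real values would come from reverse engineering the firmware), but the attack shape is the same: with only 255 possible seeds, "brute force" means at most 255 guesses, and a manufacture-date seed isn't much better:

```python
import hashlib
import math

# Hypothetical stand-in for the flawed generator: it can only ever
# emit one of 255 fixed 32-bit values, so only 255 seeds are possible.
WEAK_POOL = [(i * 2654435761) & 0xFFFFFFFF for i in range(1, 256)]

def derive_key(seed):
    # Illustrative KDF, not WD's actual key derivation scheme.
    return hashlib.sha256(seed.to_bytes(4, "little")).digest()

# The drive picks one of the 255 seeds...
victim_key = derive_key(WEAK_POOL[137])

# ...so the attacker simply tries all of them.
guesses = 0
for seed in WEAK_POOL:
    guesses += 1
    if derive_key(seed) == victim_key:
        break
print(f"key recovered after {guesses} guesses")  # at most 255

# The manufacture-date seed is similar: a one-year window at
# one-second granularity is only about 2^25 candidate seeds.
print(round(math.log2(365 * 24 * 3600), 1))  # ~24.9 bits
```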

Business Impact
Business impact is where the rubber meets the road. If you've been relying on hardware encryption to protect confidential and/or regulated data, you should probably re-evaluate the decision. Many of our clients like hardware encrypted drives, particularly for use with slower machines, because encryption consumes CPU cycles on the drive controller rather than on the laptop/desktop itself. Don't think this problem is unique to WD. We would be remiss to think that WD is the only vendor to have these issues.

You should also scan for the known vulnerable drives in your environment. Don't trust purchasing records. I can't stress this enough - don't simply scan purchasing records and say "we didn't buy any of these drives through purchasing, therefore there are none in the environment." At Rendition Infosec, we find additional "off books" hardware during practically every assessment. Hardware inventory is hard - it's even harder when the hardware is a peripheral. If you don't have a solution in place to scan for these drives, contact me and I'll be happy to help you.

Finally, review your lost hardware (at least that which was reported). Were any of your lost drives containing regulated data one of the impacted models? Did you issue a breach notification? Probably not if the drive was encrypted. But given that the drives are now known to be trivial to decrypt, you may need to reconsider your breach notification decision. I am not a lawyer, but you should talk to internal counsel.

Wednesday, October 21, 2015

If your communications pass through Germany, the telecom companies there are required to keep your metadata for communications and pass it to law enforcement on demand. If you use SMS in the country, then location data will also be stored for four weeks for law enforcement use.

The law does specify that Germany should evaluate how effective the data is in preventing and solving crimes. There does not appear to be any restriction limiting the law to German or EU citizens - it appears that as long as your communications transit German telecom, your metadata is being recorded, retained, and shared with law enforcement on demand.

With "law enforcement" often defined very broadly, a huge number of people may have access to the metadata. It is unclear whether there will be controls on access to this data, or what those controls would be. The data retention policies may create issues for companies doing business in Germany that they would prefer to remain private. Laws like this also create precedent for other countries in the EU (and elsewhere) who may follow suit.

Tuesday, October 20, 2015

Last week while teaching Enterprise Incident Response (FOR508) for SANS, I stressed the need for device inventories while performing IR. How can you investigate that which you don't know about? One of my students asked me how to get a device inventory if they can't run discovery scans. Don't forget, hardware inventory is #1 on the SANS 20 Critical Security Controls.

Discovery scans (a flashy name for port scans) are often used to identify endpoints, but some folks are concerned that they will cause problems in the network. Over the last decade, I've heard horror stories from clients about how a single errant SYN packet will cause their extremely sensitive devices to fall over.

Some of this is hyperbole. Some of it is reality. I've worked with devices where total failure is the outcome of a half open scan. The device simply doesn't recover and the service is stuck in a half open state until someone power cycles it. Others can't handle a SYN to closed port or a full connect scan. This is unfortunate and certainly makes a great case for not doing discovery scans at all. After all, we can't cause a denial of service just to get inventory data. Makes perfect sense. Or does it?

It turns out that's a false choice. If you have devices on your network that can't stand a port scan, get them identified and segmented today. You shouldn't use the threat of device failure to argue against a device inventory. It's a virtual certainty that sooner or later an attacker will get on your network, and when they do, they'll port scan to find new hosts to pivot to. The difference is that they won't care about how many devices fall over in the process.
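For networks where a full connect scan is acceptable, the core idea is simple enough to sketch. This is an illustrative Python version of a plain TCP connect() scan (the gentlest option, since it completes the handshake rather than leaving connections half open); real discovery work should use a purpose-built tool like nmap with appropriate rate limiting:

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Return the ports on host that accept a full TCP connection."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex() returns 0 on success instead of raising,
            # which keeps the loop simple.
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Hypothetical sweep of one host's well-known service ports
# (192.0.2.10 is a documentation address; nothing will answer):
print(connect_scan("192.0.2.10", [22, 80, 443, 3306, 3389]))
```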

Monday, October 19, 2015

Just a public service announcement. Oracle announced that patches will be released tomorrow for a number of products. No surprise, Java is among the vulnerable apps (shocked face). But the most interesting product on the list in my opinion is the MySQL remotely exploitable vulnerability. Oracle said in its announcement that it has a CVSS score of 9.0. It's been a while since we've had anything even close to that in MySQL.

Of all the apps being updated, I think MySQL is probably the most important to most small and medium enterprises. Sure Java is important, but if you don't have a patching plan for Java, you're already owned (sorry, hard truth here). Some of the other enterprise apps on the list are pretty esoteric and/or not remotely exploitable without authentication. But a lot of businesses I work with at Rendition Infosec have MySQL instances exposed to the Internet and don't really have good patching plans surrounding them. Businesses should take a moment today to evaluate their exposure and prepare to apply patches tomorrow as soon as possible. We should expect one-day exploits reverse engineered from the patches to become available very quickly.

Saturday, October 17, 2015

Aviva (an insurance company) has just disclosed that it had another insider incident involving the theft of data. The company revealed that it believes that insiders leaked information about customers who had been involved in accidents. This led to current and former customers receiving nuisance calls from ambulance chasers.

A couple of key points here:

I checked the Aviva site today and there is no note on the homepage that this even happened. Aviva did notify all impacted individuals. But how are they sure they have notified all impacted parties? If they had such great logging in the first place to know specifically who was impacted, why didn't they catch it more quickly? While they are probably following the breach notification regulations to the letter of the law, the lack of transparency to the general public does not exactly engender trust. If your organization has to notify of a breach in the future, you should conspicuously notify those who are impacted. It's just as important to explain to the non-impacted parties how you know their information wasn't compromised. In the age of unchecked data breaches, "trust us" doesn't really cut it anymore.

Another point to consider is that this clearly demonstrates that the insider threat is real. Many of the clients I deal with at Rendition Infosec dismiss the insider threat completely. After all, that's something that happens to other organizations. Our organization would never have an insider stealing data. In Aviva's case, the theft was discovered because many customers complained about nuisance phone calls. Other data thefts might not have customers positioned to detect the fraud, and theft of trade secrets and other intellectual property is particularly difficult to detect. Organizations that have not taken a serious look at their strategies for detecting insider threats should do so before they are hit with a breach.

Friday, October 16, 2015

If you want to get ahead in DFIR (or any security discipline), your reports have to be understandable. I regularly see people who are more valued by senior leadership, yet less technically adept, get ahead simply because their reports are easier to digest.

I can't recommend enough that you improve your writing style and avoid the use of complicated language when you can avoid it. One of the tools I use for this is the grade level check built into MS Word. Of course not all parts of a report can avoid the use of technical jargon. But when writing an executive summary, I try to stay at or below the 7th grade level.
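If you'd rather script the check than click through Word, the Flesch-Kincaid grade level is simple to approximate. The syllable counter below is a crude vowel-group heuristic (real tools use pronunciation dictionaries), so treat the result as a rough gauge rather than an exact score:

```python
import re

def fk_grade(text):
    # Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)

    def syllables(word):
        # Crude heuristic: count groups of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    total_syllables = sum(syllables(w) for w in words)
    return (0.39 * (len(words) / sentences)
            + 11.8 * (total_syllables / len(words))
            - 15.59)

summary = "The attacker used stolen credentials. We reset every password."
print(round(fk_grade(summary), 1))
```

Run it over each executive summary draft and iterate until the score comes down to your target.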

Apparently I'm not the only one to take this to heart. California's annual data breach report notes that the average reading level of notices sent to consumers about data breaches was 14 in 2012 and 13 in 2013. Since most of our populace doesn't read at this level, this is obviously problematic. Organizations that send breach notices using language the target audience can't understand can expect mistrust to be the outcome. The California breach report specifically notes:

While concerns about litigation risks may cause companies to draft notices in legalistic language that is less than accessible, we encourage companies to work with communications professionals to improve the clarity of their notices. Good writing can make the notices more readable, using techniques such as shorter sentences, familiar words and phrases, the active voice and a layout that supports clarity.

Organizations seeking to instill confidence in customers should use plain language in breach notifications. Security professionals looking to instill confidence in executives should do the same. If you use language in the executive summary that your executives can't understand, expect negative outcomes.

Thursday, October 15, 2015

What information do you consider to be personal? Obviously your social security number, birthdate, middle name, etc. But what about your license plate - should that also be considered personal information protected under breach notification laws? California thinks so.

On October 6th, the CA governor approved a new bill (S.B. 34) which was approved by the CA Senate on October 13th. The new bill treats ALPR (Automatic License Plate Reader) data as private information, subject to breach notification laws. While not directly applicable to most organizations (few of us have access to ALPR data), this is an interesting development because it's an example of expansion of the scope of what's reportable when it comes to data breaches. I expect that in the future we'll see more expansions of the definition of personal information.

In the next post, I'll touch on some other news in data privacy laws that may impact your business.

Sunday, October 11, 2015

I routinely work with people who argue that their 8 character passwords are secure because they force complexity requirements. This simply isn't true. Picking a passphrase (or simply a much longer password) is mathematically more secure. Some smart person will point out that if I know all passwords consist of combinations of only dictionary words, then I effectively have created shorter passwords. While this is true, not everyone creates a password this way and intentionally misspelling a word (this happpens much more than you might reelize) makes things that much more secure.

Let me start by saying that I'm sure I'm not the first to write about password length vs. password complexity. Heck, XKCD even did a comic on it. So why am I writing this post arguing for length over complexity? I was out to lunch with a client the other day and the subject came up. He said "you should write a paper on that." I told him there were probably other blog posts out there explaining it, but after a quick Google search he argued that none explained it as simply. So that's my mea culpa. If you don't like this post, go complain to my client. I'd tell you who they are, but oh yeah (too bad for you), we're NDA'd.

Password policies are all about making passwords hard to guess and the hashes hard to crack. When cracking hashes, there are two approaches (I'm keeping this simple). The first approach is to guess a password you think might be in use (e.g. a dictionary word or '123456'). The second approach is to simply brute force, trying every possible combination of characters. It's the latter (more interesting) case this post will examine.

When brute forcing passwords, we have to assume that on average the attacker will guess correctly in half the total number of possibilities. So which matters more, length or complexity? Well, let's take a look at a standard 8 character password. We'll assume our average user won't use numbers, capital letters, or special characters. This leaves us 26^8 possible passwords. Of course we're ignoring that users will pick recognizable letter patterns (e.g. words).

What if we teach our users about the magic of the shift key and they start using capital letters? That changes the base to 52 but the exponent remains the password length. Now we have 52^8. This is certainly an improvement.

26^8 = 208827064576

52^8 = 53459728531456

If we educate users about the fact that they can type numbers in their passwords too, then the base increases to 62. Okay, that's a bad joke since two of the top 10 most popular passwords are 123456 and 12345678. But how big of a difference is that really?

26^8 = 208827064576

52^8 = 53459728531456

62^8 = 218340105584896

Not too shabby. Add some special characters in and you are now up to a base of 94. How big is that?

26^8 = 208827064576

52^8 = 53459728531456

62^8 = 218340105584896

94^8 = 6095689385410816

So clearly adding complexity to a password works. In fact, of the character classes you can force, special characters help the most (adding 32 to the base), followed by uppercase letters (adding 26). Adding numbers only increases the base by 10.

But what about length? Let's assume for a moment that we don't care about complexity but force the length to change instead. In this case we'll assume that our users will use only lower case characters, giving us a base of 26. We'll also force the users to have a minimum length of 12 characters.

26^12 = 95428956661682176

26^8 = 208827064576

94^8 = 6095689385410816

Note that a 12 character password with no complexity requirement is better than an 8 character password requiring all four character food groups. Math just works this way. When you increase the exponent, you radically change the game for the attacker.
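If you want to check the arithmetic yourself, a few lines of Python reproduce the keyspace tables in this post:

```python
# Keyspace sizes from the tables above:
print(26**8)    # lowercase only, 8 characters
print(94**8)    # full complexity, 8 characters
print(26**12)   # lowercase only, 12 characters

# The 12 character lowercase password wins by a wide margin.
print(26**12 // 94**8)  # roughly 15x larger
```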

What if we follow best practices and move to a 15 character password? Why 15 characters, you ask? Because that's the magic number greater than 14 where LANMAN hashes are no longer stored. Of course you could turn LANMAN hash storage off on the domain controller, but forcing a 15 character password is great for security and kills two birds with one stone. For this, we'll assume that the user is uber lazy and only uses digits for their password (the worst possible case), giving us a base of 10.

10^15 = 1000000000000000

94^8 = 6095689385410816

Well, we finally found something worse than the 8 character password with maximum complexity. But let's be real here: no user (with the possible exception of Rain Man) is going to try to remember a 15 character string of digits. Your users are much more likely to combine some letters together into multiple words like "airlinefoodughh". What's the complexity there?

94^8 = 6095689385410816

26^15 = 1677259342285725925376

Okay, the difference here should be clear. By forcing longer passwords we increase security. Everybody knows that. But users can easily remember a short phrase like "airlinefoodughhmystomach" or "iwanttorideahorse". They are quite a bit worse at remembering things like "Jw176!@t". And let's not kid ourselves. Which password is more resistant to brute force guessing? Incidentally, it's the one that's easiest to remember.

I'd be remiss not to note that some detractors will say that passwords which only string dictionary words together need not be brute forced and are much easier to guess. I'll concede that if the attacker knows that all your passwords are just strings of dictionary words, then the point is valid. But your attacker doesn't (or at least shouldn't) know this. In my many years of password cracking, I've never (to my knowledge) even tried the password "iwanttorideahorse" or its many derivatives.

To those who may note that users will be tempted to write longer passwords down, let's not kid ourselves. Your users are doing that anyway. Long passwords will never prevent users from compromising themselves in the physical domain, but will definitely thwart attackers trying to remotely brute force an account. And in the event that your hashes are stolen, you can rest easy knowing that "soupsmellslikefish", while easy to remember, is less likely to ever be recovered by a cracking program than your lame 8 character password that used a keyboard walk.