Saturday, December 21, 2013

Lenny Zeltser sent me an article last week about a new keylogger (md5sum 21f8b9d9a6fa3a0cd3a3f0644636bf09). The article mentioned the fact that the keylogger uses Tor to make attribution more difficult. That's interesting, sure. But once I got the sample, I noticed that it wasn't compiled with the Visual Studio or Delphi I'm used to seeing malware use. Then I re-read the article and found a mention of it being compiled in Free Pascal. Well, that changes the calling conventions a little, but nothing we can't handle. However, it is worth noting that IDA doesn't seem to have FLIRT signatures for this compiler, so library identification is out (sad panda time).

EDIT: FLIRT is used by IDA to identify runtime libraries that are compiled into a program. Reverse engineers can save time by not analyzing code that is linked into the binary but not written by the malware author. It is not unusual for a binary to get 30-50% (or more) of its functions from libraries. In this case, IDA identifies 3446 functions. However, none of them are identified as library functions. To find probable user code, we'll anchor on cross references from interesting APIs unlikely to be called from library code.

The first thing I need here is an IOC to track this thing. After all, as I tell all my classes, IOCs pay the bills when it comes to malware analysis. Sure, I could fire it up in a sandbox, but I'd so much rather do some reversing (okay, if not for this blog post, I'd use a sandbox, especially given the lack of FLIRT signatures for this sample).

For IOCs, I always like to find a filename, something that is written to the system. I searched the imports table for APIs that would return a known directory path. In this case, I found GetTempPath, a favorite of malware authors. Guess where a file named system.log can be found? You guessed it: in the %TEMP% directory of the user who executed the code.
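If you want to hunt for this IOC yourself, the path is trivial to reconstruct. Here's a quick sketch (my code, not the malware's); Python's tempfile.gettempdir() resolves the same per-user temp directory that GetTempPath returns on Windows:

```python
import os
import tempfile

def keylogger_ioc_path():
    """Build the candidate IOC path: %TEMP%\\system.log for the current user.

    tempfile.gettempdir() honors the TMP/TEMP environment variables, the
    same directory GetTempPath resolves for the user running the code.
    """
    return os.path.join(tempfile.gettempdir(), "system.log")

def ioc_present():
    """Return True if the candidate IOC file exists on this machine."""
    return os.path.isfile(keylogger_ioc_path())
```

Keep in mind this is only a candidate indicator; as noted below, the file may be deleted later in execution.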

However, it is worth noting that later in the function, there is a call to DeleteFile. I didn't take the time to fully reverse the function yet so I don't know if this will always be called (a quick look makes it appear so). But we're after some quick wins here (and this isn't a paid gig), so we're moving on. This means that our %TEMP%\system.log file may not be there after all. Bollocks... Well, you win a few, you lose a few.

Well, now that's interesting... A call to GetTickCount where the return value is placed in a global variable. This might be some sort of application-coded timer. Or it could be a timing defense. Sometimes malware will check the time throughout program execution. If too much time has passed between checks, the malware author can infer that they are being run under a debugger (a bad thing for a malware author). Note that GetTickCount returns the number of milliseconds since the machine booted. Millisecond precision may not be sufficient for some processes, but for detecting debuggers it will do just fine.
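For readers who haven't seen a timing defense before, the logic is simple enough to sketch in a few lines of Python. This illustrates the general technique, not this sample's code, and the 500 ms threshold is my own arbitrary pick:

```python
DEBUGGER_THRESHOLD_MS = 500  # an assumed threshold; real malware picks its own

def ticks_elapsed(before, after):
    """Wraparound-safe elapsed time between two 32-bit tick counts.

    GetTickCount wraps to 0 after ~49.7 days; masking to 32 bits keeps the
    subtraction correct across the wrap, just like unsigned math in x86.
    """
    return (after - before) & 0xFFFFFFFF

def looks_debugged(before, after, threshold_ms=DEBUGGER_THRESHOLD_MS):
    """A classic timing defense: too much time between two nearby checks
    suggests someone was single-stepping the code in a debugger."""
    return ticks_elapsed(before, after) > threshold_ms
```

Real samples vary the threshold and scatter the checks throughout the code, which is what makes this defense annoying to hunt down.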

Let's see if we can find the timing check and see what's actually being done here. To do this, cross reference from dword_462D30.

Good: there are only two other references that IDA found. This might be easy after all. Also, to keep myself sane, I'm going to rename this global variable to time_check.

So is this a debugger timing check? If it is, the malware author is doing it wrong (really, really wrong). Nope, in this case, the malware author is checking to see if more than a full day has passed since the original call to GetTickCount. The old value is moved into ecx. The return value of GetTickCount is placed in eax (like the return value from all Windows APIs, per the stdcall convention). Then, the old value is subtracted from the new value. A check is performed to determine whether more than 86,400,000 milliseconds have passed since the original GetTickCount call. That value should look familiar to programmers: it's the number of milliseconds in a 24-hour period. Okay, so this means that the malware is going to do something once per day while the machine is booted...
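If you want to play with that arithmetic, here's a tiny Python model of the pattern (my sketch of the logic, not decompiled code; the tick values stand in for real GetTickCount returns, and I'm assuming the global is refreshed when the payload fires, since otherwise the payload would run on every subsequent check):

```python
MS_PER_DAY = 24 * 60 * 60 * 1000  # 86,400,000 -- the constant in the binary

class DailyTrigger:
    """Model of a once-per-day gate driven by a GetTickCount-style counter."""

    def __init__(self, start_ticks):
        # Stand-in for the time_check global (dword_462D30 in this sample)
        self.time_check = start_ticks

    def should_fire(self, now_ticks):
        """Fire if more than a day of uptime has passed since the stored value.

        The & 0xFFFFFFFF mask mimics 32-bit unsigned subtraction, so the
        comparison stays correct even across GetTickCount's ~49.7-day wrap.
        """
        if ((now_ticks - self.time_check) & 0xFFFFFFFF) > MS_PER_DAY:
            self.time_check = now_ticks  # refresh so we don't fire repeatedly
            return True
        return False
```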

Examining the code further, we note that the only difference in execution at this location is a possible call to sub_42BBB0. Wow. Glad I wasn't debugging this! I might never have seen whatever was in that subroutine (my debugging sessions tend to last far less than 24 hours).

After jumping to sub_42BBB0, I found that it was the subroutine that brought us here in the first place. This makes sense. To prevent this code from executing over and over, the value in the time_check global variable would have to be updated. So maybe the %TEMP%\system.log IOC is a winner after all... Maybe it is purged and recreated once every 24 hours? I don't know yet, but I've started to unravel some functionality that my sandbox wouldn't have (and that's what real reversing is all about).

I'll continue later this week with a further look at the malware. I know we didn't hit the keylogger portions at all. However, in all fairness I was writing this as I was reversing. I still have holiday shopping (and courseware updates) to do today, so this will have to suffice for now. Hopefully this is of some value to those who are interested in reversing.

I fully expect this sample to show up in one of my SANS courses (FOR526 and/or FOR610). It has some neat properties and is a real treat to dive into. If you'd like to get a copy of this (and other samples), join me at the SANS CTI Summit in DC this February, where I'll be teaching malware reverse engineering. This year, I added a sixth day to the course: a pure malware NetWars capture-the-flag challenge. This means you get a chance to put your new reversing skills to the test in the class! I look forward to seeing you in a future course.

Thursday, December 12, 2013

You've probably seen the story of Eric Rosol, the man who was just ordered to pay $183,000 to Koch Industries for participating in a DDoS attack against their website.

According to publicly releasable information, the site only went offline for 15 minutes as a result of the attack. The attack itself reportedly lasted less than 5 minutes, and Mr. Rosol only participated in the attack for 1 minute. As far as we know, Mr. Rosol did not initiate the attack, which was accomplished using Low Orbit Ion Cannon (LOIC). LOIC is a DDoS attack tool that supports crowdsourced attacks over IRC. Mr. Rosol might have connected his LOIC instance to IRC or manually started and stopped the attack (I don't know which for sure, and it isn't relevant for this case). He does, however, admit that he participated in the attack.

So what were the damages?
The actual damages for a DDoS attack on a website are hard to quantify. If you took down amazon.com for instance, it would be easier to quantify the losses by examining a comparable sales period. But in the case of amazon.com, the website directly drives revenue. What happens when the site doesn't generate revenue directly? What if it's a site that only serves as a "front door" or advertisement for the company? Certainly a loss is still incurred when the site goes offline. Investors get scared about the company's security and real system admin time is used to monitor and respond to the incident. But these costs get pretty murky to quantify. In this case, Koch determined that the cost of the outage was $5,000.

Should Mr. Rosol be responsible for damages?
Personally, I think it's a big stretch to say that Mr. Rosol should even be responsible for the entire $5k cost (if that really is the cost). He may be the only person who was arrested in this specific case, but the first 'D' in DDoS means Distributed. There were lots of people involved. Now, please understand that I am not a lawyer, so I could be really wrong here. But when multiple people are captured on surveillance video performing acts of vandalism but only one is caught, are they fined for the entire damages? What if additional suspects are caught? Will they also be fined for the entire damages? That sounds dumb to me, since it appears that victims could obtain multiples of the actual damages.

Wait, was it $5,000 or $183,000?
So this is where the case gets strange, and quite honestly, infuriating. When Koch Industries suffered downtime due to the DDoS that Mr. Rosol participated in, they decided to bolster their defenses against future attacks. To that end, they hired outside security contractors. It isn't known what the expenses entail, but they reportedly spent $183,000 with the contractor. This value was used by the judge to order a fine for Mr. Rosol.

Mr. Rosol did the crime, he should pay.... right?
The $183,000 fine represents a significant misunderstanding on the part of the justice system about computer crime. If you disagree, work through this intellectual exercise with me. Suppose that Mr. Rosol committed a physical crime, such as forcibly blocking the entrance to a convenience store. He was only able to block access to the store for a short time before the police forcibly removed him from the premises. During the "blockade" the convenience store estimates that they lost $5,000 worth of business (a hard number to quantify). The convenience store does not want this type of attack to ever happen again. The store hires a contractor to study the event. The contractor realizes that Mr. Rosol exploited a design flaw in the store entrance layout that allowed him to block access in the first place. The contractor recommends changes to the store entrance, some of which are implemented. The total cost for the contractor and store renovations is $183,000. In this physical crime analogy, would Mr. Rosol be on the hook for the $183,000 spent studying the event and making store renovations? Of course not. I can't think of any examples where this might be true.

Great analogy, why did he get fined $183,000 then?
I have no idea why Mr. Rosol got fined so much. I don't have the transcript of the sentencing proceedings, but I'd love to know what Mr. Rosol's lawyer argued to the court. Did he or she use a similar analogy? If so, did the court fail to understand the argument, or did it just not care? I predict that Mr. Rosol's fine will be challenged in the legal system. I don't know the legality of any challenge since Mr. Rosol pleaded guilty to the offense. In any case, I think this is a wake-up call for everyone in the computer security field that the justice system still doesn't "get it." We need reform of the CFAA (the law under which Mr. Rosol was charged) and we need it now. We need better sentencing guidelines. But what we really need are courts that understand how technology and computer crime actually work.

Friday, December 6, 2013

The bulk of this blog post came from the answer I gave to a question that one of my SANS FOR526 (Memory Forensics) students sent me about file formats and extension names. Specifically, he wanted to get some information on the difference between files with a .vmem extension and the .raw files output by DumpIt, a great, free memory dumping utility. I told him:

The .vmem extension is used by VMware to indicate that a file represents the contents of physical memory on a guest virtual machine. You would get a filename of .vmem if you used the snapshot method of obtaining a memory capture from a VM. Alternatively, you can capture the .vmem by pausing the VM, but this is less ideal since network connections are broken and VMware Tools provides notifications to software in the VM guest.

In the case of a physical machine, DumpIt will provide a filename with a .raw extension. Presumably this is used to differentiate it from memory captures that include capture-specific metadata in the file format (HBGary's .hpak format is one such example). Another example of a memory capture with metadata might be an .E01 captured with winen.exe (provided by EnCase). Your tools will work identically on a .raw and a .vmem file.

Of course there are many other file formats where physical memory may be found. One such format is the hibernation file. I love using hibernation files in cases, especially when volume shadow copy is enabled on the machine. Sometimes I have several historical memory images that I can perform differential analysis on. This may help determine when a compromise occurred, particularly if anti-forensics techniques were employed to destroy timestamps on the disk.
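As a toy illustration of that differential approach (my own sketch; in a real case you would diff Volatility or Bulk Extractor output rather than raw strings), you can extract printable strings from two captures and diff the sets:

```python
import re

def extract_strings(data, min_len=6):
    """Pull printable ASCII strings from a raw memory image, the same idea
    behind strings(1). min_len filters out short runs of noise."""
    pattern = rb"[ -~]{%d,}" % min_len  # printable ASCII, min_len or longer
    return set(m.group().decode("ascii") for m in re.finditer(pattern, data))

def new_artifacts(older_image, newer_image):
    """Differential analysis: strings present in the newer capture but not
    the older one -- a quick way to spot when something first appeared."""
    return extract_strings(newer_image) - extract_strings(older_image)
```

Crude as it is, this kind of diff can bracket when an artifact (a mutex name, a beacon URL) first showed up in memory, even when on-disk timestamps were wiped.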

A final memory image format that comes to mind is the crash dump. While this requires that a machine be appropriately configured to create a dump, many are (especially servers). The crash dump is particularly relevant to rootkit detection as the fateful BSOD is most common when loading new kernel mode software (many rootkits are implemented as device drivers). There are several tools that can convert kernel memory dumps (.DMP files) into physical memory dumps (to be consumed by memory analysis tools). But they aren't needed if all we want to do is run Bulk Extractor (BE). Because the memory contents in .DMP files are not compressed, the data can still be accessed. The additional metadata added to a .DMP file (debugging related) isn't a concern for a tool such as BE that ignores internal file structure.

My student went on to ask whether he could use Bulk Extractor on a .raw file acquired by DumpIt. In FOR526, one of the things we teach is using Bulk Extractor to parse memory for artifacts such as email addresses, URLs, and Facebook IDs (among others). If you aren't using BE in your cases, you owe it to yourself to give it a try. At the bargain price of free, it's something we can all afford. I told my student:

In a larger sense, though, Bulk Extractor can be used on any image file of any format that doesn't use compression (it won't natively handle EnCase .E01 compression, for instance). But otherwise, just point Bulk Extractor at the image file and go to town. That's one of the things that makes BE so magical. If you have an SD card or USB drive from a device that uses some unknown filesystem, BE can still do its magic because it doesn't try to understand the filesystem at all. Same goes for memory: it's just doing pattern matching, so the underlying container structure doesn't matter.
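The core idea is easy to demonstrate. The snippet below is a toy stand-in for a couple of BE's scanners (BE's real scanners are far more robust and stream data in overlapping chunks): it just regex-matches raw bytes, with no filesystem parsing at all.

```python
import re

# Rough stand-ins for Bulk Extractor's email and URL scanners. These
# regexes are simplistic and for illustration only.
EMAIL_RE = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
URL_RE = re.compile(rb"https?://[A-Za-z0-9./_%-]+")

def scan_image(path):
    """Scan any image -- disk, memory, unknown filesystem -- by reading
    raw bytes and pattern matching. Container structure is irrelevant."""
    hits = {"email": set(), "url": set()}
    with open(path, "rb") as f:
        data = f.read()  # real tools stream in chunks instead
    hits["email"].update(m.group().decode() for m in EMAIL_RE.finditer(data))
    hits["url"].update(m.group().decode() for m in URL_RE.finditer(data))
    return hits
```

This is exactly why compression breaks the approach: the patterns have to exist as literal bytes somewhere in the image for the matcher to find them.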

If this sort of thing is up your alley and you want more information, come take FOR526 at an upcoming event. We introduce Windows memory forensics and cover it in sufficient depth to immediately apply memory analysis skills in your investigations. Rather than focus purely on theory, we ensure that you walk away with skills to hit the ground running.

Saturday, November 16, 2013

Or at least until the security problems that may or may not be there are resolved. I don't care where you sit in the Obamacare debate. Whether you think it's a good idea or a bad idea doesn't matter. If you're an infosec professional and you aren't talking about the security of healthcare.gov to your friends and business associates, you're falling down on the job. Who, you? Yeah, I'm talking to you. As an infosec professional, you have unique insight into security problems that the general public doesn't have.

Can the US Government procure IT security successfully?
In a speech today, the president admitted that "one of the things [the US government] does not do well is information technology procurement." Having worked around government IT for years, I think that's a gross understatement. But okay, so at least he knows we suck at IT procurement. Surely we do a better job at information security, right? I mean, security is probably separate from "IT procurement" in the president's mind. So I'm sure they've got the security of healthcare.gov worked out.

Or maybe not…
Earlier in the week, HHS sources noted that public and private sector workers were operating 24/7 to get the site fully functional. Certainly they're following best coding practices while working 24/7. I'm sure there's a project plan, complete with regression tests, so nothing bites us from a security perspective. After all, when was the last time re-coding something introduced a bug? But the interesting part is that for all the talk of fixing the site to ensure that it is available, I rarely hear people talk about security (or how security can be ensured in such a rapidly changing code base).

But that’s not the worst of it!
The person at HHS responsible for deploying healthcare.gov didn’t know that end-to-end security testing hadn’t been completed when the site went live on October 1st. He testified to Congress that details of existing security problems had been hidden from him (literally claiming that he didn’t get the memo). This points to a clear failure in the security of the site when the person making go/no-go decisions isn’t “in the know” on critical security issues. When asked if he thought that healthcare.gov was as secure as his bank website, he refused to answer and said instead that it “complies with all federally mandated security standards.” Whoa! WTF??? Hold the phone… you want me to put my personal data on the site when you have no confidence in it? Yeah, that’s pretty much insane. Based on this alone, healthcare.gov should be taken offline. That is, until such a time as government officials can answer under oath that it is as secure as my online banking site (which I think, by the way, is a pretty low bar).

But are there really critical security issues?
I’m guessing that there are. The site is vast and complex, and there have already been hushed reports of information disclosure vulnerabilities. The fact that vulnerabilities were discussed in closed sessions in Congress tells me that there is something to hide. I’m guessing that “something” is huge. I’ve performed security testing on similar sites of lower complexity and found serious vulnerabilities. If you’re thinking that the government contractor who developed healthcare.gov is better than those I’ve had the privilege to test, just remember that the same contractor can’t keep the site online under even moderate load. How sure are you that their security engineering is better than their availability engineering? Remember, this question isn’t rhetorical: you’re literally betting the confidentiality of your private information on the answer.

Call to arms
Hopefully this has given you some food for thought. I’d like to point out that I haven’t performed any security testing on healthcare.gov (I don’t have a CFAA letter and I’m too pretty for jail). However, if this has gotten you thinking, then spread the word to those who will be using the site. Better yet, call your congressman and demand independent end-to-end security testing of the site. The fact that the site went live without it is a huge failure, and it’s one we can’t afford to continue.

Wednesday, November 13, 2013

This is a non-technical post and doesn't have anything to do with security (other than the people involved). Last night I got off the plane in SLC to speak at the Paraben Forensics Conference. My Twitter feed was blowing up with the hashtag #ada. A great guy in the infosec community, @erratadave, was losing a family member to a long fight with kidney failure. She's only four years old. Her remaining kidney had finally given out, and they didn't expect her to make it through the night. She apparently wanted to see her name in lights, trending on Twitter.

To see the outpouring of support from the infosec community, check this twitter link. It was unreal. I couldn't believe how many people, and how many big names (Dan Kaminsky, RSA, etc.), came through and tweeted out support for this little girl. I don't think the tag ever trended, but the support was absolutely inspiring.

Seeing this level of support was just awesome. If you are part of this community, give yourself a pat on the back. We may have our disagreements, but we do a great job of supporting each other when it's really on the line. If you're successful in this field, you're probably a type A+ personality. I'm not proud of it, but I've sacrificed family time for work too many times to remember. So take a minute today (if you happen to actually be at home) and hug your spouse and your kids. Tell them how much you appreciate their support. Thanks for being a part of this community. You make it great.

Friday, November 1, 2013

I saw this story about fake Affordable Care Act (ACA) sites come through my news feed today. I love this story. I actually tried to find a few fake sites using Google searches, but I didn't find any immediately (and I'm lazy, so I just started writing this instead).

I've been using fake insurance/health benefits sites in my social engineering attacks for years. It's one of my "go to" techniques. But of course, here the attackers are taking advantage of the fact that everyone has seen information about the ACA in the news. I'll be starting a new SE engagement this month and you can bet I'll be mentioning the ACA in my emails and phone calls (I'm opportunistic like that...).

But if you missed the story before (or thought it was hogwash), pay heed. Real attackers (you know, guys without a CFAA letter) are using this. If you run a security program, now might be the time to let your employees know to be on the lookout for this specific attack. You might think that a gentle reminder about generic security and social engineering threats would be more effective than homing in on a specific attack. I'm here to tell you that's not the case. It's been my experience that warning about specific attack scenarios has a much higher (short-term) rate of return.

Do your users a favor: tell them to be suspicious if contacted about ACA-related issues, particularly if they're not being directed to a .gov website. Tell users not to rely on seeing their company name in a URL. I regularly create subdomains in my SE sites, so users will click on http://companyname.evilsite.com. I find the warnings telling users to check for "secure" sites to be relatively hilarious. It takes next to nothing to get an SSL cert for your site. Any attacker worth their salt has purchased a certificate, so this isn't a reliable check either. Bottom line: your users need to know what's currently being used in attacks. Yes, they should always be vigilant. But I view these warnings like knowing where the local speed traps are. You should always drive the speed limit, but knowing where the speed traps are will help you avoid getting a ticket even if you do live on the edge.
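If your security team wants to show users exactly why the "company name in the URL" check fails, a few lines make the point (all domain names here are hypothetical):

```python
from urllib.parse import urlsplit

def naive_check(url, company="companyname"):
    """The check users actually do: 'I see my company's name, must be legit.'"""
    return company in url

def hostname_check(url, legit_domain="companyname.com"):
    """Check the registered domain the browser will actually talk to."""
    host = urlsplit(url).hostname or ""
    return host == legit_domain or host.endswith("." + legit_domain)
```

For http://companyname.evilsite.com/login, the naive check passes while the hostname check fails: the browser is talking to evilsite.com, no matter whose name appears in the subdomain.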

Tuesday, October 29, 2013

So, I suck at blogging consistently. In my defense, it's been a tough month (but that's another story for another time). This post is a follow up to two previous posts. In the first post, I made an argument for bug bounties. My good friend Lenny Zeltser posted a response, making a couple of good points, which I addressed in a follow up post. But I failed to address Lenny's question, deferring that to a second follow-up. Unfortunately, that took almost a month to write. For the sake of completeness (and those too lazy to click the links), Lenny's comment/question was:

While some companies have mature security practices that can incorporate a bug bounty program, many organizations don't know about the existence of the vulnerability market. Such firms aren't refusing to pay market price for vulnerabilities--they don't even know that vulnerability information can be purchased and sold this way. Should vulnerability researchers treat such firms differently from the firms that knowingly choose not to participate in the vulnerability market?

I addressed everything but the last question (I think) in the last post. But Lenny addresses a serious ethical concern here. Should we as security researchers treat firms differently based on their participation in (or knowledge of) the vulnerability market? There is an implied question here that may be difficult to examine: namely, how do you as a security researcher determine whether the firm has knowledge of a vulnerability market?

I would propose that one way to confirm knowledge is an explicit "we don't pay for bugs" message on the website. This implies that they know other companies pay for bugs, but they refuse to lower themselves to that level. IMHO, these guys get no mercy. They don't give away their research (their software), so I'm not really interested in giving mine away either. Ethically, I think I'm good here to release anything I find (and fire for effect).

Generally, I treat any company with a disclosure policy (but no bug bounty) in the same category as those who simply refuse to pay. If you have a published disclosure policy, it doesn't pass the sniff test that you don't also know about bug bounties. Even if there's no explicit policy on paying (or not paying) bug bounties, the omission itself signals that you're not paying. Bad on you. Again, I argue for no mercy using the same "your time isn't free, why should mine be" argument.

In the two categories above, it's pretty easy to slam a company by using full public disclosure or third party sale. What about when neither of these conditions have been met? What sorts of disclosure are appropriate in these cases? Is third party sale of the vulnerability appropriate?

In my opinion, this can be handled on a case by case basis. However, I'm going to take the (probably unpopular) position that the answer has as much to do with the security researcher as it does with the target company. For instance, I would expect a large vulnerability research firm to exercise some level of responsible disclosure when dealing with a software company that employs two full time developers. I would hope that they would work to perform a coordinated disclosure of the vulnerability.

However, I don't think an independent vulnerability researcher with no budget has much motivation to work closely with a large software vendor that has no disclosure policy. If the software firm is making money, why expect an independent researcher to work for free? The security researcher may find himself in a sticky situation if the company has no public bug bounty. Does the company have an explicit policy not to pay for bugs? Is the lack of a disclosure policy just an oversight?

The independent researcher might prefer to give the vulnerability to the vendor, but also has rent to pay. In this case, should the researcher approach the vendor and request payment in exchange for the bug? This seems to be at the heart of what Lenny originally asked about. Clearly this is an ethical dilemma.

If the researcher approaches the vendor asking for money, only three possible outcomes exist:

The vendor agrees to pay a fair (market) price for the vulnerability

The vendor offers payment, but at well below market price

The vendor refuses to pay any price (and may attempt legal action to prevent disclosure)

Two of these outcomes are subpar for the researcher. Assuming they all have equal probabilities of occurrence (in my experience, they don't), the answer is already clear. Further, in the two bad cases, the security researcher may have limited his ability to sell the vulnerability to another party. That may be due to pending legal action, or because enough details were released to the vendor to substantiate the bug that the vendor is able to discover and patch it independently.

So my answer to Lenny's question is a fair "it depends." I'm not at all for a big corporate entity picking on the little guy. But if the tables are turned, sounds like a payday to me (whether or not the existence of a vulnerability market can be provably known).

Only one question remains in my mind: what if there is no bug bounty, but because the attack space for the vulnerability is very small, there is also no market for it? Well, in this case, disclosure is coming; it's just a question of whether the disclosure is coordinated with the vendor. I don't have strong opinions here, but I feel it's up to the researcher to evaluate which disclosure option works best for him. Since he's already put in lots of free labor, don't be surprised when he chooses the one most likely to bring in future business.

Thursday, October 3, 2013

I recently wrote another post on the state of security vulnerability research. I discussed my reluctance (shared by many other researchers) to work for free. To that end, I encouraged the use of "bug bounties" to motivate researchers to "sell" vulnerabilities back to vendors rather than selling them on the open vulnerability market. One key point is that simply setting up a bounty program doesn't work unless the rewards are competitive with the open market prices.

I expected some whining from a couple of software companies about my refusal to test their software for free. I got a couple of emails about that, but what surprised me more was the response I got from a trusted colleague (and friend) Lenny Zeltser. Lenny wrote:

While some companies have mature security practices that can incorporate a bug bounty program, many organizations don't know about the existence of the vulnerability market. Such firms aren't refusing to pay market price for vulnerabilities--they don't even know that vulnerability information can be purchased and sold this way. Should vulnerability researchers treat such firms differently from the firms that knowingly choose not to participate in the vulnerability market?

As luck would have it, I'm actually at a small security vendor conference in Atlanta, GA today. I polled some vendor representatives to find out whether or not they are aware of a bug bounty program for their software. I also asked whether they are aware of the vulnerability market. The results were fairly telling. First, let me say that this is not a good sample population (but was used merely for expediency). Problems I see with the sample:

These vendors self selected to attend a security conference. Most of them sell security software. They are probably more "security aware" than other vendors and therefore may have more inherent knowledge of security programs (vulnerability market and bug bounties).

The people manning the booths are most likely not app developers and probably not involved with the SDLC or vulnerability discovery.

The poll showed that less than half of the vendors surveyed are familiar with the vulnerability market, and the vast majority do not implement bug bounties. To be fair, many were confident that, being security companies, they don't suffer from insecure coding practices. Therefore, their products don't have vulnerabilities and there's no reason to think about a bug bounty. Lenny's assertion appears correct. The organizations unaware of a vulnerability market probably aren't mature enough to implement a bug bounty. But some organizations are aware of the market, and yet they still don't want to implement a program.

I can only say that attitude is myopic at best. Practically speaking, if you don't have any vulnerabilities, then a bug bounty program costs you nothing. Why not implement one? You need a policy drafted, some legal review, a web page announcing the program, and some staff to respond to vulnerability reports (note: you'll need the last one anyway, so it's not an additional cost). I'd like to take the position that a bug bounty is never a bad idea. If you disagree, please tell me why. I'm serious about this. If you or your company does software development and you refuse to implement a bug bounty, please share your reasoning (post it here as a comment, if you care to, so everyone can see). If your reasoning is purely philosophical, I'm sorry to tell you that I think that ship has sailed. I'd like to collect a sample set of reasons that companies either refuse to pay bug bounties at all or want to get by without paying market prices.

In my next post, I'll address the second part of Lenny's comment: should vulnerability researchers treat smaller, immature organizations differently than those who knowingly refuse to participate in the vulnerability market. Look for that post early next week.