Tuesday, March 29, 2016

Your website is hacked. It has been hacked for a long time - way too long for any "security" training/accreditation company. You probably know I have a generally low opinion of your CEH and CHFI certifications, but please don't let that stand between us.

Your handling of your website's security is hurting our industry. If a professional licensing board for doctors were serving bogus prescriptions from its website for weeks, it would hurt doctors. The same goes for any professional group.

In the infosec profession, we started by laughing at your misfortune of being hacked (again). It is sort of ironic when a security company gets hit - and even more ironic when the incident response sucks. Then some of us started a pool on how long it would take before you cleaned the site. But now, most of us have just given up hope that you'll actually do something.

This is no longer funny. To that end, I'd like you to know that I can no longer stand idly by while you continue to serve malware from your website. Instead of continuing to laugh at you (which is tempting), Rendition Infosec is offering our assistance, free of charge, to help you clean your web server and stop serving malware. We can even help you investigate the original breach if you actually desire to do so.

Please seriously consider this offer. If you don't want us, for goodness sake, get help from someone.
You are making our industry look bad. Period. If you let Rendition clean your server, I promise not to joke about CEH for at least a month. It will be hard, but I can do it.

Earlier this week, I was asked if I had any opinion on the security implications of Apple's newly announced CareKit. Some of that made it into this WIRED article. But for the record, I figured I'd do something a little more comprehensive.

The idea of CareKit is noble, and to be fair, I haven't looked at any of the apps being developed. The framework itself is not yet available for the public to examine. I'll assume that Apple has done a great job of locking down the data within the framework itself and that the data being transmitted to Apple is secure in transit and at rest in Apple's cloud. Given that, I see two primary potential issues:

The framework itself is secure but apps built with it are not. This is highly likely to happen; I've seen some absolute train wrecks in apps we've looked at. This is true even among those that store sensitive financial and health data. Once the user gives the app permission to access data from CareKit repositories, there's little way for Apple to control what the app developer does with it after it has been shared with the app.

The framework is adopted on a wide scale and users are de facto required to use a CareKit application to get an affordable insurance rate, etc. Data collected and stored within CareKit will be a gold mine for all sorts of civil and criminal litigation. I can't wait to get my hands on a copy of the framework to see what sort of data might be available with a subpoena. Users will be shocked what can be reconstructed from simple things captured with the accelerometer (how far they traveled on foot during a particular time for instance).
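To illustrate how much can be inferred from something as innocuous as raw accelerometer samples, here's a minimal Python sketch of a naive step counter. This is purely hypothetical - it uses no CareKit API and the threshold value is an assumption - but it shows the flavor of reconstruction an analyst (or litigant) could attempt from stored sensor data.

```python
import math

def estimate_steps(samples, threshold=11.0):
    """Naive step counter: count upward crossings of an
    acceleration-magnitude threshold (in m/s^2).

    `samples` is a list of (x, y, z) accelerometer readings.
    The threshold of 11.0 is an illustrative assumption, not
    a calibrated value.
    """
    steps, above = 0, False
    for x, y, z in samples:
        mag = math.sqrt(x * x + y * y + z * z)
        if mag > threshold and not above:
            steps += 1
            above = True
        elif mag <= threshold:
            above = False
    return steps
```

Combine a counter like this with timestamps and you can estimate distance traveled on foot during a particular window - exactly the kind of thing a subpoena could put in front of a jury.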

But if we are going to trust Apple with this data, I think this makes a very strong argument for keeping the data away from prying eyes. In other words, this makes a great case for wide scale encryption on the iPhone with no back doors. What data can be subpoenaed, and what is the burden for law enforcement? Medical records are not easy to get under subpoena today; CareKit data should be no different.

At Rendition Infosec, we'll treat CareKit just like we do any other technology. That is, we won't recommend it to clients without vetting the technology for security flaws. Assuming that someone else has done the due diligence is an absolute non-starter.

Monday, March 28, 2016

Famous 'hacker' Andrew Auernheimer (aka 'Weev') is back in the news again, this time for abusing Internet connected printers.

Weev created a script to send commands to Internet connected printers and caused them to print out an anti-semitic flyer. I won't post that cruft here; its offensive content is not relevant to the substance of this post.

What did Weev do?
He sent commands to thousands of printers in US IP space that had TCP port 9100 open. This port is commonly used by printers to accept raw print jobs (PostScript, PCL, or PJL data), and it is completely unauthenticated. Weev scanned US IP address ranges for printers and sent data to the open ports he found.
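To see just how little is involved, here's a minimal Python sketch of sending raw data to a printer's port 9100. The address below is a documentation placeholder - only ever point something like this at printers you own or administer.

```python
import socket

def send_raw_print(host, text, port=9100, timeout=5):
    """Send raw data to a printer's unauthenticated raw-print
    port (TCP 9100). No credentials, no handshake - the printer
    simply prints whatever arrives on the socket.

    Only use against printers you own or administer.
    """
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(text.encode("ascii"))

# Example (placeholder address):
# send_raw_print("192.0.2.10", "This printer accepts unauthenticated jobs.\n")
```

That's the entire "hack": open a TCP connection and write bytes. There is no exploit here, which is exactly why so many exposed printers are at risk.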

Then, he laughed about it - publicly on social media. For a guy who went to prison previously for hacking, this isn't exactly a smart move. Many have argued that if he hadn't publicized his AT&T 'hack' (really a URL modification), he probably wouldn't have been convicted under our extremely broken CFAA.

Weev also seems to be taunting the FBI in his Twitter posts.

Weev's own account of what he did is here (caution: contains offensive language).

How are printers even directly connected to the Internet?
If IPv4 address space is exhausted, why do printers have public IP addresses? The answer is that many universities have obscene amounts of IPv4 address space. They also tend to lack firewalls. Some early Internet adopting companies have large IPv4 ranges as well, though they tend to be better protected.

At Rendition Infosec, we've worked with two companies in the last three years that have enough public IPv4 address space for all of their internal hosts. In these cases, we recommended emphatically that they use NAT and not give internal hosts public IPv4 addresses. With a public IPv4 address, you're one firewall misconfiguration away from having an internal host on the open Internet. Use NAT, even if you don't technically have to - you get a huge security benefit built in.
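As a quick sanity check on that recommendation, a sketch like this (using Python's standard ipaddress module, with hypothetical host addresses) can flag internal hosts that carry publicly routable addresses:

```python
import ipaddress

def publicly_routable(addrs):
    """Return the addresses that are not RFC 1918 private,
    loopback, or link-local - i.e. hosts that are one firewall
    misconfiguration away from sitting on the open Internet.
    """
    flagged = []
    for a in addrs:
        ip = ipaddress.ip_address(a)
        if not (ip.is_private or ip.is_loopback or ip.is_link_local):
            flagged.append(a)
    return flagged
```

Run something like this against your internal asset list; anything it flags deserves a hard look at whether NAT would serve you better.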

Is this illegal?
Weev has stated repeatedly that he sent commands to devices on the Internet that required no authentication and were waiting for his commands - and hence, he argues, he has done nothing illegal.

This argument is ridiculous. Someone asked me last night on Twitter if leaving the printers accessible to the Internet gave Weev some sort of implied authorization. No, of course it doesn't. By that logic, leaving your garage door open would be an invitation for people to steal its contents. Note: neither is smart, but both are definitely illegal.

Could it have been worse?
This definitely could have been worse. The printers accepted PostScript commands. While PostScript commands normally just print text, they can contain device control commands as well. As far as we know, Weev did not attempt to exploit any PostScript parsers in the printers themselves (although many have vulnerabilities). PostScript commands could have been issued to cause the printers to go into endless loops of printing out garbage until they were rebooted.

What should I do?
Well, first, if you are using public IPv4 address space for your devices, migrate to NAT. Next, inventory your systems. Impacted organizations failed SANS critical security control #2 (software inventory). Nobody in security would knowingly leave these printers exposed to the Internet without taking some action. This doesn't just go for printers; you should know every port on every IP you have exposed to the Internet. Anything less is negligent.
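A real inventory should use a proper scanner (nmap or similar), but the core check is simple enough to sketch in a few lines of Python - connect and see what answers. The port list here is illustrative:

```python
import socket

def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def exposed_ports(host, ports):
    """Check a list of ports (e.g. [9100, 445, 139]) on one host
    and return the ones that accept connections."""
    return [p for p in ports if port_open(host, p)]
```

Point a loop like this at your own external ranges and compare the results to what you think is exposed; any surprise is a finding.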

No printers with public IP addresses? You should take this as a wake up call to check for your exposure internally. After all, a malware sample could easily take advantage of your printers. Social engineering might be easier with something printed rather than emailed. And there may be liability or reputation concerns with the pages that come off your printer. Take this opportunity to segment your network and close unused ports.

Saturday, March 26, 2016

The indictments the DOJ has handed down for foreign hackers acting on behalf of their governments set a REALLY bad precedent. First it was China. Now we have Iran. And it won't stop here. If it were politically expedient, the DOJ would indict a baked potato. Just stand by to see who's next.
But here's the wrinkle: the US government ALSO employs hackers who target other nations for a variety of reasons you can read about elsewhere. This is far from a secret. If the US DOJ wants to indict other countries' governments for criminal hacking of our infrastructure, go for it. But indicting individuals working at the behest of their governments? That's a bad idea.

And I don't really care if their government told them to do something the US government would "never" do... Because:

Many suspect there's very little the US government won't do given the right circumstances

If you think these hackers have a real choice when the government says "pull the trigger" you're delusional

So here you have hundreds of US-employed hackers who signed up to do a mission. And sure, OPSEC will help them maintain their secrecy (or not - see OPM). At the time they signed up, they knew they could become targets of foreign surveillance. What most didn't know was that they would eventually face indictment from a foreign government - all because the DOJ wants to grandstand.

Perhaps Lando said it best...

Whoa - they volunteered for this

No, they didn't. End of story. They signed up to protect their country and serve - in many cases for a fraction of the prevailing industry wage. They work long hours, nights, weekends, and holidays. And overtime. Holy crap - did I mention the overtime?! If you think there's a skilled worker shortage in infosec, imagine if you had to get Top Secret clearances for all your people. Then imagine that they also have to work in a zero tolerance for mistakes environment. Yep. That's what they're dealing with (or so I'm told...).

They can't tell their families what they do. I've personally seen this dynamic end more than its fair share of marriages/relationships. Not because the couple isn't meant to be together, but because they don't understand how important the mission is that keeps the other away from home.

Scratch that. It's not that they don't understand - they can't understand. And not because their partners are idiots. It's because they can't be told how things work, what the partner does, and how just last night the partner gained access to a treasure trove of information that is reshaping national policy as they sleep the day away.

DOJ is screwing the pooch
The DOJ sets a precedent for other nations to charge our cyber operators for "just doing their job." Sure, sure, these fine men and women (some of the finest you'll EVER meet) took an oath to a lifelong obligation to protect classified information. But they didn't agree to have their lives rocked in ways they couldn't possibly predict. Luckily, no other countries have yet followed suit by indicting US government hackers. But if the DOJ continues these shenanigans, it's only a matter of time.

Our unsung heroes of national intelligence should be applauded, but at home that's just often not the case. I'll stand up to salute you, our national intelligence heroes. I'll even shake your hand. But while I'm shaking your hand, don't forget to keep looking over your shoulder - the DOJ is sticking a knife in your back.

Thursday, March 24, 2016

Oracle is releasing yet another Java patch out of cycle. Like all out of band patches, this one is rated critical. Like any good security professional, Rendition Infosec will recommend that you patch if you use Java. But we're going to take it a step further and suggest that you start planning to migrate away from your Java technology portfolio altogether.

Whoa - we spent a ton on developing our Java project
So what. Admit it was a mistake and move on. We invest in technologies all the time that end up being complete failures. The US Navy thought Zeppelins were a good idea as flying aircraft carriers in the 1930s, but abandoned the idea when it was obviously a failure (both airships crashed within two years of entering service). The US Navy didn't argue that it needed to keep building these obviously vulnerable craft just because it had invested so much.

On the civilian side, nobody wanted to fly on a Zeppelin either after the Hindenburg disaster. But there was a huge investment in airship technology. The Empire State Building had plans for an airship docking station. Great idea in theory - you could "land" in downtown New York. But they scrapped the idea when it was obvious that:

Airship travel in general wasn't safe

Nobody wanted to travel by airship, especially if they had to dock at the top of a building

Aren't you overreaching comparing Java to the Hindenburg?
No. Not at all. Yeah, I know it's hard to see the people running away in flames when vulnerable Java installations are exploited. But they are there - believe it. Nobody shouts "oh, the humanity" during mass exploitation either. But perhaps we need that to make decision makers understand the impact. I think management needs some really powerful visuals to gain understanding. Otherwise they see their investment in Java timecard/inventory/HR/blah systems and are afraid to turn away. Give them a powerful enough visual to make it real.

But Java can be made safe if only we patch/remove serialization/blah....
Stop. No, seriously. Stop. I'm going to come back to the Hindenburg. After the disaster, those who had heavily invested in airship technology tried to argue that airships could be safe if only we took x number of precautions. But once people realized those precautions weren't realistic, airships went away. It's time you do the same with Java. Start planning how to migrate away. It won't happen overnight, so start planning now.

Your Zeppelin analogies suck and are really distracting
Bah! If you don't like that analogy, try SPARC. SPARC on the desktop died as DoD recognized that it was a sinking ship and migrated desktops to Wintel, despite their massive previous investment. Some of the technology they migrated to Wintel is still less responsive than it was on the SPARC platform, but overall the migration was still a huge cost savings for DoD. Bottom line: Java is not a sinking ship - but it will blow a giant hole in the hull of your ship and you're certain to take on water (or worse) in no time.

You're a moron and you're totally wrong
Think I'm wrong? Feel free to tell me about it in the comments or on your favorite social media network...

Wednesday, March 23, 2016

Folks, can we please stop with the Badlock conspiracies? They are so far off base, I don't even know where to begin. With a little critical thought, you too can figure out that Metzmacher didn't create and then subsequently "discover" the vulnerability in SMB.

On Vulnerability Naming
In the infosec community in general, we dislike those who needlessly name vulnerabilities. But there's some value there too. Managers and decision makers remember names better than CVE numbers. When you factor in that Microsoft and other vendors use their own set of numbers to describe vulnerabilities (that already have CVE numbers) things get WAY more confusing.

Heartbleed and ShellShock probably deserved names - they were wide-reaching vulnerabilities and needed media attention to ensure they were patched as quickly as possible. VENOM? Not so much. I struggled to find servers vulnerable to VENOM. And the pre-announcement publicity on that was a clown show that got people needlessly worried. GHOST? Don't even get me started. Just because it's a bug doesn't mean it's exploitable in the wild.

Does Badlock deserve a logo and a name? If it's as serious as Metzmacher makes it out to be, then maybe. But only maybe. Why maybe? The bug is in SMB. That makes it Windows and Samba specific. Samba isn't embedded in nearly as many products as bash, and it's not clear whether exploitation will require an esoteric option setting in Samba. If the result is Remote Code Execution (RCE) in all versions of Windows and Samba by default, then it deserves a name. Otherwise, meh, it's primarily a Windows vulnerability.

On pre-disclosure announcements
Two words: douche move. I taught SEC760 for SANS in London a few weeks ago and we talked about vulnerability disclosure a little. At the time, I said that the two main types of disclosure were full disclosure and responsible disclosure. Metzmacher has officially added a third main type. I'm naming it douche disclosure.

What is douche disclosure?
Douche disclosure occurs when you have coordinated a vulnerability disclosure with a vendor. The vendors agree on a patch release date. Then you, the researcher, begin talking about the vulnerability in the media weeks before the patch date. You get extra douche points for naming the vulnerability, creating a logo, and publicizing this. For maximum douche points, you ensure that the name of the vulnerability points other researchers to the location of the bug in the underlying protocol, weeks before patch release.

Back to "The Metzmacher Conspiracy"
I'll admit that Metzmacher is an easy guy to dislike. But the idea that he somehow introduced the bug - as suggested by some on Twitter - is ridiculous.

The thing people need to remember here is that Badlock reportedly also impacts Windows. Unless you are suggesting that MS is stealing SMB code from the Samba project, the vulnerability must have been introduced in Windows and ported to Samba. Not the other way around. Period.

The question then is this: knowing that the bug is out there, assuming that it is triggered through locks (possibly file locking?), and the patch is weeks away, can another attacker duplicate the vulnerability? I think so and I'm not alone.

Dave is a smart guy, but let's drop "if motivated" from his tweet. Attackers are of course motivated to discover this bug before the patch. If this is unauthenticated RCE over SMB, then this would be a seriously wormable vulnerability (like Conficker).

What should I do?

First and foremost, stop the hype. It's silly. When you are done speculating about what it might and might not be, think about your network segmentation. Early tweets on the vulnerability said everyone on the same LAN could have administrator privileges.

Don't allow SMB or NetBIOS where you don't need it. Layer 3 ACLs make a ton of sense here. So do client firewalls. Let's assume this will be written into a worm. If so, it's important that you block SMB leaving your network (TCP ports 135, 139, and 445 should all be blocked at your boundary firewall at a bare minimum).
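As a sketch of what that boundary blocking looks like, here's a few lines of Python that generate the relevant iptables commands as strings. The chain name and rule form are assumptions - adapt them to your actual boundary device (which may well not be Linux at all):

```python
# Ports associated with MSRPC, NetBIOS session, and SMB traffic
SMB_RELATED_PORTS = [135, 139, 445]

def boundary_drop_rules(ports=SMB_RELATED_PORTS, chain="FORWARD"):
    """Emit iptables commands (as strings) that drop TCP traffic
    to the given ports. A sketch only - interface matching,
    direction, and logging are left to your environment.
    """
    return [f"iptables -A {chain} -p tcp --dport {p} -j DROP" for p in ports]
```

The point isn't the tooling; it's that three drop rules at the boundary buy you meaningful protection against a worm spreading over SMB.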

What about private VLANs? We regularly recommend private VLANs to clients we work with at Rendition Infosec. While they can pose some initial configuration challenges in some environments, we find that those environments are usually poorly architected with workstations doing jobs much better suited to servers.

Finally, planning for the likelihood that this is more than hype, set aside some extra time to test and apply patches. Note, I said test. SMB runs in kernel space. A bad patch here will result in a blue screen. Period. Not a good place to be. Also, don't forget that you may have to patch more than once, especially if MS releases a rushed patch out of band because someone releases an exploit.

Saturday, March 19, 2016

Most of the time, infosec is a pretty safe job when it comes down to it. Your risk profile normally involves things like weight gain from too much desk time, carpal tunnel syndrome, and the possibility of back problems from sitting hunched over a keyboard too long. The worst most of us have to worry about is a disgruntled sysadmin trying to run us down in the parking lot after we deliver a pentest report.

But things aren't so great in Bangladesh. You may have read last week that while Bangladesh's central bank lost $81 million in a cyber attack, it came really close to losing one billion dollars. A simple typo took the criminals down - they misspelled the word "foundation," and that seemed really odd to someone operating the SWIFT transfers.

While the attentive employee should be lauded, many infosec professionals have criticized the Bangladesh government. In fact, the governor of the Bangladesh Central Bank resigned after the incident. A few other high ranking government employees involved in the incident went with him.

Much of this is in response to lax security standards at the bank, which were called out by a number of different infosec professionals. One of the most vocal local infosec professionals was Tanvir Hassan Zoha. Unfortunately, he has gone missing after being abducted from an auto rickshaw in Dhaka, Bangladesh.

Stories like this really make me appreciate my freedom and my safety. When it comes right down to it, people may disagree with me. They may say mean things on social media (and often do). But nobody has ever planned to kidnap me (or worse) for any infosec related opinions I've communicated. Of course I'm happy about that - that's the way things should be.

I'll hope and pray for Zoha's safe return. In the meantime I'll count my blessings that while there may be haters in infosec, I generally don't risk any bodily harm for my opinions.

Thursday, March 17, 2016

Not that there was ever any doubt, but the FBI crusade against Lavabit is confirmed to have been aimed at obtaining information on Edward Snowden.

Hundreds of pages of previously sealed court documents were released. However, the pages were heavily redacted. But there was a single reference to Snowden that was missed. So now we have proof that the FBI caused the shutdown of an email service used by many over a single email account.

However you feel about that, there's a clear infosec angle here. If you are redacting something, you have to make sure you do it right. If Snowden's name and/or email were supposed to be fully redacted (which is almost certain based on the redactions), a simple keyword search would have confirmed that the documents no longer contained any references to the redacted subject. No matter how you feel about DLP, you have to admit it would have saved the FBI some face here. Redacting 99% of the references doesn't really matter if you miss one - close only counts in horseshoes and hand grenades.
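That keyword search is trivial to automate. Here's a minimal sketch that scans extracted page text for terms that were supposed to be redacted - the page contents and term list below are hypothetical, not from the actual court documents:

```python
def find_redaction_misses(pages, sensitive_terms):
    """Flag pages where a supposedly-redacted term still appears.

    `pages` is a list of extracted page texts (e.g. from a PDF
    text-extraction pass); returns (page_number, term) tuples.
    """
    hits = []
    for num, text in enumerate(pages, start=1):
        low = text.lower()
        for term in sensitive_terms:
            if term.lower() in low:
                hits.append((num, term))
    return hits
```

Run a pass like this over the final redacted output (not the source document), and a single surviving reference jumps right out - before the document is released, rather than after.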

Another possibility (besides incompetence) is that an insider redacted the documents and decided to leave in a single reference. This offers the redaction professional plausible deniability as to their intent, but still gets the name out there.

Friday, March 11, 2016

Many organizations today are addicted to insecurity and feel trapped, dare I say helpless to break away from their insecure practices. At Rendition Infosec, we see this often and have put together a helpful 12 step program to help organizations break away from their addiction to insecure practices.

12 Step program for recovering from information insecurity addiction

Admit that information security is a cost center for the organization, not a profit center. Understand that in order for information security initiatives to gain traction, we will need to engineer secure practices that align with business operations.

Come to believe that you can and must have endpoint visibility to succeed in network defense. Recognize that asset management software does not offer true endpoint visibility for security threats. Understand that capital investment will be needed to detect today’s threats on our endpoints.

Make a decision to turn over all of our logs to a central repository (SIEM) for aggregation and analysis. Because we can’t analyze what we don’t log, we will enable detailed process tracking on Windows and process accounting on Linux for we know this frustrates attackers.

Create and maintain inventories for all physical network devices and the software loaded on them. We cannot protect and scan that which we do not know about.

Admit to our executives that mistakes were made in architecting the network as it exists today.

Commit to architecting our networks in a way that provides for segmentation. Where possible, we will use layer 3 access control lists, port security, and private VLANs to minimize lateral movement.

Admit that continuing to do things "because that's the way they've always been done" is harming the organization. We must limit privileged group memberships, remove local administrator rights, set account lockout thresholds, and require minimum password lengths greater than 8 characters.

Humbly ask users to report information security failures without fear of reprisal. Similarly, we will ask our systems administrators to tell us where the proverbial bodies are buried so we may begin to undo the many sins of the past. Our system administrators shall not fear reprisal for their old insecure ways, for they did not know better and acted at a time when our organization was in the grips of an information insecurity addiction.

Make a list of systems and processes that are fundamentally insecure as they exist today (audit your network and perform continuous vulnerability assessment). Prioritize this list for remediation using a risk based assessment methodology. Agree with the executives on a timeline for remediation and ensure that budget and manpower are allocated.

Continue to find new risks and insert them into the risk-based remediation model created earlier. Recognize that without a change management process, we will have limited visibility into network changes and will always be reactive rather than proactive in our activities.

Accept that we will fail in some of our information security activities. We will always have some risk in our organization. Recognize that for our executives to understand the risk we must communicate in a language they understand, not techno jargon (which might as well be Klingon).

Seek through a comprehensive patch management process to ensure that all commercial/FOSS software is patched against known vulnerabilities. For that software which we have developed in house, we will aggressively test to ensure that trivial vulnerabilities do not exist as they would be horribly damaging to our business.

Having had a spiritual awakening, hunt aggressively through the network for attackers that have already penetrated our defenses so that we may share with them the good news of our incident response process. We know that we cannot rely on third party notification and cannot afford the brand damage and lost productivity of a failed attack remediation.
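The inventory step above can be made concrete with a simple diff between the asset database and live scan results. A sketch, with hypothetical host lists:

```python
def inventory_gaps(known_hosts, observed_hosts):
    """Compare the asset inventory against what a network scan
    actually observed. Hosts seen on the wire but absent from the
    inventory are the ones 'we cannot protect because we do not
    know about them'; inventoried hosts missing from the scan may
    be stale records or dark assets.
    """
    known, seen = set(known_hosts), set(observed_hosts)
    return {
        "unknown_on_network": sorted(seen - known),
        "missing_from_scan": sorted(known - seen),
    }
```

Run on a schedule and fed into the risk-based remediation list, even something this simple keeps the inventory honest between audits.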

You may note that these were closely written to model the 12 step programs for breaking other addictions. This is not a coincidence, nor is it designed to make light of those programs. Based on my experiences, I sincerely believe that many organizations are truly addicted to insecurity. Just like many addicts want to break their addiction, so do members of these organizations. However, for varying reasons, they find they cannot. Sometimes this is because they lack organizational support. Sometimes it is just a matter of getting executive engagement. In every case, transitioning from insecure practices to their secure counterparts requires a plan. Hopefully this provides you that plan, or at the very least a good starting discussion point.

Tuesday, March 1, 2016

In SANS NewsBites yesterday, I suggested that readers get involved in cyber security policy, especially as it applies to the Wassenaar agreement. I've written on this specific topic previously. However, one reader sent a question asking how to get involved at a more general level. I figure that if one person took the time to actually send an email, then there are probably many more out there with the same questions who just don't take the time to ask. So here goes...

For Wassenaar specific issues here are some suggestions:

Take the time to talk to friends, family, and community leaders in your sphere of influence about the proposed agreement. Many times all they hear is a sound bite. Something like "new regulations seek to limit the damage done by hackers!" To most, this seems reasonable. But it's not the whole truth. Educate those in your circle and turn them into ambassadors.

Take a few minutes to call your elected representatives. Don't email. Talk to a staffer and ask for the representative's position on Wassenaar. This is not a common talking point, so they are unlikely to have a position at all. This is your opportunity to educate. Believe it or not, most representatives I spoke with at both the state and national level had no idea what the Wassenaar agreement was. The one that did had no idea that it had anything to do with cyber. While I know state legislators don't directly influence national policy, they often have more influence than you might think.

During the initial round of talks, the government (specifically BIS) requested comments on the draft rules. Boy, did they get comments. Lots of them. Many well-formed arguments from hundreds (or thousands) of professionals and many industry groups. Keep your ear to the ground for opportunities like this to get involved. If you think you have nothing to contribute, you are mistaken. Even if you make the same argument someone else did (and even if you do so less eloquently), your voice still strengthens our collective position.

In general, I recommend that every infosec professional be associated with (or follow news from) at least one industry group/club/etc. that lets them know when important issues like these are taking shape. Obviously, keeping up with SANS NewsBites is a step in the right direction there. Another such group I strongly recommend is the I Am The Cavalry movement.

Ultimately, it's our job to keep our leaders informed. Scratch that, it's our duty. Legislators don't pretend to understand medicine on their own, they seek advice from doctors and other medical professionals. Cyber security is moving at such a rapid pace that many lawmakers mistakenly believe they know what is best without consulting experts in the field. Perhaps this is because the technology somehow feels more accessible than medicine. Whatever the case, if we don't help lawmakers craft good legislation, we have nobody but ourselves to blame when they do it poorly.