Saturday, January 28, 2017

Update: According to @cybergibbons, he checked with the hotel in the story and found out the story was fake. That's disturbing, so we won't be using this as a cautionary tale. Locking people in their rooms seemed far fetched, but locking people out of their rooms is still a huge (and believable) risk. I'm going to leave this post up for the time being. We've seen backdoors left behind by cyber extortionists before, and we think it's wise to segment networks. For those reasons alone, we think the post (though originally based on fiction) is worth keeping up.

Original post
I read a story today about a cyber attack causing safety issues, or more specifically a threat to human life. The attackers took over the key management system at a hotel with 180 guests and locked guests out of their rooms. Supposedly, even the guests in their rooms couldn't get out. This is an obvious safety issue for guests.

The hotel paid a ransom in Bitcoin to restore service. The attackers only asked for 1,500 EUR, but honestly could have probably gotten far more given the seriousness of the mayhem they were causing the hotel.

A more important note is that attackers left a backdoor in the hotel's system and tried to come back. It's not the first time the hotel has been attacked. The hotel has been attacked at least twice before, though there are no details about the previous attacks offered in the article. The hotel management also noted that they've been in contact with other hotels that have had similar ransom situations.

Takeaways

There are some interesting takeaways here. First, if you need an example of a cyber ransom attack causing a possible threat to human life, here you have it. I'll certainly be holding this one in my back pocket for future discussions with Rendition Infosec clients. The possible liability here should be obvious (it's enormous).

Perhaps a more important takeaway is that the attackers planted a backdoor in the hotel's systems. I don't disagree with the idea of paying a ransom. Do what you have to do to ensure safety. People locked in their rooms are a fire hazard. People who can't get into their rooms to get life saving medication are also obviously at risk. So paying the ransom is the right thing to do. But after you pay the ransom, organizations should aggressively hunt for attackers on the network. Machines that have been compromised cannot be reliably cleaned of all backdoors and malware. Best practices require that the systems be rebuilt (not restored from backup).

After rebuilding the computer systems the hotel decoupled some of its systems from its core network. This is very similar to best practices in ICS (industrial control systems) networks where IT (information technology) networks are separated from OT (operational technology) networks. There's really no reason for the hotel's key control system to be on the corporate network in the first place. Only bad things can happen from this extra connectivity. It's worth noting that "decoupled" could mean any number of things. We can only hope that the system is truly separated from the corporate network.

Finally, the article says the hotel will be replacing electronic keys with regular keys. This creates a whole new threat model, but the good news is that real keys never get demagnetized (as a traveler, I hate when that happens). The hotel will have to evaluate whether replacing digital keys with physical keys is best for the safety of its guests. But this is a good reminder that maybe we shouldn't connect everything to the Internet (yes, I'm looking at you, IoT).

Conclusion

This is a great educational story that could have ended very poorly. Instead, the hotel responded quickly and took steps to keep it from happening again. They pulled victory from the jaws of defeat. I'm sure there are some who will say the hotel was wrong for paying the ransom, that paying encourages the attackers to target other victims. But it's unlikely that those making these claims have ever faced such a situation.

Thursday, January 26, 2017

I read this hilarious Motherboard article the other night about a witch who claims she can use magic to drive out computer viruses. Sure, this article is humorous (at least to infosec people reading this blog). But I got to thinking that there are many products and services being seriously marketed in infosec that are no more effective than magic. In fact, on more than one occasion, a vendor has described a process to me as "nearly magic." Um, no. You've lost my attention. Go sell your magic beans to someone else. I don't need your beanstalk screwing up my security architecture.

I'm not sure which security firm (see what I did there?) will be the first to offer WaaS, but whoever it is would be wise not to take the advice of the real witch.

The witch says she "called in earth, air, fire, and water" to aid in clearing a virus. But every A+ technician working the bench at Geek Squad knows that at least three of those things are bad for computers. Something is already a little fishy about this technique.

Looking for the root cause? Feel "a snag"
However, she claims to be able to find where a virus got in. It's where she feels "a snag." I doubt I'll be using witchcraft in any Rendition Infosec incident response in the future. But the next time a lawyer asks me if I've explored every option, I'll point them to this article (all the while hoping they don't call in a witch or a psychic to uncover the logs that rolled over 6 months ago).

A little further into the interview, the witch is asked about demons infecting computers. I'm pretty sure the interviewer meant to say "daemons" instead of "demons" and everyone just got confused...

Let's get serious for a minute
This has been fun (for me at least), but let's seriously talk about the logical fallacy she uses to deal with those who discount her work. She essentially says "before you can challenge me, you must first read The Spiral Dance" (paraphrasing here). She discounts the naysayers, saying "they say incredibly stupid things," and appears to presume it is their lack of knowledge that causes them to question her magic. This is an example of the "appeal to the stone" (argumentum ad lapidem) fallacy: rather than addressing the merits of the argument, you simply declare "your argument is ridiculous" and dismiss it.

Unfortunately, I see this approach used in infosec far too often. Infosec professionals may assume that those they are arguing with lack knowledge to "see the light" as it were. Sometimes this is true, but I caution you to use this (hopefully) humorous example to learn about logical fallacies so you can avoid them in your own work.

Wednesday, January 25, 2017

There's some shocking news out of Russia this morning that the head of computer incident investigations at Kaspersky Labs was arrested for treason. According to this article, he was arrested in December and his arrest may be linked to another arrest in the FSB around the same time.

Update: Forbes is reporting that the charges stem back to an investigation into the deputy head of the FSB's information security center (CDC) Sergei Mikhailov. Moscow Times reports that the arrest was related to taking money from foreign sources and also draws a connection to Mikhailov and the CDC.

According to this source there were changes to Russia's treason laws in 2012 that include the following definition for treason:

providing financial, technical, advisory or other assistance to a foreign state or international organization . . . directed against Russia's security, including its constitutional order, sovereignty, and territorial integrity

It's worth noting that the definition above is very broad and could cover publishing (or attempting to publish) information about a Russian state sponsored hacking group. This would definitely be "technical assistance" and would definitely aid "a foreign state." Also, it's pretty easy to see how that would hurt "Russia's security." All the elements are met in a hypothetical scenario where a security researcher in Russia could be charged with treason for "outing" a Russian state sponsored hacking group.

Where is Russia on this?
The GRIZZLY STEPPE report made it clear that Russian state actors were involved in attempting to manipulate the US elections. The arrests happened in December shortly after the elections, but it would be a post hoc fallacy to assume that the timing of the arrest is connected to the elections. However, this is an obvious connection that many will jump to (including several members of the press who have called me today). The FSB is smart enough to know this connection will be assumed. If they wanted to get out in front of it, they could. But they haven't. I assess that whether or not the timing is connected, the FSB is comfortable with people assuming that it is (or at least raising the question).

Keep up the good fight
In any case, today I thank my lucky stars that I perform incident response in the United States where the government doesn't overtly try to suppress my freedoms. That's not to say I don't have a healthy fear of our government when it comes to publishing security information (more on that in a later post). But I seriously doubt that the US government would charge treason for investigating an incident involving our own network exploitation assets. On the contrary, I feel pretty confident Russia could.

For those living and working under oppressive regimes, keep up the good fight. But also remember that no incident response report or conference talk is worth jail time (or worse).

Also, to the GREAT researchers at Kaspersky Lab (I love your work), I hope this incident doesn't in any way tarnish your reputation. The actions of one individual should not be a measure of the group.

Tuesday, January 24, 2017

You may have heard that the sale of Yahoo to Verizon is being delayed. This is obviously bad news for Yahoo. But honestly, it's probably great news for infosec.

At Rendition Infosec, we've worked a fair number of breaches over the years involving newly acquired organizations. In every case, the acquiring organization failed to perform good due diligence on the purchased organization. They certainly did a financial audit, but failed to perform a security audit. The price they paid to acquire the organization was in every case too high, since it was calculated without knowledge of an ongoing breach.

So is the case with Yahoo, only the deal isn't complete yet. You can bet that Verizon will pay less for Yahoo than originally planned if the deal goes through at all.

However, this isn't the case with most acquisitions. In most cases, the purchase is complete before the breach is discovered. And unfortunately, the purchasing organization is left holding the bag in these cases. They paid more for an organization than it was worth and likely have buyer's remorse. For smaller acquisitions they might also spend more on the incident response, breach notification, and reputation damage than they paid to acquire in the first place.

Then there's the very real concern that the smaller organization is being used as a compromise vector for the acquiring organization (which likely has better security). We've seen evidence strongly suggesting this has happened in at least one case (and circumstantial evidence for other cases).

Given the importance of cyber security in today's marketplace, M&A teams would be wise to use threat hunting from external teams as a resource. The cost of threat hunting, while not cheap, is far cheaper than making a bad purchase. We anticipate that contracts can be structured such that if compromise is found, the acquired organization pays the bills for threat hunting. Even if this isn't the case, the cost is cheap insurance for the acquiring organization.

If I'm reading the tea leaves correctly, this means more threat hunting jobs by external teams. It should go without saying, but internal teams are certainly not what you want doing threat hunting for this purpose. All in all, this is great for infosec, particularly firms that specialize in DFIR. I'd be remiss not to mention to you that Rendition Infosec provides these services using our own internally developed proprietary hunting software. But whether you use us or someone else, don't acquire another organization without the due diligence of threat hunting.

Saturday, January 21, 2017

Over the last several years, I've noticed the CIO role increasingly going to people with business experience rather than IT backgrounds. Likewise, the CISO position is often given to those lacking any hard skills in security.

I'm not trying to reignite the hard vs soft skills debate, but I do think it's worth discussing how important security understanding is to being a CISO. One organization I ultimately opted not to work with hired their "CISO" from within their app development ranks. You can't convince me you're serious about security if this is your plan. If you want to put someone on a career development track to eventually be CISO, fine. But an application development manager is no more qualified to be a CISO on day one than I am qualified to take over control of a 747 mid-flight.

At Rendition Infosec we regularly meet with infosec leaders - some have a wealth of domain experience and others unfortunately do not. We're happy to help organizations of all shapes and sizes with directors from all backgrounds. However, we know that we have better outcomes when dealing with leaders who really understand the infosec problem space.

Some will argue that a great leader can be a great CISO by surrounding themselves with good people. I think this is fundamentally wrong. Honestly, I doubt this works really well in any field where you lack general domain knowledge. You could surround yourself with great electricians, but you'd still be a bad chief electrician if you didn't understand panel load or Ohm's law. And sorry if I offend any electricians in saying this, but information security is way more complicated. In other words, more domain knowledge is needed.

Armchair Experts

But infosec lends itself to armchair experts. It's in our very nature to overestimate our capabilities, especially on topics we find partly familiar. For better or worse, we all use smart phones and computers these days. So perhaps we feel like we already have more domain knowledge here than, say, pipe fitting. But this is deceptive, since the problems of infosec are hugely complex and require massive amounts of domain knowledge. My ex-father-in-law was a doctor - a legitimately smart man. He had no problem paying a plumber to fix a leaky faucet or even a painter to paint a wall. But when faced with a computer problem, he couldn't get over the idea that he was paying someone to do something he could fix himself. He couldn't understand the value because he overestimated his domain knowledge for cyber security.

Is this really a problem?

You bet it is. A great example of this is Giuliani. Say what you will about the man, he has a lot of leadership experience. He's also reasonably adept at talking to the press. So I was a little shocked to read this Market Watch article where he was totally confused about cause and effect on a topic as simple as Y2K. As far as Giuliani was concerned (as of January 2017), we digitized systems to deal with Y2K.

Obviously this is wrong, and hideously so. But it highlights the type of messaging that can occur when leaders overestimate their domain knowledge in infosec. So what, we're never going to have another Y2K, you say? True. But we will continue to face complex security challenges, and others will assume you know what you're talking about. Think I'm wrong? Market Watch didn't question Giuliani's response at all and continued giving him airtime. (The Giuliani reference isn't meant to be political in any way. It just happens to be the best high profile example I can muster of leaders thinking they know more than they actually do about infosec.)

Just get the checklist done
I regularly meet with infosec "leaders" (and I use that term lightly) who proudly present me with their list of things they will do to "finish securing our networks for good." Whoa - no professional believes they will simply check a few boxes and be done. That's insane. 100% delusional. Unfortunately, directors and executives eat this stuff up and are convinced that soon they will be "done solving the information security challenge."

Infosec is a domain that requires professionals at the helm. I don't want someone with an MBA doing hip surgery because he "understands business and has a hip he uses every day, so you know, he's qualified." We need to stress the importance of understanding the security domain to our leaders. If you already have a leader who doesn't understand the domain, you have two fundamental choices - either educate them or get a new job. Sadly, usually the latter works best.

Thursday, January 12, 2017

Who are the Shadow Brokers? Are they nation state? If so, are they Russian government or Russian government sponsored?

The timing of the latest releases certainly makes that seem likely. Along with the release of the GRIZZLY STEPPE report detailing Russian hacking, a number of Russian "diplomats" (probably spies) were kicked out of the US. Apparently we also took their summer vacation home.

One week later, the Shadow Brokers released a dump including a file listing of Windows tools supposedly stolen from US intelligence agencies. They also posted screenshots detailed here and here. It's hard not to see this as a retaliation for the US expelling the Russian diplomats. If it's not a retaliation, make no mistake about it: the Shadow Brokers knew that analysts would likely come to this conclusion.

But then in the early morning of January 12th, 2017 the Shadow Brokers dumped 61 Windows binaries (.dll, .exe, and .sys files). They claim they only dumped the 58 tools that were detected by Kaspersky AV, but the dump contained 61 files. A little anonymous birdie told me that Kaspersky only detects 43 of these files as of mid-day on the 12th. I don't like Russian software on my machines so I can't confirm whether or not that's true.

Shadow Brokers "final message"

So why dump the actual files themselves? I think that since the dump of the filenames on Sunday there have been a lot of behind the scenes diplomatic talks, and Russia decided the US wasn't taking them seriously. In this case, releasing 61 files is a good way to be taken seriously while holding back a huge cache of files. "Feel some pain, but know we can hurt you again and again and again."

Of course, I could totally be wrong about this, but it sure is fun to watch what appear to be two countries' intelligence agencies battle it out in public.

Wednesday, January 11, 2017

This message I got on LinkedIn today is a great example of how NOT to recruit infosec candidates. Nothing about this message says "I actually looked at your profile." Everything about this says "I never looked at your profile, but some word on there matches a search term."

Want to get serious about recruiting? Talk salary. Talk benefits. Tell me something that says "I read your profile." Be careful about targeting business owners. Think before you just send. Maybe ask yourself:

What am I offering that would make this job attractive to a business owner? So attractive they'd leave the business and come work for me?

If you can't answer those questions, think again about even sending your offer. When I make successful sales at Rendition Infosec, I research the organization I'm selling to. Honestly, infosec/cyber recruiters need to start doing the same.

Tuesday, January 10, 2017

I was working on a piece of malware for a Rendition Infosec client recently and noticed a novel malware sandbox evasion. Malware often tries to determine if it's in a sandbox and if so, performs different functions than when it is on an endpoint system.

This particular malware enters a loop and tries to connect to www.google.com. If the malware connects successfully, it goes on and does bad things. If not, it sleeps and does it again. And again. And again. Good news for sandbox evasion: until the malware successfully connects to Google, there's no way that you'll see anything bad. For this (and other) reasons, this malware had really low detection and had no trouble bypassing antivirus on the client's system.

The attacker knows, however, that tools like FakeDNS and a simple HTTP server could easily trick the malware into thinking it's on the Internet. But here the attacker reads the data returned and checks that the first four bytes of the response are "<!do". This string is the start of the "<!doctype html>" tag found at the beginning of the Google homepage (and many others). I checked a few sandbox programs that try to mimic the Internet, and most of them just serve up an HTML page without the "<!doctype html>" tag. I'd recommend adding this to your sandbox program if your sandbox is configurable.
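A sandbox operator with a configurable fake-internet component could defeat this specific check by serving a response whose first bytes match what the malware expects. Here's a minimal sketch of that idea; the handler and page contents are hypothetical, not taken from the actual malware analysis:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# The first four bytes must be "<!do" to satisfy the malware's prefix check.
FAKE_PAGE = b"<!doctype html><html><head><title>Search</title></head><body></body></html>"

class FakeInternetHandler(BaseHTTPRequestHandler):
    """Answers every GET with a page that passes the "<!do" prefix check."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(FAKE_PAGE)))
        self.end_headers()
        self.wfile.write(FAKE_PAGE)

    def log_message(self, fmt, *args):
        pass  # keep the sandbox console quiet

def serve(host="0.0.0.0", port=80):
    # Pair this with a FakeDNS-style resolver so www.google.com points here.
    HTTPServer((host, port), FakeInternetHandler).serve_forever()
```

Combined with FakeDNS resolving www.google.com to the analysis host, the malware's connectivity loop succeeds and it proceeds to detonate where you can observe it.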

This is a great time to remind everyone that sandboxes are useful tools but are no replacement for a good reverse engineer. If you don't have a dedicated reverse engineering staff but would like to have the capability at your disposal, talk to us at Rendition Infosec and we can get you up and running on a retainer quickly. Once you have a reverse engineering capability at your disposal, it's pretty amazing how much you'll actually use it.

Monday, January 9, 2017

Yesterday, I blogged about the Shadow Brokers dump and some takeaways. I wanted to introduce another potential takeaway. One of the lines in this screenshot published by Shadow Brokers says psp_avoidance. What is Psp_Avoidance? Is someone looking to avoid the PlayStation Portable? Paint Shop Pro? Doubtful...

I downloaded the screenshots published by the Shadow Brokers (which oddly don't include this screenshot). However, they do include the output of the find command across the dump. After searching the directory listing output for the string "psp" we find a number of different XML files (among Python files and others). Note the output below.

We have no idea what a pspFPs is, but what we see here seems to indicate that psp is a security product. We also get some idea of what antivirus products are of interest to the group Shadow Brokers stole the tools from.

This additional find command output seems to support that psp is nomenclature for a security product.
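Reproducing that search over a saved copy of the listing is mechanical. A hedged sketch (the file paths below are placeholders standing in for the real dump contents, not the actual entries):

```python
def grep_listing(lines, needle="psp"):
    """Return the lines of a `find` listing that mention needle, case-insensitively."""
    return [line for line in lines if needle.lower() in line.lower()]

# Placeholder entries; the real listing came from the Shadow Brokers dump.
listing = [
    "/resources/scripts/pspFPs.xml",
    "/resources/scripts/deploy_helper.py",
    "/resources/scripts/Psp_Avoidance.py",
]
hits = grep_listing(listing)  # picks out the two psp-related paths
```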

A few Google searches later, this one with the obvious terms "psp computer network operations", we get back as the fifth result this wonderful page from ManTech. It details the ACTP CNO Programmer Course. The course documentation indicates that PSP is an acronym for "Personal Security Product."

Thanks ManTech!

So, circling back around, what is Psp_Avoidance? Obviously, we don't know - but if the acronym is correct, it would seem to be software built to evade personal security products, which directory listings suggest (as does ManTech) are antivirus programs.

Should you run antivirus products? Sure. At Rendition Infosec we tell customers that operating without AV is like driving a car with no airbags. But this dump suggests that advanced attackers have mitigations for antivirus products - a sobering reality for organizations without defense in depth. Bottom line: AV is valuable, but the new dump casts a shadow on the effectiveness of antivirus against APT attackers.

Sunday, January 8, 2017

Shadow Brokers are at it again, this time offering apparent Windows exploits and toolkits. The timing of this does not seem coincidental. If Shadow Brokers are to be believed, they've been holding the tools for some time and just now releasing the Windows toolkits. Previously, they have released other tool sets, but nothing that operated against or exploited Windows.

The Tools
What of the tools? There's little specific information about the tools, but I've included some images here from Twitter. In the past I've embedded tweets, but when accounts get suspended, the tweets are no longer available. It's at least plausible that this account will be suspended...

This screenshot shows the price for individual components. Most interesting perhaps is the fact that the exploits contain a possible SMB zero day exploit. For the price requested, one would hope it is a zero day. The price is far too high for an exploit for a known vulnerability.

This screenshot shows a number of names of apparent tools in the dump. Of particular interest are the version numbers. Note that most of the tools have apparently been through multiple revisions, adding apparent legitimacy to the claim that these exploits are real. Though another screenshot hints at a possible zero day SMB exploit, there's no indication of which exploit names involve SMB (or any other target service).

The exploits named "touch" in the screenshot do, however, seem to offer some ideas of services that might be interesting. Of particular interest is WorldClientTouch - suggesting that perhaps one of the code-named exploits works against MDaemon's web based email client.

Finally, this screenshot seems to show some information about the tools available. Some capabilities like "GetAdmin" and "PasswordDump" seem rather obviously needed capabilities.

However, the listed plugin "EventLogEdit" is significant for digital forensics and incident response (DFIR) professionals investigating APT cases. While we understand that event logs can be cleared and event logging stopped, surgically editing event logs is usually considered to be a very advanced capability (if possible at all). We've seen rootkit code over the years (some was published on the now defunct rootkit.com) that supported this feature, but often made the system unstable in the process.

Knowing that some attackers apparently have the ability to edit event logs can be a game changer for an investigation. If Shadow Brokers release this code to the world (as they've done previously), it will undermine the reliability of event logs in forensic investigations. Cyberark recently claimed that event logs might be subject to tampering, though it doesn't appear that they were discussing the Shadow Brokers capability specifically.

The Timing

So what do we make of the timing? It's hard to believe that the timing is purely coincidental and has nothing to do with the release by US intelligence of the report on Russian hacking of the DNC.

The theory that immediately comes to mind is that Shadow Brokers are Russian or Russian operatives and the release of the Windows toolkit is retaliation for the report. Unlike previous dumps, this dump goes a bit further, showing screenshots of the GUI tools and execution of some scripts. However, it is important to note that no tools are offered for proof of the dump this time. Only screenshots and descriptions of the tools are offered.

An alternative theory is that Shadow Brokers are not Russian and are timing this release to shift the blame to Russia. There's unfortunately no way to test this theory.

Finally, there's the theory that the timing of this latest Shadow Brokers release has nothing to do with the intelligence community report. This seems the least likely. Shadow Brokers must have known that people would make this analytic leap, so even if they scheduled this release some time ago, the decision to go ahead given the release of the report on Russian hacking was done with the understanding that connections would be made.

Conclusion
Regardless of your feelings on timing or what we know of the tools themselves, this is certainly an interesting development.

Friday, January 6, 2017

At Rendition Infosec, we're trying to put some science into Cyber Threat Intelligence (CTI). We're really interested in how customers are using the GRIZZLY STEPPE (Russian APT) report. We'll publish a report on the responses (in aggregate) in the coming week. Please make your voice heard and contribute to the community's understanding of how this data was used in the real world.

Wednesday, January 4, 2017

The Joint Activity Report (JAR) on GRIZZLY STEPPE did far more harm than good. I've had numerous clients of Rendition Infosec question me on what the indicators mean and whether they should be concerned.

Concerned about Russian hackers in your network? Not based on those indicators (most of them).

Concerned about the competence of government cyber analysts (or lack thereof)? Yeah, definitely.

There are 876 IP addresses in the GRIZZLY STEPPE IOCs. There are several from Amazon EC2, and absent a date of when those IPs were actively used by Russian hackers, they are useless. Less than useless.

My favorite IP address in the report though has to be 65.55.252.43. This resolves to watson.telemetry.microsoft.com. This makes it clear that nobody competent vetted the report. Either that or someone at NCCIC has it out for Dr. Watson.
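This kind of sanity check can be automated before IOCs ever reach a SIEM: flag any indicator that sits in known shared or benign infrastructure and demand dated context before alerting on it. A minimal sketch, with illustrative CIDR ranges rather than an authoritative list:

```python
import ipaddress

# Illustrative ranges known to host shared or benign infrastructure
# (cloud providers, OS telemetry endpoints, Tor exits, etc.).
SHARED_INFRA = [
    ipaddress.ip_network("65.52.0.0/14"),   # a Microsoft range (includes Watson telemetry)
    ipaddress.ip_network("54.144.0.0/14"),  # an Amazon EC2 range
]

def needs_vetting(ioc_ip: str) -> bool:
    """True if this indicator sits in shared infrastructure and should not
    be alerted on without a date and context for the observed activity."""
    addr = ipaddress.ip_address(ioc_ip)
    return any(addr in net for net in SHARED_INFRA)

flagged = [ip for ip in ["65.55.252.43", "203.0.113.7"] if needs_vetting(ip)]
```

Running the GRIZZLY STEPPE list through even this crude a filter would have flagged the Watson telemetry address, exactly the kind of indicator that should have been vetted (or at least dated) before publication.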

What's an indicator anyway?
These indicators aren't indicators. To be more than data, an indicator has to indicate something. These fall well short of that. The report thankfully doesn't recommend blocking the IPs in the report, but also fails to say how hopelessly under-vetted they are.

My recommended action for these indicators is to ignore them until they have been better vetted. NCCIC honestly owes a large number of network operators and incident response teams a formal apology for the time they wasted responding to this farce of a report. The IOCs have triggered countless false positives in my customers' networks. Even Rob Graham noted that he had two of the IPs in his browser DNS cache. But more than that, the report communicates to corporate leadership to ignore future reports from NCCIC. One day they'll have something useful to share, and based on this clown show, nobody will be left paying attention.

Monday, January 2, 2017

In this post I want to address a problem that many CTI (Cyber Threat Intelligence) teams encounter on a fairly regular basis. CTI teams rarely deliver good news. After all, they are delivering information about cyber threats. The news is rarely great, and in less enlightened cultures, it really isn't what leadership wants to hear. At Rendition Infosec, we are regularly asked to sugar coat reports to make them more palatable to leaders. Now I'm not one for FUD (fear, uncertainty, and doubt), but I'm also not one for ignoring the truth. And often, unfortunately, that truth is "we need help." So in this post I'd like to address the question of whether it's better to tell leaders what they want to hear or what they need to hear. To help illustrate the point, I'll use a CIA review of the book "What Stalin Knew" that I came across recently. If you haven't read this review already, you should.

Tell them what they want to hear
Telling leaders what they want to hear is usually the easiest solution in the short term, but it can cause real problems in the long term. "We're doing great on security and don't have anything to worry about" is all fine and good until you have a security incident and have to explain why you were wrong (or deceitful). This approach can also increase liability if you are a contractor. For internal employees, bear in mind that there are often sacrificial lambs brought to slaughter after every major security incident. If your message is consistently "we're fine, don't worry," you may be that lamb.

Tell them what they need to hear
As pointed out in the book, this can get you killed while those who sugar coat the truth (or simply omit annoying facts) may prosper in your place. Now you aren't likely to be killed for telling the truth, but you may not be promoted and might be marginalized in your existing position. If you are a contractor, you might not be invited to return. But the good news is that this approach reduces liability and you'll probably sleep better at night doing it this way.

Take a blended approach
I personally think this is the best approach. Executives and information technology professionals suffer from intelligence fatigue. They need actionable intelligence to make decisions and operate effectively, but too much non-actionable information isn't a good thing. At Rendition, we'll happily provide full details of all intelligence available as well as all recommendations for fix actions. But we really prefer to focus on the top three to five threats and the top five to ten remediation actions. We find that beyond those numbers, we're over-saturating executives and exceeding the ability of IT organizations to act on the remediations presented. We work carefully with organizations to track their progress in actioning the intelligence provided, then present the next most pressing threats and remediations.

What's the best approach?
What are your thoughts on the approach that CTI teams should take? Continue the conversation on Peerlyst, leave a comment here, or hit me up on Twitter.

Sunday, January 1, 2017

I'm not touching on all the indicators released by the government in their report (yet). I have lots of opinions on that, stay tuned. What I really want to talk about are the sanctions and how ridiculously short-sighted they are. In my opinion, the sanctions were a publicity stunt designed to make people who don't know any better think that the administration was doing something significant.

Now I'll admit declaring 35 Russian operatives in the US persona non grata (PNG) IS significant. It takes time to find and train new embassy operatives (if you believe what you see on The Americans) and this will impact Russian intelligence for some time to come. But expect that Russia will also declare some of our "diplomats" PNG in retaliation. To not do so would nearly be an admission that the Obama administration was right about the hacking. So expect that this is a zero-sum game.

But what about the sanctions for people and companies involved in the hacking? This is where things get ridiculous. Few of the individuals named own property in the US, and honestly, if they do I'm fine with it being seized under the sanctions. But the idea that this will impact the three Russian companies named in the announcement is ridiculous.

First, they don't do business in the US. Second, one of the companies listed (Zor Security) has reportedly been closed - sanctioning a closed business is lunacy. Several people have noted that the company still shows active on the Russian business registry. But the owner of the business claims to have shuttered the business some time ago. In any case, she wasn't doing business in the US. Unless Zor Security has assets in the US, the sanction is a publicity stunt.

The Department of the Treasury uses powers from the newly expanded Executive Order 13694. If you haven't read the original order, you should start there before reading the amendment just issued. The real problem here is that the language is so broad that if Russia were to adopt the same language, they could sanction huge numbers of NSA and DoD contractors and government personnel.

Whole sections of DoD contractors are probably researching zero days, writing malware, and planning and executing cyber operations as I write this. What makes them different from the people and organizations sanctioned by the US Government? Maybe the types of cyber operations they engage in. I think the intent was to limit the scope of the Executive Order to only certain types of hacking. But read below and you'll see it's pretty clear they missed the mark.

The definitions in (A) and (B) are pretty broad. Any takers that US contractors haven't performed or materially contributed to one of these operations against a foreign government?

The problem with (C) is that there are often unintended consequences to cyber operations. Intent and impact don't always align. An investigation I was involved in with Rendition Infosec involved an apparent denial of service attack on a database server. After examining the logs, we found the attacker had been there for months, sporadically executing queries against the database (to gain intelligence and/or trade secrets). The attacker executed a query that used an inner join operation to create a sub-table and select from there. The tables involved were huge, and the join exhausted available memory. While the query was syntactically correct, it caused the server to stop responding to requests. We can all agree the DBMS should have been more resilient to memory pressure, but that's a discussion about how things should be; I'm talking about how things are. The database was serving an ERP application, so this had significant financial impact for the organization.
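To make the mechanism concrete, here's a minimal sketch of how a syntactically valid inner join can blow up an intermediate result set. The table names, schema, and row counts are invented for illustration (the real incident details weren't published); the point is only the multiplicative row explosion, shown here safely with small tables in an in-memory SQLite database.

```python
import sqlite3

# Hypothetical schema: two modest tables of 1,000 rows each.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER, region TEXT)")
cur.execute("CREATE TABLE shipments (id INTEGER, region TEXT)")
cur.executemany("INSERT INTO orders VALUES (?, ?)",
                [(i, "EMEA") for i in range(1000)])
cur.executemany("INSERT INTO shipments VALUES (?, ?)",
                [(i, "EMEA") for i in range(1000)])

# Syntactically correct, but the join key matches every row pair:
# 1,000 x 1,000 = 1,000,000 intermediate rows from 2,000 source rows.
# Scale the same pattern to tables of millions of rows and the
# intermediate result can exhaust server memory.
cur.execute("""
    SELECT COUNT(*)
    FROM orders o
    INNER JOIN shipments s ON o.region = s.region
""")
print(cur.fetchone()[0])  # 1000000
```

With larger tables, a query like this has the *impact* of a denial of service even when the *intent* was quiet data theft, which is exactly the intent-versus-impact gap discussed below.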

The intent was not to cause "significant disruption" but the impact definitely was. The powers granted in the Executive Order make no differentiation between intent and impact and this could be an issue. Even if you don't care about this because "F%$k Russia" remember that they (and others) may choose to judge US citizens by these same standards.

Regarding (D), it's official US policy to not use hacking to steal trade secrets that are then given to US companies for financial benefit. But given the number of classified documents released by the Snowden leaks, is it really unrealistic to believe this might be happening? Whether or not you think it's happening, isn't it realistic that another government might think so and start sanctioning US contractors providing material support to cyber operations?

Finally, let's not be myopic about (E). There are plenty of reports about the CIA tampering with elections. I'll let you form an opinion here, but I'm pretty sure that if the CIA (or whoever) is still tampering with foreign elections, they are using intelligence gained from cyber operations to do it. That squarely fits the definition for (E).

I'm all for responsible sanctions, but the language used here does not consider potential blowback to US citizens. Since it's largely accepted that China was responsible for the OPM hack, maybe I should be more concerned with China than Russia. And speaking of which, where are the sanctions for Chinese companies providing "material support" to hacking operations?

Make no mistake about it, this Executive Order sends a powerful message. Unfortunately, that message is "here's a road map for how to hurt us more than we can hurt you." Think I'm wrong? If you sell NSA an 0-day (or just give it away because you're a patriot), you would almost certainly fall squarely in the definitions of this EO. Think this publicity stunt makes the US stronger? Think again...