According to a Dark Reading interview with Brian Engle, executive director of the R-CISC:

“[The R-CISC] evaluated a number of different platforms to help enable information-sharing for retailers…and given the stage of [R-CISC’s] maturity, and the amount of interaction with the financial services industry, we selected FS-ISAC’s portal and technology platform. Our portal rides on the same technology as the FS-ISAC’s, but there’s a separate instantiation for retail.”

The R-CISC was created in 2014 after a rash of high-profile retail breaches, including those at Target and Home Depot. The threat intelligence portal represents a significant upgrade for the retail industry, which had previously been sharing threat intelligence, such as indicators of compromise, through email distribution lists.

The push for threat intelligence sharing is a great initiative for the retail industry. The STIX format, developed at MITRE, has become a de facto standard for threat sharing among major financial services firms over the past year. It allows an organization to share key threat data – including the addresses of remote servers used in an attack and the malware fingerprint, among other attributes – in a suitably anonymized form, without breaching confidentiality. STIX and other open threat-indicator formats are important because they allow information to be shared between different vendors’ tool sets. Contrast this with the proprietary formats of traditional signature feeds from the major anti-virus vendors, and it is clear this is a major advance for the industry.
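To make the anonymized-sharing idea concrete, here is a minimal sketch in Python. The field names are loosely modeled on STIX-style indicators rather than the actual schema, and the hashing and salting scheme is an assumption invented purely for this example.

```python
import hashlib
import json

def build_indicator(c2_address, malware_sample, source_org):
    """Build a minimal, anonymized threat-indicator record.

    Field names are illustrative, loosely modeled on STIX-style
    indicators; a real feed would follow the full STIX schema.
    """
    return {
        "type": "indicator",
        # Remote server used in the attack:
        "c2_server": c2_address,
        # Malware fingerprint, shared as a hash rather than the sample itself:
        "sample_sha256": hashlib.sha256(malware_sample).hexdigest(),
        # The sharing organization is never named -- only a salted hash,
        # so members can correlate reports without breaching confidentiality.
        "source_id": hashlib.sha256(b"salt:" + source_org.encode()).hexdigest()[:12],
    }

indicator = build_indicator("203.0.113.7", b"\x4d\x5a fake sample bytes", "Example Retailer")
print(json.dumps(indicator, indent=2))
```

The point of the open format is exactly what the paragraph above describes: any member’s tooling can parse a record like this, regardless of which vendor produced it.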

Kudos to the retail industry for its effort in implementing this threat intelligence initiative. Of course, the more cynical among us may believe that these threat intelligence initiatives are putting the cart before the horse. Case in point: this week, MWR Infosecurity published its report, “Threat Intelligence: Collecting, Analyzing, Evaluating,” which contends:

Threat intelligence is at high risk of becoming a buzzword. With so many disparate offerings and so much pressure to be ‘doing’ threat intelligence, organisations risk investing large amounts of time and money with little positive effect on security.

However, the report does take a pragmatic approach:

However, by taking threat intelligence back to its intelligence roots and applying the same strict principles, a far more effective strategy can be devised. As is the case with traditional intelligence, tackling cyber threats demands rigorous planning, execution and evaluation. Only then can an organisation hope to target its defences effectively, increase its awareness of threats, and improve its response to potential attacks.

This is good advice. At the end of the day, the value of threat intelligence is only worth what you can do with it.

News this week of the Dridex malware campaign (the newest member of the GameOver Zeus Trojan family) should serve as a reminder that you can’t stop what you can’t see. According to the research, the attack vectors remain the same as ever: in this instance, the malware is delivered through phishing emails and Microsoft Office exploits. Additionally, the attack leverages social engineering to convince its targets to enable the macros required to deliver the malicious payload.

Most interestingly, the attack would not execute until the document was closed, utilizing a method called AutoClose to evade detection.

According to the research, this technique is effective against sandbox detection capabilities. The research notes:

“As sandboxes have adjusted to also ‘wait,’ the ability of the malicious macro to run when the document closes expands the infection window and forces a detection sandbox to monitor longer and possibly miss the infection altogether. No matter how long the sandbox waits, infection will not occur, and if the sandbox shuts down or exits without closing the document, the infection action will be missed entirely.”
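The timing trap described above is easy to model. The toy sketch below is not the malware or any real sandbox; it is a simulation, with made-up numbers, of why a time-bounded monitoring window misses a payload that fires only on document close.

```python
def sandbox_detects(monitor_seconds, close_time):
    """Toy model of a time-bounded detection sandbox.

    With the AutoClose trick, the payload fires at document *close*,
    not at open, so detection requires a close event to fall inside
    the monitoring window. All values here are illustrative.
    """
    payload_time = close_time  # payload detonates only when the doc is closed
    return payload_time is not None and payload_time <= monitor_seconds

# Even a sandbox that "waits" five minutes sees nothing if the
# document is never closed inside the VM:
print(sandbox_detects(300, None))   # False: no close event, no detonation
print(sandbox_detects(300, 600))    # False: closed after the window ended
print(sandbox_detects(300, 120))    # True: close observed in-window
```

However long `monitor_seconds` is stretched, the first two cases stay undetected, which is exactly the research’s point: waiting longer does not close this gap.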

Does it seem like we’re stuck on a hamster wheel? Sandbox detection has become a popular security technology in the past five years, in part because the vendors of these solutions convinced their buyers that existing solutions created a security gap. However, as sandbox detection has become widely deployed, attackers have turned their attention to defeating these sandboxes. We’ve seen malware that monitors mouse clicks to evade detection, malware that sleeps or stalls execution to evade detection, and even malware that checks for the presence of detection engines and sandboxes to evade detection.

The only logical conclusion is that you can’t prevent what you can’t detect, so this iteration of the Dridex malware should serve as a reminder (or a wake-up call if you’re still snoozing) that attackers are becoming increasingly savvy at evading detection, even in the face of “advanced” detection solutions. Detection is not enough. It is time to take a proactive approach to security. Develop a posture based on isolation and prevention instead of reacting with detection and response.

That’s right, this network security admin posted a list of vulnerable IP addresses on a public Facebook page. There was an optimistic belief on reddit that these were the IP addresses of honey pots, but most of the comments were much more critical.

I’m practically at a loss for words. In what world does a network security admin think it is a good idea to publicly post a list of IP addresses vulnerable to a specific exploit? It turns out that these are network devices that are his responsibility, so perhaps everything will click into place for him after he blocks Internet access to them.

Or perhaps this really is all just an elaborate ploy to send traffic to a honey pot, in which case, be sure to share this blog with all of your colleagues.

The security industry was whipped into a frenzy this week with the discovery of the FREAK vulnerability, which enables a determined attacker to downgrade SSL traffic from “strong” RSA encryption to “export-grade” RSA encryption. The vulnerability exists because of a U.S. government policy from the 1990s that required weaker “export-grade” encryption in products sold to foreign countries – essentially, a backdoor.

The exploit is described in detail by the researchers at SmackTLS.com. The Washington Post describes it well:

“The export-grade encryption had 512 bits, the maximum allowed under U.S. restrictions designed to limit trade in military technologies in the 1990s, during an era often called ‘The Crypto Wars’ because of pitched political battles over deploying cryptographic algorithms that even advanced government computers had trouble cracking. But 512-bit cryptography has been considered unacceptably weak for more than a decade. Even experts thought it had disappeared.”

While the FREAK vulnerability may be newly discovered, the exploit itself is a classic man-in-the-middle attack. However, the news that Microsoft Windows is also vulnerable means FREAK is far more serious than we initially thought.
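One small client-side sanity check: modern TLS stacks have dropped the export-grade suites entirely. In Python, for example, the default client context should expose no export ciphers at all (the exact cipher list depends on the underlying OpenSSL build, so treat this as a sketch rather than a guarantee):

```python
import ssl

# Python's default client-side TLS settings ride on the system OpenSSL;
# current builds no longer even compile in the export-grade suites.
ctx = ssl.create_default_context()
cipher_names = [c["name"] for c in ctx.get_ciphers()]

# Export suites historically carried "EXP" in their names (e.g. EXP-RC4-MD5).
export_grade = [name for name in cipher_names if "EXP" in name]
print(export_grade)  # expected: [] on a current OpenSSL build
```

If that list is ever non-empty, the client is still willing to negotiate exactly the kind of weak suite a FREAK-style downgrade relies on.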

The FREAK vulnerability provides another proof point, in a long line of them, that security principles from the last millennium are no longer applicable in the cloud and mobile era, especially when you factor in SSL traffic.

The network security perimeter is often described as a moat around the castle, which is a great analogy, except that we don’t live in castles and our attackers don’t come riding in on horseback.

However, the real damage done by these sorts of vulnerabilities is to trust in the security industry. When government interference deliberately weakens security, the government is left humiliated by its own policies. The problem with FREAK is that it forces devices to fall back to old “export-grade” encryption that was inherently weak. In the 1990s, brute-forcing these weak ciphers required the computing resources of a nation state; today you can rent a cloud cluster to do the work in a few hours for $100.
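The shift is easy to show in miniature. The sketch below factors a toy semiprime by trial division; it is only an illustration of why a too-small modulus falls, since real attacks on 512-bit export keys use far heavier machinery such as the Number Field Sieve.

```python
def trial_factor(n):
    """Recover the two factors of a small semiprime by trial division --
    a toy stand-in for the Number Field Sieve runs used against
    genuine 512-bit export-grade RSA keys."""
    for f in range(2, int(n ** 0.5) + 1):
        if n % f == 0:
            return f, n // f
    return None  # n has no nontrivial factor (prime or 1)

# A deliberately tiny "modulus" (101 * 103). Scaling the same idea up to
# 512 bits is exactly the job that, as noted above, rented cloud compute
# can now finish in a few hours for about $100.
print(trial_factor(10403))  # (101, 103)
```

Once the modulus is factored, the attacker holds the server’s “export” private key and can complete the man-in-the-middle downgrade at leisure.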

FREAK raises some serious questions about how the security protocols of days past may affect us today and in the future. To prepare for this future, we must abandon some principles of the past. Latent vulnerabilities will surface in older infrastructure. Attackers will exploit any opportunity and the legacy base is full of holes, so CIOs need to continually upgrade and patch where they can. The security industry must embrace new architectures that can prevent cyber attacks even when vulnerabilities exist. That’s a big part of what we’re working on here at Bromium.

I think we are all familiar with the obvious costs of poor security: millions of dollars lost recovering from breaches, brand damage, and so on. This is pretty much the conventional wisdom nowadays. Luckily, my job includes speaking with customers that are using, or considering, Bromium to help them counter all the nasty threats that seem to get past their defenses no matter what they try or how hard they work. This seems to be a timely topic, as there have been a few good surveys recently, including one by Ponemon on security operations costs.

What I find striking when speaking with these folks and reading the reports is the amount of time, effort and money security teams put into securing their organizations, often fruitlessly. Nothing is more frustrating than busting your rear end on a problem you have spent your entire career studying and having, at best, moderate success. Security pros rarely get credit for the attacks they manage to stop, but they certainly get the blame for the attacks that get through.

While the one-time costs of dealing with a breach are certainly big, I think the long-term costs of the unproductive efforts and activities that security and IT operations teams routinely go through to try to stem the tide may be just as large over time, and may be increasing at a higher rate.

The trend over the last several years has been toward “continuous monitoring” to detect when bad guys have broken through the defenses and “remediation” to recover from the attack. I thought it would be an interesting exercise to try to measure the true cost of these efforts in each organization. With that in mind, we developed a calculator to help people compute the complete costs of their security efforts. Take a look if you have a chance, and when you think of the costs of security, don’t forget about the unsung heroes on the front lines day in and day out, and what they do to help keep us all secure.
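I won’t reproduce the calculator here, but a minimal sketch of the arithmetic behind such a tool might look like the following. Every input value and parameter name is an illustrative assumption for the example, not data from the actual calculator.

```python
def annual_security_ops_cost(alerts_per_day, triage_minutes_per_alert,
                             fully_loaded_hourly_rate,
                             reimages_per_month, hours_per_reimage):
    """Rough annual cost of continuous monitoring and remediation.

    The inputs are assumptions each reader supplies for their own shop;
    the point is the shape of the arithmetic, not the specific numbers.
    """
    triage_hours = alerts_per_day * 365 * triage_minutes_per_alert / 60
    remediation_hours = reimages_per_month * 12 * hours_per_reimage
    return (triage_hours + remediation_hours) * fully_loaded_hourly_rate

cost = annual_security_ops_cost(
    alerts_per_day=200,           # alerts the SOC actually triages
    triage_minutes_per_alert=10,  # average time to investigate one alert
    fully_loaded_hourly_rate=85,  # salary + benefits + overhead, USD
    reimages_per_month=20,        # machines rebuilt after infections
    hours_per_reimage=4,          # IT time per rebuild
)
print(f"${cost:,.0f} per year")
```

Even with these modest, made-up inputs the recurring labor bill lands north of a million dollars a year, which is the “may be just as large over time” point in concrete terms.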

Yesterday, the Government Accountability Office (GAO) released “FAA Needs to Address Weaknesses in Air Traffic Control Systems,” a report that highlights the improvements the Federal Aviation Administration (FAA) needs to make to its critical air traffic control systems. The FAA operates 100 air traffic control systems covering radar, weather, flight plans, surveillance, in-flight communication, navigation and landing. These FAA IT systems are automated and complex, comprising hardware, software and telecommunications equipment.

The GAO report identifies intentional and unintentional threats against the FAA, noting that the interconnectivity of FAA systems increases the opportunity for cyberattack. “Unintentional threats” is a bit of a misnomer, since the term simply refers to software programming errors that could negatively impact operations; it would have been less inflammatory if the GAO had called this risk. The GAO defines intentional threats as terrorists, criminals and foreign nations, and includes the possibility of Advanced Persistent Threats (APTs) from well-organized attackers.

The GAO report concedes that the FAA has taken steps to increase security, but ultimately criticizes the FAA for weaknesses in its security program, including an inability to limit unauthorized access and poor auditing and monitoring of security events. This is the result of the FAA not fully implementing its prescribed security plan.

The GAO provides recommendations for the FAA to improve information security by establishing an integrated, organization-wide information security system, including 170 specific technical recommendations. The GAO contends that the bottom line is that the security of air traffic control systems is critical and must be adequately protected.

It is great that the GAO produced this report and is pressuring the FAA to improve its security posture. There is no doubt that FAA air traffic control systems are among the most important pieces of critical infrastructure, along with public utilities such as power and telecommunications networks. The “Internet of Things” is coming into focus, yet security is so frequently an afterthought. Currently, financially motivated cyberattacks outnumber attacks on critical infrastructure by several orders of magnitude, but to assume that critical infrastructure won’t be attacked because it hasn’t been attacked is dangerous thinking. I have complete confidence when I take a flight, but for once, it is great to see an organization pushing to get in front of its security challenges before they become a serious issue.