The NSA has experienced its biggest legislative setback in nearly 40 years – but there’s a fishy smell to it, reports Danny Bradbury

*This feature was originally published in the Q3 2015 issue of Infosecurity – available free in print and digital formats to registered users*

Two years after Edward Snowden blew the whistle on the NSA, Congress has passed a law to rein in its powers. But will it really matter?

The USA Freedom Act finally passed on 2 June. It curtailed bulk data collection at the NSA, which had been vacuuming up metadata about domestic US phone calls and storing it in vast databases. This is the biggest legislative check on surveillance since the mid-1970s, when the Church Committee was formed, in the wake of Watergate, to investigate intelligence activities within the US government. It found a widespread telegram interception program, Operation Shamrock, dating back to 1945, whereby the NSA enlisted three US communications carriers to secretly provide it with copies of all telegrams sent to foreign parties. This also enabled it to gather information about US citizens on a secret watchlist.

The outcome was the 1978 Foreign Intelligence Surveillance Act (FISA), which established an oversight procedure, and the Foreign Intelligence Surveillance Court (FISC), whose jurisdiction covers activities relating to foreign intelligence.

A Long History of Surveillance

The Church Committee, recalled investigator L Britt Snider in 1999, “caused the NSA to institute a system which keeps it within the bounds of US law and focused on its essential mission.” Then came 9/11, one outcome of which was a culture of secret surveillance of US citizens and, ultimately, the biggest intelligence exposé in history.

In 2001, President Bush signed an order allowing the NSA to monitor international telephone calls and email messages without warrants to search for terrorists. The agency, which the 9/11 Commission had criticized for its overly cautious adherence to oversight rules, began collecting information without applying for FISC approval.

It later transpired that the NSA had conspired with AT&T, BellSouth and Verizon to gather a vast database of domestic telephone call records. In 2013, the Guardian uncovered a court order requiring communications giant Verizon to give the NSA metadata from calls within its systems, both domestically and to other countries. That order was obtained by the FBI from the FISC, part of an ongoing bulk telephone metadata collection program authorized by the court in 2006.

The US government was then able to use these records to find every telephone number that directly communicated with a target, then any numbers in contact with those numbers (a second ‘hop’), and, with a third ‘hop’, the contacts of those numbers as well. Making things worse, Section 215 of the Patriot Act, passed in 2001, made it easier for intelligence agencies to gather this and other information. It amended FISA, easing the collection of information on both US and non-US citizens, and expanded the scope of surveillance orders.
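The ‘hop’ expansion described here is essentially a breadth-first walk over a call-record graph. As a rough illustration (the phone numbers, records and hop limit below are invented, not drawn from any real program), the reach of a two-hop query can be sketched in a few lines of Python. Note how a single busy hub puts every one of its callers within two hops of the seed number:

```python
from collections import deque

def numbers_within_hops(call_records, seed, max_hops):
    """Return every phone number reachable from `seed` in at most
    `max_hops` hops, where a 'hop' is a direct call between two numbers.

    call_records: iterable of (caller, callee) pairs.
    """
    # Build an undirected contact graph from the call metadata.
    contacts = {}
    for a, b in call_records:
        contacts.setdefault(a, set()).add(b)
        contacts.setdefault(b, set()).add(a)

    reached = {seed: 0}
    queue = deque([seed])
    while queue:
        number = queue.popleft()
        if reached[number] == max_hops:
            continue
        for neighbour in contacts.get(number, ()):
            if neighbour not in reached:
                reached[neighbour] = reached[number] + 1
                queue.append(neighbour)
    reached.pop(seed)
    return reached  # maps each number to its hop distance from the seed

# Tiny invented example: 'hub' might be any heavily-called service.
records = [("target", "hub"), ("alice", "hub"),
           ("bob", "hub"), ("bob", "carol")]
print(numbers_within_hops(records, "target", 2))
# → {'hub': 1, 'alice': 2, 'bob': 2}   ('carol' is three hops out)
```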

Court Decision

Jim Sensenbrenner, who penned the Patriot Act, said in 2013 that bulk collection of call record metadata was “never the intent” of the legislation. Only weeks later, the American Civil Liberties Union sued the director of national intelligence, James Clapper, and others in the government, arguing that the program must be stopped and records purged because such activities violate the First and Fourth Amendments. Its case finally succeeded in May 2015.

"You're not allowed to collect surveillance data on people without probable cause. That separates us from East Germany"Bruce Schneier, Resilient Systems

By that point, Section 215 was nearing its end-of-life, due to ‘sunset’ on 1 June; Congress was busily working on extension legislation. The USA Freedom Act had already been voted down once in the 113th Congress.

A watered-down version of the bill, sponsored by Sensenbrenner, was under negotiation. It would extend the Section 215 provisions, but with significant caveats designed to quash the bulk collection of telephone metadata.

A Red Herring

The bill failed to pass by midnight on 31 May, when Section 215 sunset, leaving the intelligence community with dramatically reduced surveillance powers. Congress panicked, and on 2 June the bill was passed. The new legislation curtailed several collection methods. It targeted not only the collection of business records under Section 215, but also National Security Letters, which the FBI can use to demand customer records from organizations including telcos while preventing them from informing those customers.

The law also placed restrictions on the use of ‘pen registers’, devices that record activity on specific phone lines. These were used to gather bulk metadata under a FISC-approved order from 2004 until 2011. The USA Freedom Act requires that these collection methods be used with specific selectors to limit the number of records gathered. It also appoints an amicus curiae to act as an independent voice in FISC hearings, which have hitherto been held in secret.

On the face of it, this sounds like great privacy reform, and a vindication of Edward Snowden’s whistleblowing. But privacy advocate and Resilient Systems CTO Bruce Schneier is highly critical: “It’s definitely vindication, but it’s also a red herring. It’s both at the same time.”

Retired NSA agent Kirk Wiebe, who worked at the agency from 1975 to 2001, has concerns about the act itself, and the adjustments voluntarily made by Obama in February 2014. He criticizes Obama’s ‘two hops’ limit: “Although collection is limited to two hops, what if the first hop from a suspected/known criminal or terrorist is the IRS? That would mean everyone who ever called the IRS is two hops from the bad guy and subject to collection,” he said. “So while pure bulk collection may end under the Freedom Act, ‘bulky’ collection is still possible.”

Plenty More Fish in the Sea

The act may have helped to quash bulk phone metadata collection using the mechanisms listed, but there are others. One of these is Executive Order 12333, a Reagan-era presidential order that carries powers similar to those of a federal law. Written long before the web existed, the order permits the gathering of both metadata and message content. Former NSA agent turned whistleblower William Binney is particularly concerned about Section 2.3C of the order, which authorizes intelligence agencies to collect “information obtained in the course of a lawful foreign intelligence, counter-intelligence, international narcotics or international terrorism investigation.”

All of this creates a huge opportunity for incidental data collection about the communications of US citizens, he warns: “If you get any US data you can keep it and distribute it, as long as you’re looking for a terrorist or a dope dealer.”

“The NSA does that under that criteria, but they keep all the data they collect,” he adds. “Then the FBI and the CIA come in and look at the data internally in the US databases for anything that they want. There’s no oversight to that.”

EO 12333 isn’t the only way to obtain information on US citizens, warns Julian Sanchez, a senior fellow and privacy-rights watcher at the Cato Institute. Section 702 of FISA, added by the 2008 FISA Amendments Act, also grants data collection powers, he points out. It is less egregious than Reagan’s order, since it still requires FISC review of collection procedures, although the court plays no role in approving individual targets.

“At last count there are 90,000 targets under the authority. To my mind that fits as cleanly as anything could the definition of a general warrant,” says Sanchez. “All of those communications are intercepted, including the communications of Americans.”

Trawling for Data

Sanchez suggests that searching incidentally-collected domestic information stored in 702-related databases is a way of gathering information without a warrant, in what has become known as a backdoor search. A safeguard against this was stripped from the USA Freedom Act’s final version. “The proposal was that, if you wanted to search these databases for American communications, you’d have to take the same steps as if you did it directly. That was unfortunately removed.”

James Lewis, director at the Center for Strategic and International Studies, has a different perspective: “The Freedom Act is useful because the NSA used to authorize itself, and that isn't how it's supposed to work.”

But ultimately, nothing much will change, he suggests: “It will change some of the procedures around collection and make the NSA and FBI jump through some additional hoops, but for communications surveillance, I don’t think it changes very much.”

The USA Freedom Act also curtails some mechanisms already ruled illegal by an appellate court, including the bulk collection of call metadata directly by the NSA. However, it leaves the data in the hands of the phone companies, and allows the NSA to query it using targeted selectors.

This worries Gene Tsudik, a professor in the computer science department at the University of California, Irvine. “This stuff represents a treasure trove of information, and an attractive target for attacks,” he says. “I believe that if metadata has to be kept for some time, it is best to split it in a way that neither NSA nor the phone company can make sense of it, without cooperation.”

There are cryptographic techniques for that, but the act makes no provision for them as it stands. In any case, the NSA remains legally able to collect bulk metadata and (in some cases) bulk content on foreign targets, collection which incidentally sweeps up significant amounts of data on US citizens inside the country.
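Tsudik’s suggestion of splitting retained metadata so that neither custodian can read it alone amounts to two-party secret sharing. A minimal XOR-based sketch follows (the record format is invented for illustration, and a real deployment would use a vetted scheme, not this toy):

```python
import secrets

def split(record: bytes):
    """Split `record` into two shares; each share alone is random noise."""
    share_a = secrets.token_bytes(len(record))            # one-time pad
    share_b = bytes(x ^ y for x, y in zip(record, share_a))
    return share_a, share_b

def combine(share_a: bytes, share_b: bytes) -> bytes:
    """Only both custodians together can reconstruct the record."""
    return bytes(x ^ y for x, y in zip(share_a, share_b))

# Hypothetical call-metadata record, split between two custodians.
metadata = b"+1-202-555-0173 -> +1-202-555-0199, 4m32s"
a, b = split(metadata)
assert a != metadata and b != metadata   # neither share reveals the record
assert combine(a, b) == metadata         # cooperation recovers it
```

Because each share is XORed with fresh random bytes, compromising either custodian alone yields nothing; the trade-off is that every authorized query now requires both parties to cooperate.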

Should bulk data collection be a part of the US surveillance machine? “Against innocent people? No,” says Schneier. “That’s not what democracy does. You’re not allowed to collect surveillance data on people without probable cause. That’s not one of the things we do. That separates us from East Germany.”

The battle between privacy advocates and surveillance hawks in the US has been long. And difficult. And it isn’t over yet.

Timeline of Key Events

1945

Government approaches telcos to secretly provide telegrams for national security purposes

1952

NSA formed

1975

Church Committee formed to investigate intelligence activities in the US

2014

ACLU appeals decision

2015

ACLU wins case in appellate court

USA Freedom Act passes


Windows Server 2003: End of the Road

It’s the end of Windows Server 2003 as we know it. Do you feel fine? asks Johna Till Johnson

Unless you’ve been living under a server rack for the past three years, you’ll be aware that on 14 July Windows Server 2003 reaches its end-of-life. Microsoft will no longer provide general support, bug fixes, or security patches for the OS. The company will no longer even report on security flaws in WS 2003, and will cease to update or support the endpoint security tools offered for it.

If you’re among the estimated nearly two-thirds of organizations (according to AppZero) that still have WS 2003 in the enterprise, it’s not too late to take action. You have more options than you may realize, but it’s imperative to tackle the problem now.

There are three main issues that will hit on 15 July. First is security: unsupported WS 2003 machines will create a huge vulnerability in your enterprise. As of early June, 25 WS 2003 vulnerabilities had been documented in 2015, compared with 26 in the whole of 2014. These range from denial-of-service (DoS) flaws to buffer overflows and code-execution issues. So far, they’ve been patched, but that’s not going to happen going forward.

“We’ve seen an uptick in scans, of hackers trying to take inventory to find out who’s running these systems,” says Chris Strand, senior director of compliance and governance at endpoint and server security firm Bit9 + Carbon Black. So the chances are extremely high that your systems will be hit in the 30 days immediately post end-of-life.

But it gets worse. The second major issue is compliance. Virtually every organization is subject to regulation – such as PCI, HIPAA, or Dodd-Frank – and most regulations require vulnerabilities to be patched within 30 days of discovery, something that’s not possible if patch updates aren’t happening.

“We’ve seen an uptick in scans, of hackers trying to take inventory to find out who’s running these systems” – Chris Strand, Bit9 + Carbon Black

Moreover, if an organization is running outdated or unsupported software, it can be subject to additional fines and penalties. So regardless of whether your systems are actually compromised, you’ll fail your next compliance audit.

Finally, there’s the issue of cost. The cost of supporting an obsolete OS is high and will keep rising, driven by everything from the extra work required to keep the system running to the outmoded hardware it’s likely running on. And for enterprises large enough to negotiate a custom support agreement (CSA) with Microsoft, fees can be exorbitant, starting at $1,500 per server per year and compounding annually. (Note that CSAs are only available to organizations that already have a remediation plan in place.)

Supporting the WS 2003 operating environment will continue to be a slow drain on your resources, consuming time and effort you could have devoted to something else. The bottom line is that inaction is both dangerous and expensive. This is one deadline you can’t afford to ignore.

What’s The Plan, Stan?

There are several remediation strategies for the WS 2003 end-of-life issues. The most obvious fix is to migrate applications off it. But to where? One option, of course, is to migrate to later OSs, most likely WS 2012.

Another is to take the opportunity to move to the cloud, specifically Microsoft’s Azure. The challenge is that there may not be enough time. Re-architecting applications to run on a different OS (or porting them to the cloud) takes planning and effort. Apps still running on the old system are often hard to uproot, rewrite, or replace for a variety of reasons: close customization to the OS; a lack of application vendor support; or a lack of in-house staff to do a rewrite. So unless you have relatively few applications, migration is probably not a near-term solution.

Another approach is to replace your old applications entirely, relying on software-as-a-service (SaaS) or other solutions. For instance, rather than porting your elderly custom CRM application to WS 2012, you might opt to transition to, say, Salesforce.

Microsoft may negotiate costly custom support agreements with large organizations to extend support for WS 2003

Moving to SaaS is an option that IT professionals should seriously consider, ideally as part of an overarching cloud strategy. But once again, timing doesn’t permit this approach as a quick fix.

What’s left? You could attempt to isolate and protect systems by segmenting behind firewalls, load balancers or other systems that can filter connectivity. This will improve security from low-level and external attacks, but will be less able to protect from application-level attacks that exploit previously undiscovered OS-level flaws, or threats propagating within the protected space. This approach also has the weakness of making systems and the applications they support less reachable by the lines of business.

At the extreme, systems can be placed off-net entirely. This could apply in some healthcare, manufacturing, and other scenarios, for example when a system controls a machine tool or a piece of lab equipment via a dedicated or embedded 2003 server. However, the number of systems that can actually operate off-net is shrinking fast as systems increasingly depend on connectivity.

What’s left? Fortunately, many security vendors have developed security products that use ‘defense-in-depth’ techniques such as virtual patching, application control, endpoint control, and ongoing monitoring to keep the servers protected beyond the end-of-life deadline.

Beefing up security by implementing such systems has two advantages. First, it buys you time to develop a more overarching strategy that covers not only WS 2003 but all your computing platforms. Most likely this will involve some combination of infrastructure-as-a-service (e.g. Azure), software-as-a-service, and private cloud. Since it’s a big shift, you’ll want to take your time planning and executing this strategy.

Putting It All Together

So if you’ve still got apps running on WS 2003, what should you do? The answer depends on your environment. If there aren’t many, and they aren’t a critical part of your environment, you can migrate them to WS 2012 or Azure. Or, you can replace them with a SaaS solution, assuming your WS 2003 environment is sufficiently contained for this to be feasible in the few days remaining.

Migrating to WS 2012 is one option facing IT teams

If your environment is more extensive than you can handle via migration or replacement, you can segment the servers (or take them offline entirely), assuming this doesn’t affect usability. Note, however, that this is strictly an interim fix: you’re still liable from a compliance standpoint, and you’re still vulnerable to some forms of attack.

You could also invest in defense-in-depth solutions that provide both protection and compliance validation. This approach buys you time, and also moves you in the right direction from a security standpoint.

Assuming you opt for a solution other than migration or replacement, how much longer should you plan to keep your WS 2003 machines operational? The answer once again depends on how heavy your dependence on WS 2003 is. If your environment is extensive, you should accelerate your migration or replacement strategy, because securing and managing an obsolete OS (and its associated applications and hardware) is likely costing you quite a bit. If your environment is more limited and/or self-contained, you may be able to support the servers for longer.

A good rule of thumb is 30 months on the outside. That is, regardless of your situation, you should be off WS 2003 by 2018. Many of the security vendors won’t commit to supporting the platform beyond 2018, and even if they did, it’s almost certain that your hardware and overall architecture will be obsolete.

And remember, that’s the outside: if you can wrap up a migration or replacement strategy by the end of 2015, so much the better. You’ll have more time, energy, and resources to focus on doing something truly innovative for your organization.

Taking action doesn’t necessarily mean an emergency forklift upgrade. There are plenty of options for buying yourself time and staying protected and compliant.

Options for WS 2003 EOL Remediation

Server migration – Migrate your applications to up-to-date servers, most likely WS 2012. Consider this if you have a limited number of servers and do not yet have a cloud strategy in place.

Cloud migration – Migrate your applications to IaaS cloud services, most likely Azure. Consider this if you have a cloud strategy in place, and application migration makes sense in that context.

Application replacement – Replace your applications with more modern ones, including SaaS. Consider this if you have a cloud strategy in place, and application replacement makes sense in that context.

Segmentation – Move your WS 2003 machines behind firewalls and gateways, or in an extreme scenario, take them offline entirely. Consider this if there is a limited set of users accessing applications, but remember you won’t be protected against app-layer threats or compliance concerns.

Augment with defense in depth – Add defense-in-depth technology to your security arsenal. Look for products that can provide real-time monitoring, centralized logging and enforcement, compliance, and the ability to integrate into your strategy going forward. Plan a gradual migration away from WS 2003 over the next six to 30 months.

This feature was originally published in the Q3 2015 issue of Infosecurity – available free in print and digital formats to registered users

Tales from the Crypt: Hardware vs Software

Encryption is never out of the spotlight in this industry, but the methods that businesses can deploy to encrypt their data are wide-ranging. Daniel Brecht examines the pros and cons of the various solutions on offer

With the use of mobile devices booming, and attacks against government networks and business databases escalating, data security has become a hot topic for IT system managers and users alike. Today’s technology advances have spurred a number of solutions to meet the requirements and budgets of everybody who needs to secure a machine, from a simple home computer to the most sophisticated network. Sorting through so many different solutions, however, can be overwhelming.

Whether to opt for software-based or hardware-based solutions is the first decision users are faced with, and it’s not an easy choice. Although both technologies combat unauthorized access to data, they do have different features and must be evaluated carefully before implementation.

Software-Based Encryption

Software encryption programs are more prevalent than hardware solutions today. As they can be used to protect all devices within an organization, these solutions can be cost effective as well as easy to use, upgrade and update. Software encryption is readily available for all major operating systems and can protect data at rest, in transit, and stored on different devices. Software-based encryption often includes additional security features that complement encryption, which cannot come directly from the hardware.

Hardware encryption is most advisable when protecting data on portable devices

The protection granted by these solutions, however, is only as strong as the security of the device’s operating system. A flaw in the OS can easily compromise the protection provided by the encryption code. Encryption software can also be complicated to configure for advanced use and, potentially, can be turned off by users. Performance degradation is a notable problem with this type of encryption.

Hardware-Based Encryption

Hardware-based encryption uses a device’s on-board security to perform encryption and decryption. It is self-contained and does not require the help of any additional software. Therefore, it is essentially free from the possibility of contamination, malicious code infection, or vulnerability.

When a device is used on a host computer, a good hardware-based solution requires no drivers to be loaded, so no interaction with the processes of the host system is required. It also requires minimum configuration and user interaction and does not cause performance degradation.

A hardware-based solution is most advisable when protecting sensitive data on a portable device such as a laptop or a USB flash drive; it is also effective when protecting data at rest. Drives containing sensitive data like that pertaining to financial, healthcare or government fields are better protected through hardware keys that can be effective even if drives are stolen and installed in other computers.

Self-encrypting drives (SEDs) are an excellent option for high-security environments. With SEDs, encryption happens on the drive media itself, where the disk encryption key (DEK) is securely stored. The drive controller uses the DEK to automatically encrypt all data written to the drive and decrypt it as it is read. Nothing, from the encryption keys to the authentication of the user, is exposed in the memory or processor of the host computer, making the system less vulnerable to attacks aimed at the encryption key.

“Software is easier because it is more flexible and hardware is faster when that is needed” – Bruce Schneier, Resilient Systems

Hardware-based encryption offers stronger resilience against some common, not-so-sophisticated attacks. In general, malicious hackers won’t be able to apply brute-force attacks to a hardware-encrypted system as the crypto module will shut down the system and possibly compromise data after a certain number of password-cracking attempts. With software-based solutions, however, hackers might be able to locate and possibly reset the counters as well as copy the encrypted file to different systems for parallel cracking attempts.
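The asymmetry described above can be modeled as a retry counter. In hardware, the counter lives inside the tamper-resistant module alongside the key; in software, it is just state an attacker can copy or reset, which is why offline brute force remains viable there. Here is a toy Python model of the hardware behavior (the PIN, attempt limit, and key below are invented for illustration):

```python
class RetryLimitedVault:
    """Toy model of a crypto module that wipes its key after too many
    bad PIN attempts.

    In a hardware token this counter sits inside tamper-resistant
    silicon next to the key; in a software implementation the same
    counter is ordinary state that an attacker can copy or reset.
    """
    MAX_ATTEMPTS = 10

    def __init__(self, pin: str, secret: bytes):
        self._pin = pin
        self._secret = secret
        self.failures = 0

    def unlock(self, guess: str):
        if self._secret is None:
            raise RuntimeError("module wiped")
        if guess == self._pin:
            self.failures = 0
            return self._secret
        self.failures += 1
        if self.failures >= self.MAX_ATTEMPTS:
            self._secret = None        # zeroize the key material
        return None

vault = RetryLimitedVault("4071", b"disk-encryption-key")
for guess in range(10):                # ten wrong guesses...
    vault.unlock(str(guess).zfill(4))
print(vault._secret)                   # None: the key has been destroyed
```

Against a software equivalent, an attacker simply copies the encrypted blob to other machines and retries indefinitely, exactly the parallel-cracking scenario the article describes.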

Hardware solutions, however, might be impractical due to cost. Hardware encryption is also tied to a particular device and one solution cannot be applied to the entire system and all its parts. Updates are also possible only through device substitution.

The Debate

There is no single answer to companies’ encryption needs, stresses Bruce Schneier, CTO of Resilient Systems and creator of the blog Schneier on Security.

“Software is easier because it is more flexible,” he says, “and hardware is faster when that is needed. My preference is software, because I tend to use general purpose hardware and specific software. So my email encryption, web encryption, IM encryption is all software. But the software might use the hardware-specific instructions in the Intel chip for encryption.”

Nico de Corato, telecommunication engineer and founder of DubaiBlog, has a similar approach when it comes to choosing encryption solutions: “Each device requires software in order to operate, and a device is nothing else than hardware. You could not really choose between hardware and software; there is a total interdependence.”

The solutions used depend on the needs of the individual, he adds: “In some cases you can choose, and often I’m the one preferring software solutions. For example, if you need to buy a new GPS, the best solution is probably to download the application on your existing devices (eg a smartphone). This way, you are always going to have the GPS with you; you are going to pay much less than buying a new GPS-device. The same goes for encryption software solutions.”

Companies need to consider factors like impact on performance, backup, security and available resources to decide on proper encryption implementation. Businesses should consider the risks involved in losing the data they handle, but also how long they need to keep data encrypted and how well they would be able to manage encrypting keys with each solution.

It is also important, in light of the strict regulations that have been issued for data protection (such as HIPAA and PCI), that businesses choose the solution that allows them to be fully compliant.

Different considerations guide the choice. According to Tom Brennan, managing partner of cybersecurity consulting company ProactiveRISK, “In the commercial space it is mostly about price. With .GOV clients, it is more about getting data classification right.”

When budget is a concern, the choice is often to steer away from hardware-based solutions in favor of software solutions that can be implemented across the board. In addition, “rather than deal with the expense and inconvenience of being locked into upgrading one proprietary hardware platform every few years, some prefer to use software,” Brennan adds.

Mobile working practices necessitate a considered approach to encryption for organizations

Industry Models

“Recent security breaches in multiple industries – including entertainment, retail, and healthcare – tell us that large enterprises are not paying enough attention to security best practices,” says Dan Timpson, CTO at certificate authority DigiCert.

“In addition, many of these companies lack basic security measures. According to the Online Trust Alliance, 90% of data breaches in 2014 could have been prevented.”

The potential consequences of a data, privacy, or network security breach are very significant. According to the Ponemon Institute’s 2014 Cost of a Data Breach Study, data breaches now cost $3.5m on average, and the cost per lost or stolen record is $145. In a previous report, the Ponemon Institute put the average cost of a lost laptop at $49,246, with hardware replacement accounting for only 2% of that figure. Encryption could cut this sum by $20,000, as it prevents criminals from accessing and using the data contained within.

Sometimes the size of a company makes for a different approach. Larger companies with massive security departments and large budgets probably already have a valid security posture, but smaller businesses might not be treating the issue with the importance it deserves. Many SMB managers believe that only larger companies are the target of malicious hackers. That couldn’t be further from the truth.

Symantec’s 2014 Internet Security Threat Report showed that companies with fewer than 250 employees accounted for more than half of all targeted attacks (61%) in 2013, an 11% increase on the previous year. A study by the National Cyber Security Alliance reported that 20% of small businesses fall victim to cybercrime each year.

Timpson comments that “using software-based encryption is straightforward and may be more approachable for a smaller business that does not have an on-site IT admin dedicated to data security measures.”

However, this is a valid solution only if companies realize that “the need to outsource this work brings the responsibility to find companies that are trustworthy and vet their products and services to ensure a good fit,” he adds.

Timpson believes that “introducing a third party increases the potential for vulnerability.” Although hardware encryption is perceived as more costly due to the upfront investments that are needed to supply an entire organization, Timpson believes that “in the long run, hardware can reduce costs with IT labor, user productivity, and licensing fees.”

So, what is the best solution to protect data? It depends on where you are trying to protect it.

When data is at rest, especially on removable devices, hardware-based encryption is often best. By encrypting entire disks or USB drives, everything is secure, from directories to file systems to content. Authentication should be done prior to booting so that not even the OS is started if the user is unauthorized. However, smaller companies might find it hard to justify the expense even for the added security and better systems performance.

If data is in transit, however, file-level encryption is more appropriate: files and folders are individually encrypted and stay encrypted regardless of how and where they are transferred. Though often less expensive, these solutions are prone to a number of drawbacks, from performance degradation to less-than-perfect protection, as hackers can exploit OS and memory vulnerabilities that expose encryption keys.

New theories and technology advances could eventually change that. As Andrew Avanessian, executive vice-president of consultancy and technology services at endpoint security software firm Avecto, explains, “AES instruction sets, which are included in some modern processors, allow software encryption to be more efficient and perform better without relying on dedicated hardware but applications need to be optimized to take advantage of this.”

Choosing carefully is paramount, but there is no place for indecision. Avanessian believes the real problem is that “some organizations can get hung up about encrypting devices and end up delaying implementations. With the increasing portability of devices and BYOD, it is important to get some level of encryption setup as soon as possible.”

Encryption is necessary and is the best mechanism to protect data confidentiality, integrity and authenticity. It minimizes the chance of security breaches and adds layers of protection to secure data. The costs of data loss and the requirements dictated by law should be incentive enough for all businesses to adopt encryption, regardless of whether it is hardware-based or software-based.

This feature was originally published in the Q2 2015 issue of Infosecurity – available free in print and digital formats to registered users

You can insure yourself against cyber-attack, says Danny Bradbury, but be warned, prices are going up

Information security is all about mitigating risk. Savvy CISOs spend their time asking what threats their organizations face, how deeply these threats would sink the company, and how likely they are. In that sense, CISOs are suitable customers for another industry that’s also about risk management: insurance. So why haven’t the two overlapped more?

A Young Industry

At its heart, insurance is about the paid transfer of risk. Companies have been happily transferring their risk to insurance firms since the late 1600s, when the first fire insurance offices were established in response to the Great Fire of London.

Traditional risks, such as fire, flood, theft and injury, are well understood. On the other hand, the insurance industry is just getting to grips with cyber-risk.

“When we started looking for the first time at the issue of cyber-attacks and determining whether it would make sense to have a cyber-insurance policy, it was all green space,” says Ty Sagalow, former COO for AIG e-Business Risk Solutions, now running Innovation Insurance, a consulting firm and brokerage based in New York.

“It was new. There was no actuarial data on frequency or severity. We had to figure out how to create insurance for a risk that we knew very little about,” Sagalow adds.

How do insurers manage that risk? Fifteen years is a heartbeat in the insurance business, and so cyber-insurance is still a relatively unknown quantity. Insurance companies typically assess risk by analyzing past claims, but in a sector with such a short track record and quickly changing characteristics, that isn’t always easy. As such, the market is segmented from the general insurance pool and covered by special policies.

Strong security controls, including PoS encryption, are often prerequisites for companies seeking cyber-insurance

Insurance companies identify and quantify the exposure, pinpoint the threats, and then make a model of how likely those threats are to occur.
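The modeling step described above, quantifying exposure and then estimating likelihood, reduces in its simplest actuarial form to expected loss = frequency x severity, plus a loading for expenses and profit. A minimal sketch with invented figures (the function names and the 30% loading are illustrative assumptions, not industry standards):

```python
def expected_annual_loss(frequency_per_year: float, severity: float) -> float:
    """Classic actuarial starting point: how often an event happens,
    multiplied by what it costs when it does."""
    return frequency_per_year * severity

def indicated_premium(expected_loss: float, loading: float = 0.3) -> float:
    """Add a loading factor to cover expenses, model uncertainty, and profit.
    The 30% default is a hypothetical figure for illustration."""
    return expected_loss * (1 + loading)

# Hypothetical client: a 5% annual breach probability, $2m average cost.
eal = expected_annual_loss(0.05, 2_000_000)   # expected annual loss: $100,000
premium = indicated_premium(eal)               # indicated premium: $130,000
```

The difficulty the article describes is precisely that the frequency input is far less certain for cyber-risk than for fire or auto, which is why the loading, deductibles, and re-insurance matter so much in this market.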

“You have a lot less certainty about that frequency than for more established classes like life insurance or auto insurance, but that isn’t to say that there isn’t any information in the insurance industry,” says Tom Regan, the cyber practice leader for insurance broker Marsh. “We spend a lot of time and money looking to assess the probability of events.”

In any case, insurers have an appetite for risk. After all, that’s what makes them money.

“You don’t go into a new piece of business or a new product because you fear losses. You go in because you hope you’ll be able to make money. If there’s no risk, there’s no reward,” says Sagalow.

Insurers can mitigate their risk in cyber-insurance as they do in other industries, by splitting risk with other insurers, and by using re-insurance, where the insurers are themselves insured by other companies. They can also impose high deductibles.

What Policies Look Like

Typically, cyber-insurance coverage falls into two broad areas: first-party and third-party. First-party coverage focuses on the internal costs incurred by the company. It covers expenses such as hiring an attorney to deal with the legal ramifications of a breach, and taking on a PR firm to help get out in front of the problem and minimize reputational damage.

Savvy companies will bring in an external data forensics team to find out where the breach occurred and remediate it. A first-party component will also cover the cost of notifying individuals, and potentially even setting up contact centers to field calls from worried customers.

In addition, first-party coverage typically includes the restoration of lost data, and it will usually compensate companies for lost business, says Michelle Lopilato, director of the cyber-risk solutions practice at North American insurance brokerage Hub International.

“If your network was breached and goes down, and you’re no longer able to transact business for a certain amount of time, that loss can be replaced,” she says.

"The insurance industry can deal with risks that grow significantly if they can be appropriately compensated" – Tom Regan, Marsh

Lost business protection won’t kick in as soon as a disruption occurs. The most aggressive contracts start around six hours after the disruption, but the waiting period can stretch to 18 hours for companies with poor business continuity operations, she says.

Third-party coverage handles the fallout from cybersecurity events that affect other companies and individuals. Typical coverage here includes network security liability (if your network is used to infect another company’s systems, for example). It will also cover financial harm to other individuals from a company’s privacy breach, along with the cost of post-breach regulatory investigations and fines.

Rising Prices

Insurance companies are getting better at assessing clients’ cybersecurity readiness, according to Sagalow.

“The industry has matured,” he says. “We have determined that, at least for now, we can continue to underwrite the severity and frequency of cyber-risks, despite the mass attacks that we read about almost every day, whether that be Target, Home Depot, Sony, or others.”

But for how much longer? There are signs that cyber-insurance companies, which have blossomed in number over the last decade, are reacting to industry events.

“The industry is continuing to change and expand, and in certain areas of the business, we see some prices going up,” says Regan. “The insurance industry can deal with risks that grow significantly if they can be appropriately compensated for them. As long as they can get an adequate premium, they’ll be OK.”

Where are those prices likely to hit hardest? Look to retail, says Lopilato.

“We are seeing some tightening of the reins as far as underwriting goes. The insurers are looking for best-in-class controls and securities, and if they don’t have them, then they are getting declinations,” she says.

These controls include encryption at the point of swipe for credit card collection, along with point-of-sale network monitoring, up-to-date security patching, and PCI compliance. “If you can satisfy those four bullet points first, then you do have several carriers still willing to write this business,” she adds.

Hiring a PR firm to deal with the media fallout of a breach is one cost typically covered by first-party cyber-insurance

Companies seeking these policies may even find themselves battling to get coverage. Such was the case with Atlantis National Services, a New York state-based title insurance agency licensed in 32 states. It obtained a cyber-insurance policy through Lloyd’s of London after the Department of Homeland Security mandated a data center controls standard, SSAE 16, for title insurers. Atlantis co-founder Radni Davoodi began looking for cyber-insurance not long afterwards.

“It gives banks further comfort using us versus our competitors,” says Davoodi, but he adds that it wasn’t easy to obtain. The industry is still so new that choices are limited, he warns.

“It took us a while to get a quote, and the only broker who was able to provide us with one gave us a cookie cutter and said ‘take it or leave it’,” he says, recalling that there was no option on the deductible or the protection offered. “We’re hoping that in the coming years it will be a little more selective on our end.”

Do customers want insurers to take on their business, though? The Corporate Executive Programme, which monitors corporate security threats, surveyed 40 of its members for a January 2015 report on cyber-insurance. Only one in five companies had dedicated cyber-insurance, it found, and this was among a base of large companies, half of which measured revenues in the billions.

Cyber-insurance adoption also differed dramatically between the US (where 40% of companies had it) and the UK (where just 13% of firms did).

Regan says that regulation makes a big difference in adoption on either side of the Atlantic. In the US, where data breach notification is mandated in 47 states, more companies will be driven to adopt cyber-insurance because of the potential fallout should a breach occur.

"Insurers are looking for best-in-class controls and securities, and if [clients] don't have them, they are getting declinations" – Michelle Lopilato, Hub International

Dr Claudia Natanson, chair of the CEP, suggests another factor.

“There was a point given by one of our legal members, stating that it wasn’t so much that the US had breach notification that promoted greater take up, but that unlike Europe, US [companies] could suffer class action suits,” she says.

European adoption will likely rise, adds Natanson. But with four in five companies still not buying dedicated cyber-risk insurance, there is a lot of potential headroom in this young industry.

Sagalow, who first took steps into cyber-insurance 15 years ago, is already expanding into something new: bitcoin. The cryptocurrency, which is slowly disrupting traditional financial markets, has been beset with security problems. Now, secure bitcoin storage companies are offering peace of mind to users who might hold thousands of dollars’ worth in a software wallet. He is working with them to insure their customers against losses incurred in this strange new electronic asset.

“Bitcoin is the new cyber,” Sagalow says, recalling how the internet represented a fundamental shift in how business was done in 2000. “Fast forward 15 years later, and the same thing is happening again.”

Wherever you find uncertainty and risk, you’ll find a forward-thinking insurer exploring ways to underwrite it. The customers may take a little while to come, but if they’re aware of the dangers they’re facing, they’ll arrive eventually.

This feature was originally published in the Q2 2015 issue of Infosecurity – available free in print and digital formats to registered users

Cybersecurity from Capitol Hill to Whitehall

Proclamations on cybersecurity and government surveillance have ignited political discourse in early 2015. Wendy M. Grossman cuts through the spin to find out what this means for technologists and citizens

Early 2015 saw multiple announcements on cybersecurity from US president Barack Obama and British prime minister David Cameron. Both were responding to recent events, primarily the Sony hack (which is estimated to have cost the company $15m) and the shootings at the French satirical magazine Charlie Hebdo, in which 12 people were killed. The two countries also announced joint ‘cyber wargames’, whereby teams from each country will attack the other to test critical infrastructure.

Obama proposed improving cybersecurity information-sharing between government and the private sector; criminalizing the overseas sale of stolen US financial information; extending the RICO laws to include cybercrime; and requiring national data breach reporting.

The Electronic Frontier Foundation has described the resulting Cybersecurity Information Sharing Act (CISA) introduced in March as a “terrible surveillance bill” because it would allow companies to launch countermeasures against attackers. EFF and the Center for Democracy and Technology also complain that the bill bypasses current privacy protections for private-sector information.

In the run up to the UK’s May general election, Cameron and the home secretary, Theresa May, proposed reviving long-contentious policies: the principle that government must be able to read all communications, and the Communications Data Bill, which opponents have dubbed the ‘Snooper’s Charter’.

These policies would add to an already substantial framework for communications surveillance established in multiple pieces of legislation stretching back to 2001. In March, in the first of a series of planned reviews, the Intelligence and Security Committee (ISC) declared GCHQ’s activities as leaked by Edward Snowden to be legal, but said the law lacks transparency and accountability and could be interpreted as a ‘blank cheque’ for the security services.

Britain’s data protection regulator, information commissioner Christopher Graham, criticized the report for a basic misunderstanding: “At one point in the report they say specifically that if citizens are relatively OK about the security services reading letters and tapping phones with appropriate authorization, then why is the internet any different?

“I thought that represented a very naïve view of what the internet actually is, because it isn’t just another communications channel, it’s the universe through which we are transacting, doing business, [running] our companies, our work, our personal life, and so on. And the idea that that has got to be left open to be inspected by the authorities, whether good or bad, just seems to me to be ludicrous.”

Meanwhile, he adds, the same politicians speak regularly about cybersecurity, but there is an incompatibility in advocating securing communications and infrastructure against myriad threats while ensuring the authorities have access. “I thought it was naïve of the committee to assume that the bad actors wouldn’t take advantage of the vulnerabilities that might be left,” Graham said.

Content: Return of the Crypto Wars

Cameron is not alone in wanting access to encrypted communications. In March 2015, FBI director James Comey asked Congress to enact legislation requiring technology companies such as Apple and Google to include back doors in any encryption built into their products. Around the same time, the FBI removed from its website advice that consumers should protect their data by using encryption.

There are two kinds of objections to key escrow: ideological and technical. Susan Landau, professor of cybersecurity policy in Worcester Polytechnic Institute’s Department of Social Science and Policy Studies, describes the technical objection.

Both Obama and the newly re-elected Cameron have pushed cybersecurity up their governments' agenda

“Communications tools built with law-enforcement access to the keys will not be secure against skilled opponents. But the use of encryption where the end-users – and not Apple or Google, for example – hold the keys, means, as the president observed, ‘Even though the government has a legitimate request [to wiretap], technologically we cannot do it.’”

Herb Lin, a senior research scholar for cyber-policy at Stanford University, says the ideological objection is simpler: individuals should have full control over access to their own communications.

However, Lin says, it’s impossible to build a mechanism that will stay locked down forever, because computing continues to advance. But 1000 (or even 100) years of security is long enough, while 10 seconds is clearly inadequate. “Somewhere between 10 seconds and 100 years there’s a crossover point,” he says.

Performing a risk analysis based on specific proposals and an estimate of how long the cryptography is likely to be secure in that application “would at least get the debate off the theological argument and on to the technical argument.”

Lin also raises a practical issue: company helpdesks are already overwhelmed with requests to retrieve and reset user passwords. “I will bet anything that two to three years after all this unimpeachable encryption gets deployed, they will start offering recovery features,” he says. “People will not want to lose access to their data.”

Likely true, though privacy advocates will argue that choosing a (possibly third-party) key recovery scheme isn’t the same as having one forced upon you.

With six years of communications intelligence in his background, John Walker, visiting professor at the School of Science and Technology at Nottingham Trent University, takes a view more in line with law enforcement concerns about ‘going dark’.

“I respect privacy and I would like to have privacy,” he says, “but what we have to look at with a liberal attitude is whether we can allow insurgents – we’re talking about a global insider threat of which we have to be aware. If the price I have to pay to keep my legs attached to my torso is privacy, then so be it.” The key, he says, is ensuring that the use and exercise of such powers is proportionate and appropriately limited.

Metadata: Bulk Collection

The requirement for ISPs to retain communications traffic data for up to two years was implemented in the EU Data Retention Directive in 2006, a response to the 7 July 2005 London bombings. The UK had long favored data retention; a giant centralized database to store the flow was mooted as early as 2000. The 2012 version of this, the Communications Data Bill, would have required communications service providers to collect many forms of data that they currently do not, and to disclose it to a substantial range of actors, with oversight that opponents such as the Open Rights Group argued was insufficient. The bill failed politically.

In April 2014, the European Court of Justice ruled that the Data Retention Directive conflicted with the European Charter of Human Rights, thereby invalidating the supporting national legislation. In July, Parliament hastily enacted the Data Retention and Investigatory Powers Act (DRIPA) to ensure that ISPs did not begin deleting the stored data during the summer recess.

Security is irrevocably weakened when keys are handed over to a third party

A key element of the Communications Data Bill as proposed in 2012 was ‘black boxes’ to be installed on ISPs’ networks and through which traffic would pass; these would extract the metadata for retention. The Internet Service Providers Association complained about the likely loss of speed; advocacy organizations such as the Open Rights Group compared the idea to a man-in-the-middle attack.

Retention practices such as this raise further questions as to whether the principles of necessity and proportionality are being applied in the filtering of data – ‘filtering’ being a term used in early versions of the CDB, though never explained in satisfactory detail. There is a grey area here around intelligence demands for data that isn’t necessarily used in legal proceedings. This is problematic, as is the general opacity of the law.

That opacity is one thing everyone can agree on. “They already had Tempora,” says Privacy International researcher Richard Tynan. “The police and security agencies said ‘we want this, so make it lawful for us to do what we’re already doing’. To have that as the mindset is the opposite to me of any legal course I’ve done on the rule of law. They will say they can’t do it without authorization, but we don’t know what cannot be authorized by Theresa May. To me, that is an unconstrained system.”

Will Semple, vice-president of security operations for Houston-based Alert Logic and a veteran of both intelligence and financial services, has seen both sides, yet does not think that Cameron’s proposals are “a balanced approach, especially from a military intelligence background and understanding the risks I experienced day in and day out.”

Simon Crosby, co-founder and CTO of the security company Bromium, also calls the government’s policies poorly conceived: “Once [technology companies] start to engineer for security, the ability to provide arbitrary back doors to arbitrary interested parties is just not going to happen – or at the very least Theresa May will have to answer the question of, ‘should Yahoo! provide a back door to China?’”

Crosby, too, agrees that today’s genuine threats require access to data in some circumstances, but he’s scathing about the methods proposed. “They’ve only come out with two so far. One: break everything and be a bad guy, really terrible. Two: they’re going to pass stupid laws for technologies that are literally impossible to develop.”

What’s needed instead, he says, “is a rational debate about how one could legitimately achieve and deliver data in the national interest – and not just the UK and US. The internet is a big place; it’s an international problem.”

This feature was originally published in the Q2 2015 issue of Infosecurity – available free in print and digital formats to registered users

Lizard Squad: Original Pranksters

Whether meddling kids or a serious menace, Lizard Squad is part of a phenomenon that is here to stay, concludes Fahmida Rashid

In 2015 alone, Lizard Squad has already claimed responsibility for hijacking the websites of Malaysia Airlines, Lenovo, and Google Vietnam.

The group’s sole motivation for these attacks – based on its Twitter activity – appears quite simple: because they can. The group considered the Christmas attacks against Xbox Live and PlayStation Networks to be a “sort of a game” carried out for its own amusement, a self-proclaimed Lizard Squad member said in an interview with the UK’s Sky News.

Lizard Squad is becoming a household name because it is prolific, but also because its activities are so visible, says Andrew Hay, director of security research at OpenDNS. The group has relied mainly on DDoS attacks to cause server outages at heavily-trafficked sites. It hasn’t defaced actual company websites, but rather redirected users to spoofed websites to make it seem like the pages are compromised.

“I hesitate to call Lizard Squad hacktivists,” Hay says, noting that hacktivists generally have a call-to-action, a reason for engaging in the attacks. ‘Pranksters’ is a better description, he suggests.

Cyber-attackers are generally categorized by their motivations. Nation-state attackers further the government’s goals, whether that extends to espionage, sabotage, or theft. Cyber-criminals are financially motivated and typically focus on stealing money or valuable assets. Hacktivists are ideologically motivated, and their activities are typically designed to draw attention to something they care about, such as promoting free speech or protesting child pornography. Lizard Squad doesn’t quite fit into any of these brackets.

For the Lulz

Lizard Squad’s activities may evoke memories of LulzSec, an earlier hacking group which took on some high-profile organizations and websites in 2011. Even though LulzSec picked its targets based on ‘lulz’, or laughs, it clearly had hacktivist roots.

LulzSec was originally a disillusioned offshoot of the hacker collective Anonymous intent on exposing “just how bad things were” with security at some of the world’s largest brands, Hay explains. Lizard Squad, in comparison, is “doing what it can for fun.”

Dismissing Lizard Squad just because it doesn’t have an ideology or employ sophisticated attack methods would be a bad idea, says David Francis, a cybersecurity officer at Huawei UK. He adds that it doesn’t matter that the group isn’t using sophisticated tactics to disrupt operations and interfere with user experience, because the fact remains that Lizard Squad did succeed in its goals, and there was an impact on reputation and revenue.

“Whether you class Lizard Squad as pranksters or not is irrelevant; the bottom line is that all organizations, large or small, are subject to attacks,” Francis argues.

Tools of the Trade

Organizations operating online should be concerned about the methods the group uses, says Steve Armstrong, a certified instructor at the SANS Institute. Lizard Squad launches its DDoS attacks using a botnet of compromised routers belonging to home users. The group also put LizardStresser, a DDoS-for-hire tool that harnesses the same botnet, up for sale on its website.

LizardStresser is an IRC-controlled Linux bot which attempts to connect to random IP addresses on the internet using default usernames and passwords. Users who have not changed the default credentials on their routers may find their network devices hijacked into the botnet and taking part in these attacks.

The source code was eventually leaked on GitHub, and some security experts who analyzed it said it was unoriginal and unimpressive. It didn’t have to be sophisticated: Lizard Squad was able to launch its attacks successfully, and so were other people who bought the tool. Home routers are notoriously insecure, since device manufacturers can take a while to roll out security updates and users may not know how to install firmware updates, which means LizardStresser will continue to be effective.

“It remains unclear what will come of this botnet, but it’s related to Lizard Squad and is more capable than LizardStresser,” the company said.

Organizations have to understand that DDoS attacks are serious because they impact service availability and inconvenience end-users. If the gamers can’t get to the servers to play, they can get annoyed and move on to other games, Hay says.

While many organizations may work with upstream providers to fight back and try to outlast the attack duration, there is the possibility that organizations may just pay a ransom to make the attackers go away.

This can be risky, because the money “could just encourage more attacks,” Hay adds.

Attacks against Xbox Live and PlayStation Networks were a “sort of a game” said a self-proclaimed Lizard Squad member

Cyber Vandalism

During the DDoS attacks against Xbox Live and PlayStation Network in December, Kim Dotcom offered Lizard Squad 3000 free vouchers for Mega, his encrypted cloud storage service, in return for ceasing its activities. While the vouchers did stop the attacks, Hay was concerned about the message this payoff sent to Lizard Squad and other hacker groups.

The vouchers were priced at $99, and there were reports Lizard Squad sold them for $50 each, netting the group at least $150,000 in cash. Considering that DDoS attacks have been growing in volume and intensity over the past few years, a potential financial windfall may encourage more groups to launch attacks.

Vandalism and gaming remain the most popular reasons for DDoS attacks, but attacks acting as a smokescreen for data theft and extortion attempts are also on the rise, says Darren Anstee, director of solutions architects at Arbor Networks. These attacks are disruptive, can cause damage to brand reputation, and increase overall costs for the organization. “DDoS attacks cannot be considered pranks,” says Anstee.

Lizard Squad claimed responsibility for a series of website defacements, including the one against Malaysia Airlines

In the Malaysia Airlines incident, Lizard Squad modified the airline’s DNS records to point to a website under its control, and average users wouldn’t realize they were on the wrong site. This is a tactic frequently used by other hacking groups, such as the Syrian Electronic Army.

Hijacking DNS records can do considerable damage to the corporate brand, because most users and customers will not spot the difference and will assume the company’s servers have been compromised, Hay explains. And if the attackers modify the MX records for the mail server along with the DNS records, they gain access to all the email messages being sent to the company, with even more serious repercussions for the company’s bottom line.

Organizations need to work with their domain registrars to put mechanisms in place to protect themselves, such as two-factor authentication and domain locking to prevent unauthorized changes to DNS records, Hay says. Organizations should also pick registrars that have implemented DNS Security Extensions (DNSSEC), which allow users to verify that a site hasn’t been hijacked.
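One complementary defense is simply to watch for the kind of record change described above by comparing live DNS answers with a known-good baseline. A minimal sketch using only the Python standard library (it covers A records only; monitoring MX records would need a DNS library such as dnspython, and the helper names here are our own):

```python
import socket

def current_a_records(hostname: str) -> set:
    """Resolve the host's current IPv4 A records (a live network call)."""
    return set(socket.gethostbyname_ex(hostname)[2])

def records_changed(baseline: set, current: set) -> bool:
    """True if the live records differ from a known-good baseline,
    which could indicate a DNS hijack (or a legitimate change)."""
    return current != baseline

# Periodic check against a baseline captured when records were known-good:
# if records_changed({"203.0.113.10"}, current_a_records("example.com")):
#     alert_security_team()
```

A mismatch is only a signal, not proof of compromise; legitimate infrastructure changes trigger it too, so any alert should be reconciled against change records before escalating.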

Childish Antics

Whether or not Lizard Squad is just a group of kids with a questionable sense of humor doesn’t matter, because it is not the only hacking group engaged in these activities.

CoreSec is another hacking group engaged in similar activities. The group launched a series of DDoS attacks against Finnish financial services group OP-Pohjola from New Year’s Eve to 4 January, demanding a ransom of between 10 and 100 bitcoins to stop the attacks. At least one member of the group is a Finn, said Mikko Hypponen, chief research officer of F-Secure. CoreSec’s motives for the attack remain murky, but Twitter activity shows CoreSec and Lizard Squad consider each other supporters, if not allies, in their cyber-pranking.

"Whether you class Lizard Squad as pranksters or not is irrelevant; the bottom line is that all organizations, large or small, are subject to attacks" – David Francis, cybersecurity officer, Huawei UK

The earlier LulzSec is now defunct, with two of its leaders convicted. DerpTrolling has been active more recently, launching a string of DDoS attacks on multiple gaming companies and online gaming servers in early 2014. DerpTrolling was likely just trying to boost its collective ego and its “antics were often childish,” security company CrowdStrike noted in its latest Global Threat Report.

“Despite their immaturity, the collective was able to consistently carry out DDoS attacks on targets of their choosing, and these attacks had a real-world effect on the victims within the gaming community,” wrote CrowdStrike.

The company also noted that Lizard Squad’s antics had real-world consequences beyond the cyber-realm. The group successfully diverted an American Airlines flight carrying a Sony executive by posting on Twitter a rumor about explosives on board. The incident evokes memories of when the Syrian Electronic Army hijacked a media outlet’s Twitter account to post a false report about an explosion at the White House in 2013.

“The threat [Lizard Squad] posed to gaming companies was still noteworthy, especially when combined with terrorist threats; although they were bluster, they still had considerable real-world consequences,” CrowdStrike reported.

Analysis from Recorded Future attempted to identify members of the group by their interests, vernacular, and lifestyle to provide insight into how they operate. The company examined the group’s social media activity for patterns in language and determined the leaders and key members are from the United Kingdom, Canada, or the United States.

Even though Lizard Squad is still seizing headlines, the group’s activity has slowed since December, says Christopher Ahlberg, Recorded Future’s CEO and co-founder. This may have been spurred by Finest Squad, another group which came to light in December and started reporting Lizard Squad accounts to Twitter for abuse, he says.

Lizard Squad’s leaders and key members are most interested in guns, drugs, gaming, and hacking. The intersection of thug-life culture and pro-Nazi sentiment is perplexing, but the fact that one of the accounts associated with the group’s leaders frequently expressed such views may be an indicator of the direction Lizard Squad is heading, the company warned.

Instead of dismissing the group, it would “be prudent” to take Lizard Squad’s warnings seriously in 2015, Ahlberg said.

This feature was originally published in the Q2 2015 issue of Infosecurity – available free in print and digital formats to registered users

Phil Muncaster investigates whether an ongoing dispute between Google and Microsoft could change the way we fix security flaws in the future

At the start of the year, Microsoft and Google became embroiled in a very public spat over vulnerability disclosure. The two computing giants, never the best of friends, became more animated than we’ve seen them for some time, exchanging barbed comments via blog posts and social channels. The reason? Google’s Project Zero initiative, announced last July, and its strict rule of revealing vendors’ software bugs publicly after 90 days if they have not been patched.

So who exactly is the bad guy in all of this? Microsoft, for failing to patch as quickly as Google would like, or the Mountain View giant, for disclosing flaws before security fixes were ready? And is the ongoing dispute likely to change how the industry deals with vulnerability disclosure?

A Bit of History

It all kicked off when Google decided to release details of a Windows flaw just two days before it was due to be fixed in January’s Patch Tuesday. The bug itself was not particularly critical, needing a victim machine to have already been compromised in order to work. However, plenty of commenters let their feelings be known on the related Google Security Research forum post.

“Automatically disclosing this vulnerability when a deadline is reached with absolutely zero context strikes me as incredibly irresponsible and I’d have expected a greater degree of care and maturity from a company like Google,” wrote one user.

Microsoft then waded in with a strongly worded blog post from Chris Betz, senior director of the Microsoft Security Response Center.

“Although following through keeps to Google’s announced timeline for disclosure, the decision feels less like principles and more like a ‘gotcha’, with customers the ones who may suffer as a result. What’s right for Google is not always right for customers,” Betz wrote in that post. “We urge Google to make protection of customers our [combined] primary goal.”

This didn’t seem to deter Google, which released details of several additional Microsoft product flaws in the weeks that followed. Here’s the twist though. One batch of disclosures came about before the 90-day deadline, after Microsoft effectively told the web giant that the flaws were so small they were not worth patching. This is despite several of them – including an elevation of privilege issue and an information disclosure bug – being marked as ‘high severity’ by the Project Zero researcher in question.

The waters have been further muddied by Microsoft’s somewhat controversial decision in January to effectively make its Advance Notification Service (ANS) private. Redmond claims the decision was taken to meet customers’ evolving needs – in other words, that most firms have automatic updates or proper patching regimes which render the public blog posts and notices irrelevant. However, experts argue it was a retrograde step that could at best be viewed as an attempt to hamper transparency into product flaws, and at worst a cynical move designed to make money by forcing customers to upgrade to ‘premier’ status.

The definition of 'responsible' disclosure is something the research and vendor communities often disagree on

Who’s Right?

Google relented recently and allowed vendors a further 14-day grace period on top of the mandatory 90 if a patch is already slated for release, as well as promising not to disclose flaws on US public holidays or at the weekend. But there’s still a fair bit of bad blood about how it has handled the whole affair.

So is this a dispute we should really take sides on? For Nigel Stanley, cybersecurity practice director at consultancy OpenSky, neither firm has covered itself in glory.

“Both Microsoft and Google need to grow up and understand that great care needs to be taken in disclosing vulnerabilities in a calm, controlled way,” he tells Infosecurity. “This will reduce the opportunities for exploits to be developed and give over-worked sysadmins a chance to test and then patch their systems. Instead of throwing stones, those that live in glass houses need to give their neighbors support for the benefit of the broader industry.”

For Ed Skoudis, SANS Institute fellow, Google needs to be a bit more aware of the sheer complexity involved and the huge resources that are needed to create and test fixes for certain vulnerabilities.

“As [Google’s] systems are in the cloud with code they control, there are few hurdles to them throwing resources at a problem and getting fixes out in 90 days or less. Project Zero is a way of Google draining a swamp very quickly,” he tells Infosecurity.

“However, they don’t have the extended enterprise customer base with lots of on-premise software and legacy systems along with strict controls around applying patches,” Skoudis adds. “In some cases, 90 days is just not reasonable and a rushed fix might actually lead to more problems than it solves.”

In fact, that exact scenario has played out several times of late, most notably in August 2014, when a Patch Tuesday fix locked computers with the dreaded Blue Screen of Death.

Responsible Disclosure

Most commentators, software vendors, and security researchers agree that responsible disclosure is the best way forward. The problem is, they don’t agree on exactly what ‘responsible’ means.

Some take the extreme view that unless a flaw is made public immediately, the vendor will procrastinate, downplay its importance and possibly even use legal means to silence the researcher – while the bad guys are working on crafting attacks in the meantime. Others say the vendor should be informed privately and given a decent amount of time to fix the flaw.

However, once again the debate rages as to how much time should be allowed and for which kind of flaws, according to Secunia director of research and security, Kasper Lindgaard.

"Instead of throwing stones, those that live in glass houses need to give their neighbors support" – Nigel Stanley, OpenSky

Infosecurity asked Lindgaard what represents ‘a timely fashion’ when it comes to giving vendors a chance to fix a vulnerability, before disclosing it.

“Our policy is to give vendors six months to fix the vulnerability and issue a patch, and for a huge majority of the vendors that is plenty of time," he says. "But it is also necessary to be flexible and adapt to circumstances: you have to look at the individual vulnerability – at how critical it is, how complex it is to fix, and how widespread the vulnerable product is.”

Jim Fox, director in KPMG’s cyber team, is actively involved in pen-testing and vulnerability identification. He argues that the most important thing from the vendor’s point of view is to be transparent with its customers.

Even if a patch is not immediately available, he explains, the vendor could quickly produce a mitigation to keep customers secure in the meantime – or its customers could come up with their own temporary workarounds. Either way, Fox believes the Common Vulnerability Scoring System (CVSS) provides a ready-made, commonly understood framework that could help them prioritize newly discovered flaws.

This is essential given the sheer volume of black hats out there researching flaws, he tells Infosecurity.

“People are taking a methodical approach to identifying and exploiting vulnerabilities in widespread systems. To think only one person will find them is crazy,” Fox adds. “You don’t need to put out a press release each time you find a flaw – that’s irresponsible. But at the same time, if you alert a vendor, say they have a week or 10 days to tell their customers and announce a patch or at least mitigation, that’s fair. The vendors don’t move faster because it’s disruptive for them, so you need to make it in their best interests to do it.”

ISACA international vice president, Ramsés Gallego, agrees that greater transparency is the way forward.

Before broadcasting their findings to the world, researchers should consider the impact on vendors and end-users

“The most important thing to do in the vulnerability management dimension, from a vendor perspective, is communication,” he tells Infosecurity. Gallego believes that in the cyber era, threats will always exist – it’s not a matter of if a company faces a vulnerability, it’s when and how quickly they’ll then recognize and mend it.

A Troubled Future?

Yet for Fox, Microsoft is moving not towards greater transparency, but away from it, as witnessed by its decision in January to end its Advance Notification Service. He argues that failing to inform all customers through notifications means many won’t even be aware of vulnerabilities which malware writers are actively developing exploits for.

“The only people to get hurt will be those who need to defend themselves. Less transparency is a mistake; I rarely learn of a vulnerability through a press release,” he adds.

So what of the future for vulnerability disclosure? Can the vendor community afford to pour more resources into developing timely patches or will the quality of security fixes suffer, and patching times inevitably get longer as the sheer volume of flaws identified mounts?

“Unfortunately, there is a risk that Google may incite copycats that are maybe less wedded to a ‘don’t be evil’ philosophy,” he argues. “In future, we could have others pushing out zero days into the public forums that are incredibly dangerous without warning. And then what started out as a positive approach could turn into a major issue for everybody.”

In the meantime, it’s the sysadmins – the “poor bloody infantry” – who will be forced to pick up the pieces, according to Stanley.

“Some vendors forget that there is a world outside of their products and that sysadmins are having to test and apply patches from multiple vendors, often at the same time,” he says.

It’s difficult to foresee a time when this will ever change.

This feature was originally published in the Q2 2015 issue of Infosecurity – available free in print and digital formats to registered users

Nice and Easy Does it: ‘Back to Basics’ Hacking Methodologies

We’re all looking for the next great threat to infrastructure, but there is still a host of simple attacks we should be guarding against, says Rene Millman

The security industry likes to think it is in an arms race with cyber-criminals. These hackers are busy dreaming up the next new way of breaking into infrastructure and, as an industry, we try to find ways to defend ourselves from ever more esoteric dangers.

To a large extent this is true, but it does sometimes mean we overlook many of the threats we think we have already overcome. In the same way we have come to think of diseases such as tuberculosis as defeated (it hasn’t been, and it remains a major problem in some parts of the world), basic hacking practices can still yield results for criminals intent on stealing from organizations.

It makes sense for hackers to try something easy first when looking to gain access to infrastructure. The quick wins for these criminals can sometimes be overlooked by firms when it comes to deploying an IT security strategy.

Finding the Easy Way In

The path of least resistance is attractive to criminals. “Hackers are beginning to realize that security measures are becoming increasingly sophisticated,” says Boudewijn Kiljan, EMEA chief technology officer at Wave Systems. “This is why we are seeing fewer ‘full frontal’ attacks, and more that seek to go in through a credible sidedoor, such as an enterprise employee.”

Kiljan adds that this type of attack is becoming more prevalent, as this ‘middle man’ provides the gateway to a world of useful and lucrative information.

Richard Braganza, senior consultant at security consultancy firm Context Information Security, says that, in his experience, the path of least resistance usually involves going for the low-hanging fruit – in other words, the easy pickings that effortlessly bypass defenses and go unnoticed.

He gives the example of WiFi. When organizations set up corporate wireless networks they will tend to use the strongest security provided by Microsoft, as the security fits nicely with Windows domains. “This may sometimes be a mistake,” says Braganza. “That is unless special measures are taken.”

On the face of it, this looks secure as you have to enter the same credentials to get onto the wireless network as you would use normally to access the rest of the corporate network.

As security measures become increasingly sophisticated, hackers are always looking for a backdoor left ajar

“It ticks all the boxes for users and IT to think the WiFi is secure. And therein lies the problem. This default use of Windows’ strong enterprise security for WiFi actually leaves the company completely wide open to anyone outside the building,” Braganza says. He adds that a user does not check who the WiFi network belongs to: “Anybody could set up a fake WiFi network with the same name and ask for user credentials. Once the attacker has the user credentials they can use them on the real WiFi network and, hey presto, they now have a foothold on the corporate network.”

Getting Social and Getting In

But it is not just the simple attacks on computers and networks we are still worried about; no amount of technology can adequately defend against social engineering attacks. Defcon ran a ‘social engineering’ capture the flag contest last year and the majority of ten major US companies targeted were happy to hand out information which would be useful reconnaissance for future attacks.

“A worst-case scenario to illustrate this would be the Target compromise where the refrigeration company they used was identified and targeted as a way into Target’s network, ultimately allowing hackers to breach point-of-sale terminals and steal details of millions of credit cards,” says Paul McEvatt, senior security architect at Fujitsu’s Security Operations Centre.

Social engineering has become the mainstay of modern cyber-attacks, whether directed en masse in phishing campaigns or specifically targeted in so-called ‘spear phishing’ attacks, according to Kevin O’Reilly, senior consultant at Context Information Security.

“The reason is that, as technology evolves and security holes in systems are closed, the weakest link in the chain remains the same: the human at the keyboard,” he says. “Piquing the curiosity or engendering trust with a carefully worded email is the most universally reliable way of eliciting the clicking of a link or the opening of a weaponized document leading to the installation of a malicious backdoor on a system.”

Malware writers may look to use web browser or email attack vectors – those that the enterprise has the least control over, says Kiljan. One of the most high-profile cases of this type of attack was the RSA breach, which undermined RSA’s SecurID token technology.

“The breach was caused by an email attachment, which was likely opened by an employee due to the promise of interesting information,” says Kiljan. Once opened, the attachment acted as a stepping-stone for malware to infect the device and retrieve sensitive information. With that connection established, the attacker can gain remote access to all information stored on or connected to the device and, ultimately, to the wider IT infrastructure.

“This type of attack can occur in a distinct window, from when the vulnerable connection is first made to when developers can counter the attack with a counter-threat or patch,” adds Kiljan.

Something Old, Something New

Perhaps the main issue here is that while technology progresses to combat the latest threats, it often fails to protect against the more basic ones. Are the tighter defense mechanisms of more modern operating systems doing enough to deter cyber-criminals?

Modern operating systems have two problems they must deal with when trying to build a secure operating platform. Jeremy Demar, director of threat research at Damballa, says the first problem is that these operating systems are not completely new creations: “Even when a new version of your favorite one comes out it likely has a lot of code written decades ago,” he says.

The second issue is that users demand new features, functionality and backwards compatibility. “A good example of legacy code in modern systems is the Shellshock vulnerability,” says Demar. “This vulnerability first existed in code written in 1989.”

He adds that CVE-2014-6332 is a perfect example of what happens when you try to remain backwards compatible. “This exploit works on Internet Explorer from version 3 to 11; the code is only still around for backwards compatibility. Many times when an operating system tries to create proper defenses, users get upset and quickly find ways to turn it off. A good example of this one is the introduction of User Account Control (UAC) in Windows.”

Tackling the Problem

There is very much a need for security professionals to do more to tackle basic threats. Paul Glass, senior associate at international law firm Taylor Wessing says that, “To a degree, more can be done by security professionals, and the more sophisticated tools now available can prevent many low-level hacking approaches.” He adds that, “Risk assessment as to what data actually needs to be protected, and putting in place controls to achieve that without hampering the business, is key.”

All it takes is a well-constructed fake email; recipients can then be tricked into clicking bogus links

Glass says that these steps need to be accompanied by education and awareness of employees, as IT tools are only part of the picture: “For example, spoof emails that look as though they come from another employee but actually contain a malicious link are almost always easily identifiable by employees, but will often not be caught by IT tools. GCHQ estimates that about 80% of known attacks would be defeated by embedding basic security techniques, and education is a key part of that process.”

Criminals threaten both consumers and enterprises, and while both attacks are similar in that they involve persuading the end user to click on a link or open an email attachment, the difference is in the malicious payload.

“For consumers, it tends to be a more crude threat aimed at the masses with a low expectation of click-through for a backdoor trojan that is only sophisticated enough to do its basic job without creating too much noise,” says Context’s Kevin O’Reilly.

“For the enterprise, the lure is often slicker with a more intelligently worded email appearing more trustworthy or with more context, leading often to a more advanced exploitation method (perhaps a rare and therefore more valuable zero-day vulnerability in a web browser or plugin) and almost always leading to a more advanced backdoor threat.”

Glass says that, at the enterprise level, there is usually a bigger prize, so the time put into more advanced social engineering can be worth it for a hacker. He adds that, “This is particularly the case if the target has access to large volumes of personal data, valuable commercial assets, or large volumes of credit cards or bank accounts, and a hacker can remain undetected, slowly extracting data, over a long period of time.”

Glass says that education remains key at an enterprise level; also critical are access to data (restricting access to only those who really need it) and the ability to quickly identify when security is compromised and take action. “As Target found out, having advanced security tools is of little use if the IT security team doesn’t use those tools to identify a threat.”

This feature was originally published in the Q1 2015 issue of Infosecurity – available free in print and digital formats to registered users

There is an increasing landscape of risks facing well-connected businesses, and security practitioners must act now to mitigate them, explains Wendy M. Grossman

By the end of Laura Poitras’s documentary about the Snowden revelations, CitizenFour, Edward Snowden and Glenn Greenwald are so worried about surveillance that they sit side by side, writing each other notes on sheets of paper carefully shielded from the camera and speaking only in vague monosyllables. It’s not paranoia if they’re really spying on you.

The incoming constellation of technologies known as the internet of things (IoT) is bringing with it new security concerns that only a few years ago would have sounded like paranoia but are increasingly realistic. In November, a Reddit user posted a story about a boss’s computer that was infected with malware after the boss plugged an infected e-cigarette into its USB port to recharge it.

“The funny thing about it,” says Rik Ferguson, vice-president of security research for Trend Micro, “is that it’s not a new thing. Production line malware has been built into hardware like digital photo frames and others for many years now. The oldest example I could find was 2008.” What matters more, he says, “is how you manage any devices being connected.”

Lagging Behind

The underlying problem, says Adam Westbrooke, product director for UK-based Ovo Energy, is that manufacturers build to customer requirements: low cost, ease of use, and high functionality. Adding security tends to interfere with at least two of those – but the risks associated with breaches are wide: escalating attacks, unchecked access points to more complex systems, and hidden surveillance. He cites, for example, a recent survey of cheap tablets that found many are released with developer access still enabled and even spyware installed. All of these build on existing vulnerabilities.

The head data scientist for Massachusetts-based BitSight, Stuart Layton, says that one consequence of his company’s efforts to create objective ratings of the security effectiveness of other companies is that, “People are just beginning to realize how poorly their security has been configured up until now. We’re trying to tell people so they can make changes.”

At this early stage of the IoT, Layton is seeing what experts have been warning from the beginning: devices of all types accessible via the open internet – webcams in companies, printers, industrial-grade network switches, and mail servers, many with manufacturer-installed backdoors that are thoroughly documented in manuals that are easily accessible.

“What’s kind of alarming about the internet of things is that the technology industry, despite being in technology, has a pretty bad track record on maintaining the security of devices,” Layton explains. “Small companies won’t update, users won’t be aware how connected they are, and they pose a real security threat to themselves and anyone connected to them.”

The predicted number of devices that will be online-enabled around the world within five years is around 50bn

The key to change, he adds, is ensuring you know what your public-facing network looks like, reviewing policies governing traffic passing over your network, and regular self-scans to ensure nothing unexpected has been able to connect in or out.

Device or Data?

However, Ferguson suggests that focusing on the devices themselves – as in Black Hat talks – is to some extent misguided. Instead, he says, “What hackers are going after is the data,” which, he adds, means the cloud. “In large part, as a business you should be looking at how you mitigate the risk in the data center as well as the devices connected to the corporate network.” Even with something as personal as a heart rate monitor, the risks lie primarily in how the data is transferred, stored, and processed. “These are all data center questions,” Ferguson summarizes.

Despite that, some risks invoke the scene from CitizenFour. In mid-2014, the consultancy NCC Group demonstrated compromises of smart TVs and electronic hotel door locks. The researchers made three main points. Firstly, manufacturers assume that only other machines, not humans, will communicate with these devices and therefore security doesn’t matter. Secondly, manufacturers in the embedded world still think ‘security by obscurity’ is a reasonable strategy, forgetting that internal schematics and technical manuals including default passwords are all easily accessible on the internet. Thirdly, vulnerabilities present in devices when first deployed – such as the decision to run everything on some smart TVs as root – are likely to persist for years. Who patches a car – or a light bulb?

“I think you have to make the assumption that a lot of these devices should be untrusted and treat them accordingly,” says Rob Horton, NCC Group’s European managing director. He advises that, whenever a new device is connected, security practitioners should assess whether it introduces insecurities into the network that provide an entry point and what the impact of a compromise would be.

"As a business you should be looking at how you mitigate risk in the data center" – Rik Ferguson, Trend Micro

“I know of a company where the video conferencing system would allow you to dial in and it would automatically pick up, but the TV wouldn’t necessarily turn on,” Horton explains. The resulting scenario was very like last year’s season-ending episode of the TV show The Good Wife, where a law firm gained the advantage over its opponents by listening in on an apparently disconnected conference room.

“If I were a cyber-criminal, would I target lots of different companies, or would I go for law firms? They’re aggregators of really sensitive information such as mergers and acquisitions. Your threat as a company comes from many different aspects.”

Escalating Risk

Despite the spooky nature of this complex and less predictable environment, Kim Larsen, a senior client executive for Verizon, argues it isn’t really new: “Machine-to-machine communication, which is the foundation of the internet of things, is something that’s been going on for a long time.”

On the other hand, he agrees that the IoT can exacerbate existing risks. One often-cited data point in Verizon’s 2014 Data Breach Investigations Report, for example, is that organizations commonly take up to six months to detect a data breach.

“In the internet of things environment that could be very bad,” Larsen says. Both manufacturers and purchasers of such systems therefore need to ensure that security is built in at the outset rather than applied afterwards. The desktop computer model – release and update – will not work in this environment. SCADA systems are a good example of what not to do; legacy systems newly connected must change their threat model.

Patching cars may well become routine in the IoT era

Larsen continues: “This is why the companies who do this for internet of things devices need to be very much aware that cyber-threats are a huge issue they need to mitigate from the beginning and not try to solve afterwards.” Among the risks he lists is manipulating sensor data in ways that damage the system – for example by allowing the water pressure to get too high, or creating power spikes and denial-of-service attacks.

Risks like these are beyond what most security practitioners are used to. As Piers Wilson, head of product management for Tier3, a Sydney-based company specializing in security monitoring solutions, puts it, “The effects will be real. The coffee machine will overheat, healthcare will stop monitoring, your car will stop. So the implications are going to be real things rather than just flows of data and credit card information. A part of the physical world will change.”

This is a particular problem for organizations where IoT technologies will be an integral part of delivering the business. These include healthcare, logistics, delivery services, education, and manufacturing, where incorporating sensors into existing automated production lines will be the next stage of development. The key for Wilson will be ensuring that systems are designed to deal with failure scenarios and that good monitoring will catch anomalous behavior that might indicate problems.

Security By Default

The ideal would be to build secure products, write secure software, and deploy secure systems. Decades of software development, however, have shown how difficult a proposition that is.

"95% patching sounds impressive, but 5% of one billion devices is a large number" – Wil Rockall, KPMG

Wil Rockall, a director in the cybersecurity advisory team for KPMG, highlights this issue when he says that, “It would be a real shame if we went and, through lack of foresight, designed those systems to operate exactly the way we operate enterprise IT systems – inherently insecure products upon which we put layer upon layer of security products and then products on top of that to compensate for the weaknesses – rather than design them as inherently secure as we can.”

However, he adds, “There are always going to be bugs and problems. It’s hard to write really secure software, so we have a chance to really think about those things and do it intelligently rather than rush and blunder in with the same models.”

A complicating factor is the sheer volume of devices analysts expect will be deployed: we’re counting in billions. At that rate, the law of truly large numbers kicks in. As Rockall says: “95% patching sounds impressive, but 5% of one billion devices is a large number that makes it attractive to attack if you’re a criminal or a terrorist.”

A possible way to remedy that, he suggests, might be a legislative shift in allocating liability. “Who owns a piece of internet of things technology? Does the fridge manufacturer retain liability for all the things the fridge does? Do you have to pay for the two tons of yoghurt it orders?”

Frank Palermo, senior vice-president of the Millennial Solutions Group for Massachusetts-based IT services company Virtusa, favors being able to turn off or isolate misbehaving devices. In cars, for example, the entertainment system should not be hooked into safety-critical systems such as braking or steering.

Wilson notes that an added difficulty is that these technologies will arrive in the workplace without involvement or approval from the IT department, who are not the people historically tasked with buying items like coffee machines. “Technologies like that are not seen as IT projects.”

Taking control will be hard. Despite the risks, Trend Micro’s Ferguson warns that security practitioners will have no more success keeping connected devices out of the workplace than they did previous consumer technologies like mobile phones, tablets, or social networks. His advice: manage, rather than deny, their use.

“It’s an evolution for security departments,” he says. “Stop being the department of no; start being the department of how.”

This feature was originally published in the Q1 2015 issue of Infosecurity – available free in print and digital formats to registered users

Computer Says “No”: Will We Ever be Rid of DDoS Attacks?

With DDoS attacks reportedly increasing in size and complexity in 2014, Phil Muncaster canvasses the industry on where the problems lie and how we can respond

The distributed denial of service (DDoS) attack has been on the CISO’s radar for years now. But 2014 saw a huge surge in attack size and volume, causing misery for organizations across the globe. In the third quarter of the year alone, DDoS prevention firm Akamai said it dealt with 17 attacks greater than 100Gb/s, with the biggest standing at a whopping 321Gb/s. With cyber-criminals constantly adapting new techniques to improve their effectiveness, what can organizations do in response? And what does 2015 hold in store?

What’s a DDoS?

At its most basic, a DDoS is an attempt by an attacker to overwhelm a targeted computer resource with a flood of traffic from multiple compromised computer systems – usually members of a botnet. The distributed nature of the attack makes it difficult to block the attacking machines without also blocking legitimate traffic, resulting in a service outage – usually temporary – for the victim and its customers.

There are numerous different types of DDoS, but two of the most common are application layer attacks and infrastructure (or network) layer attacks. The former typically inundates a service with application calls, while the latter overloads a service by using up all of its bandwidth. Akamai’s stats reveal that the total number of attacks increased 22% from Q3 2013 to Q3 2014, with a 389% increase in attack bandwidth. However, while infrastructure layer attacks jumped by 43% over the period, app layer efforts decreased 44%.
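The per-source rate checks behind many basic mitigations can be sketched in a few lines. The threshold and window below are illustrative assumptions, not recommended values – real deployments baseline normal traffic first:

```python
import time
from collections import defaultdict, deque

class FloodDetector:
    """Flag source IPs whose request rate exceeds a threshold.

    The defaults (100 requests per 10 seconds) are purely
    illustrative; real mitigations tune thresholds to observed
    baseline traffic.
    """

    def __init__(self, max_requests=100, window_seconds=10):
        self.max_requests = max_requests
        self.window = window_seconds
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def record(self, ip, now=None):
        """Record one request; return True if the IP now looks like a flooder."""
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        q.append(now)
        # Drop timestamps that have aged out of the sliding window
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) > self.max_requests
```

The weakness is the “distributed” in DDoS: spread the same request volume across 100,000 bots and no single source ever trips a per-IP threshold, which is why the attacks described here are so hard to filter.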

Hitting Firms Where it Hurts

Before working out what level of response is needed, organizations need to understand why they’ve become a target, according to Quocirca director and analyst Bob Tarzey. “Launching a DDoS will not in itself make you any money as a cyber-criminal. There’s not an obvious way to monetize these attacks, apart from extortion,” he tells Infosecurity, adding that this is a relatively unfavored option compared to other illicit money-making schemes, given the time, effort and cost involved for cyber-criminals.

Arbor Networks’ Worldwide Infrastructure Security Report, released in 2014, has some interesting insights. It reveals that instead of criminal extortion (15%), DDoS attacks are most likely to be motivated by political or ideological disputes (40%). The rise of Anonymous has certainly had a major part to play here, and 2014 once again saw the online collective cause its fair share of outages – most notably in the #OpWorldCup blitz against FIFA World Cup sponsors.

DDoS outages cause more than just an IT headache for firms – reputation and revenue can also take a hit

It also emerged recently that potentially state-sponsored actors have been DDoS-ing pro-democracy Hong Kong sites such as that of the anti-Beijing paper Apple Daily.

Hong Kong saw a 111% rise in attacks from September to October 2014 as a result, according to Arbor Networks. Interestingly, 26% of attacks spotted were put down to criminals simply demonstrating their DDoS capabilities to potential customers. A further 18% were due to competitive rivalry between organizations, while 16% were launched merely as a diversion to enable a more serious data exfiltration attack.

The impact on organizations, of course, depends upon a variety of factors. A fleeting attack from Anonymous is not likely to have the same impact as a major, well-resourced campaign from a state-sponsored entity, for example.

However, for those organizations which make their livelihood from the internet – including online gaming, e-commerce sites, or even cloud service providers – it could lead to a worrying drop in earnings, negative publicity, and loss of customers to rival firms.

More Sophisticated?

Just as with the rest of the ever-evolving threat landscape, DDoS attackers are constantly changing their modus operandi to circumvent existing threat mitigation systems. The first big shift of 2014 was an explosion in NTP (Network Time Protocol) amplification attacks, signalled by a US-CERT warning in January which claimed attackers were exploiting a vulnerability in older versions of NTP servers to overwhelm victim systems with UDP traffic.

Incapsula research in March claimed to reveal a major shift towards this strategy, with attacks as big as 180Gb/s spotted. However, thanks to a concerted effort by organizations to patch and update their NTP servers, the attack methodology began to lose favor. In fact, NTP attacks dropped from 14% of all DDoS in Q1 to just 5% in Q3, according to Arbor Networks.
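The arithmetic behind amplification is simple: the attacker sends a small query with the victim’s address forged as the source, and the server “replies” to the victim with a much larger payload. The byte counts below are assumed figures for illustration only; real amplification ratios vary by server and by command:

```python
def bandwidth_amplification_factor(request_bytes, response_bytes):
    """Ratio of the reflected response size to the attacker's request size.

    A spoofed request of `request_bytes` elicits `response_bytes` of UDP
    payload aimed at the victim, multiplying the attacker's bandwidth by
    this factor.
    """
    return response_bytes / request_bytes

# Illustrative sizes only -- real figures depend on the server and the
# command abused (NTP 'monlist' responses can be hundreds of times
# larger than the query that triggered them).
ntp_request = 8          # assumed query payload, bytes
ntp_response = 8 * 440   # assumed reflected payload, bytes

factor = bandwidth_amplification_factor(ntp_request, ntp_response)
# With these assumed sizes, every 1 Mb/s the attacker sends becomes
# 440 Mb/s of flood traffic at the victim.
```

This multiplication is why patching NTP servers cut the attacks off so effectively: remove the oversized response and the economics of the technique collapse.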

Yet as this strategy began to wane, the cat-and-mouse game evolved again and so-called SSDP attacks grew, from just three known events in the whole of Q2 to a substantial 29,506 the following quarter. These attacks use source port 1900 and may be harder to stop with patching, as they exploit a vulnerability in home customer premises equipment (CPE) that users rarely bother to update with newer firmware. Some 42% of all attacks greater than 10Gb/s used SSDP reflection during Q3 2014, according to Arbor.
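A coarse defensive heuristic follows from how these reflection attacks work: the flood arrives as unsolicited UDP traffic from the well-known source port of the abused service (123 for NTP, 1900 for SSDP). The sketch below is a hypothetical classifier, not a real IDS rule set; the `solicited` flag stands in for the request-tracking a real detector would maintain:

```python
# Well-known UDP source ports seen in common reflection attacks.
# The mapping is illustrative, not exhaustive.
REFLECTION_PORTS = {
    53: "DNS",
    123: "NTP",
    1900: "SSDP",
}

def suspected_reflection(packet):
    """Return the abused service's name if a packet looks like reflected
    UDP flood traffic, else None.

    `packet` is a dict with 'proto', 'src_port' and 'solicited' keys;
    real detection would track outstanding outbound requests rather
    than rely on a precomputed 'solicited' flag.
    """
    if packet.get("proto") != "udp":
        return None
    if packet.get("solicited"):
        return None  # a reply we actually asked for
    return REFLECTION_PORTS.get(packet.get("src_port"))
```

In practice such filtering happens upstream, at the ISP or scrubbing provider, since by the time reflected traffic reaches the victim’s own link the bandwidth is already consumed.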

“There’s not an obvious way to monetize these attacks, apart from extortion” – Bob Tarzey, Quocirca

Attackers have now also begun to use public cloud infrastructure to launch DDoS campaigns. In July, researchers revealed that hackers were exploiting a vulnerability (CVE-2014-3120) in open source search engine Elasticsearch to break into Amazon EC2 virtual machines and launch their attacks using a new variant of the Linux DDoS trojan Mayday. In fact, new DDoS malware is a constant thorn in the side of those tasked with mitigating these attacks, especially as easy-to-use toolkits are becoming increasingly widely available.

Discovered in September, the Spike toolkit is the latest of these and is said to be able to build even bigger DDoS botnets by targeting a wider range of internet-enabled kit, such as routers and internet of things (IoT) devices.

Response Strategies

So how do we respond to the growth in DDoS attacks? Two main strategies are open to organizations, according to Bloor senior analyst Fran Howarth.

“One of the easiest ways to try to prevent a DDoS attack is to overprovision your infrastructure, especially those parts that are internet-facing. Organizations should also look to ensure that infrastructure is geographically widespread and that anycast, a technique that allows multiple servers to share the same IP address, is deployed,” she explains. “This, however, can be an expensive option and is not for everyone. An alternative is to subscribe to cloud-based services that will handle the traffic overload in the cloud before it even reaches your network.”

However, while there are certainly firms out there that can help, the industry as a whole has been slow to address the threat. “I still believe there is a long way to go,” argues Howarth. “There are many vendors and service providers with their own offerings, but I see little evidence of any co-ordinated, concerted effort to develop and standardize. We are, though, starting to see a greater emphasis placed on regulation.”

KPMG cyber security director, Jim Fox, believes all industry stakeholders can do their bit: “More can be done by sharing information in near real time on the nature of the attacks and a more co-ordinated response between target firms, internet service providers, security vendors and government,” he tells Infosecurity. “Ultimately, governments need to work to disrupt the organized crime groups undertaking those attacks, and that will require difficult and painstaking international action.”

Hacktivist groups such as Anonymous often attack targets with DDoS campaigns

What Lies in Store

So what of the future? More regulation is likely, according to Howarth. She says that under amendments to FFIEC rules, US financial institutions must now have DDoS mitigation technologies in place, although no individual tech was specified.

“With DDoS attacks on the rise, and increasing in complexity, size and sophistication, more organizations will be hit and more regulation is likely,” she says. “We are also seeing more co-ordinated attacks against specific industries, which is likely to continue.”

DDoS attackers will continue to evolve their methods to outwit the security vendors into 2015 and beyond, for example with SYN floods and application layer attacks. A report by DDoS mitigation firm Black Lotus also recently claimed that more countries, like Vietnam, India and Indonesia, would emerge as major sources of attack traffic thanks to their sheer number of infected endpoints, especially mobile phones.

According to KPMG’s Fox, cyber-criminals will “continue to find more and more obscure protocols and vulnerabilities to amplify the effect of their attacks. We may also see more aggressive or disruptive attacks which aim to take down target systems by directly exploiting security vulnerabilities – or target the routing infrastructure of the internet itself.”

It’s not all doom and gloom, though. For Quocirca’s Tarzey, there could come a time when all but the most sophisticated DDoS efforts can be dealt with by the majority of organizations: “I’d argue that anyone with a good spam filter never really sees any spam, but ten years ago it was a real problem. They’re still sending the spam out but we’ve got the problem under control. [In time] we’ll have the network locked down well enough to defend against the obvious DDoS attacks.”

Tarzey adds that, “There’s more at stake from being online today. It’s the reason why the industry is more focused on the issue. This is a good thing because it means the industry is responding. So watch this space.”

This feature was originally published in the Q1 2015 issue of Infosecurity – available free in print and digital formats to registered users

A Higher Law

It is not wisdom, but authority, that makes a law, the saying goes. Perhaps that’s why international cybersecurity laws are so lacking, says Danny Bradbury

December 2014 was a big month for cybersecurity in Canada. The country passed legislation allowing it to ratify an international treaty on cybercrime a mere 13 years after it was first unveiled.

Things tend to move like molasses in the world of international law. That’s a sticky problem for those tasked with bringing cyber-criminals to justice. With online thieves and exploiters often operating outside the legal jurisdiction of their victims, countries must work together to protect themselves against global threats. But is international cybersecurity law strong enough to bring criminals to justice? And if not, then what other work needs to be done?

The Law in Europe

There are several European directives that touch on areas of cybersecurity. The 2002 E-Privacy Directive requires European electronic communications companies to report data breaches, while the 2008 European Critical Infrastructures Directive states that critical infrastructure service providers must put electronic protections in place. The 1995 Data Protection Directive decrees that data controllers must adequately protect personal data, but this legislation will be superseded by the General Data Protection Regulation, which is expected to be adopted in 2015.

These directives have created some international consistency in cybersecurity across the EU, but far more work is needed. EU lawmakers admitted this in a preamble to proposed legislation now in the final stages of becoming law, which they hope will tie network information security law together into something more cohesive.

In March 2014, the EU voted through a Network and Information Security (NIS) Directive. Originally proposed a year earlier, the directive was designed to enforce an EU-wide cybersecurity strategy created at the same time. The proposal states that, “Existing NIS capabilities and mechanisms are simply insufficient to keep pace with the fast-changing landscape of threats and to ensure a common high level of protection in all the member states.”

The directive, which at the time of writing was still being thrashed out by stakeholders, would force affected companies to report security breaches with a significant impact. Also included were instructions for national regulators to co-operate by providing early warnings to each other on cybersecurity risks, and for secure channels for sharing sensitive information.

The big debate at the end of 2014 was about whether the directive should affect only operators of critical national infrastructure, or whether providers of information services, such as social network and e-commerce companies, should also be affected.

2015 should see the passing of the new EU General Data Protection Regulation

The Budapest Convention

The directive will move the EU further towards a consolidated approach to cybersecurity, but only one international treaty currently addresses cybercrime more directly. The European Convention on Cybercrime, also known as the Budapest Convention, defines tactical operations for fighting cybercrime on a global basis. It is this legislation that Canada is now able to ratify, which puts the North American country on a list including the US, France, and Germany.

Published in 2001, the Budapest Convention was a long time coming. Its roots date back to 1976, when participants at the Council of Europe Conference on Criminological Aspects of Economic Crime in Strasbourg discussed ways to define cybercrime. This was a forward-thinking crowd: the wooden-cased Apple 1, one of the first home computers with an actual keyboard, shipped that year.

Little then happened until 1989, the year in which HTTP, the web’s underlying protocol, was conceived. The Council of Europe created its own list of recommendations for punishable cybercrime acts, which was adopted the following year. Seven years later, the Council began negotiations on the European Convention on Cybercrime. Introduced in 2001, it finally came into effect in 2004.

Assessing the Law

So does the Budapest treaty do its job? There are still notable problems prosecuting cyber-criminals, warns Steve Durbin, managing director of the Information Security Forum (ISF): “If you’re talking about cybercrime, and you talk to Interpol, they’ll describe an immense amount of frustration on their part in tracking perpetrators down, and then doing something with them if they do find them,” he says. “So, it’s about the willingness of nation states to observe and collaborate and prosecute cyber-criminals.”

Countries that ratify the Convention agree to implement its policies in domestic law. These policies span key areas: fraud and forgery, child pornography, copyright infringements, and security breaches. However, neither Russia nor China – two large sources of cyber-criminal activity – have signed or ratified the treaty.

Still, there have been some significant wins, thanks in part to co-operation facilitated by the Budapest treaty. In November, law enforcement units from the US and over a dozen countries arrested 17 individuals in a bust targeting black markets operated via the Tor network.

Ulf Bergström, head of communications and external relations at Eurojust, the judicial co-operation unit, argues that international agreements are crucial when targeting cybercrime with operations like these: ‘’It is paramount in cybercrime to involve the judicial authorities, prosecutors and investigations from the start to ensure that evidence is gathered in a way so that it is admissible later in court.

“You must also sort out where you will prosecute, as this is a cross-border operation; at the same time, you must balance the citizen’s rights,” he continues. “So, clearly, without justice, there will be no success in fighting crime.’’

“Existing NIS capabilities and mechanisms are simply insufficient to keep pace with the fast-changing landscape of threats” – Proposal for EU Network and Information Security Directive, 2014

Military Activity

Commercial cybercrime isn’t the only thing that international law must consider, though, according to legal experts. Military attack and defense is becoming an increasingly important part of the equation.

“There is real doubt about whether the Convention reaches sovereign state activity, and as we know, this is the major area of great concern to those of us that want a peaceful, conclusive and fair cyberspace,” argues Mary Ellen O’Connell, professor of international dispute resolution at the University of Notre Dame’s Kroc Institute for International Peace Studies.

O’Connell is particularly concerned about the use of Stuxnet, the virus that disrupted operations at the Iranian Natanz nuclear facility, now believed to have been a US/Israeli project: “I am also concerned about how the Chinese are using the internet for military advantage,” she warns.

If Budapest doesn’t shed legal light on these kinds of state-sponsored cyber-activities, then what does? Robert Clark, an attorney in cybersecurity and privacy law at the US Military Academy’s Army Cyber Institute, says that the Pentagon has mapped laws used in conventional warfare to the cyber domain. He refers to the Law of Armed Conflict (LOAC), which draws on treaties such as the Geneva Convention and Hague Regulations for warfare, and covers basic principles such as proportionality and military necessity.

In a 2011 report to Congress on cybersecurity defense policy, the Pentagon called for the inclusion of LOAC as part of a strategy including “the use of all necessary means” to defend its interests in cyberspace. “LOAC is just as adequate as it is for the other domains: land, air, sea and space. In all these domains, including cyber, LOAC can be easy, hard, and everything in-between,” says Clark.

O’Connell argues for a binding treaty on nation state engagement specifically for cyberspace, governed by an independent body like the International Telecommunications Union (ITU), with its rich understanding of the internet. Clark is unconvinced: “China and Russia were leading the pack to come up with a cyber-treaty convention, and the reason we objected is because it also went to our core basics of freedom of information and freedom of speech. They wanted to include aspects of regulating and suppressing freedom of speech as part of this core cyber convention.”

Instead of a pervasive treaty on cyberspace engagement, the US has moved to bilateral talks, but these have been difficult. Direct discussions with China on cybersecurity recently stalled after five members of the Chinese military were indicted on hacking charges in the US. Meanwhile, Russia and China have been working towards signing a bilateral treaty on cyberspace engagement rules.

Neither Russia nor China have signed or ratified the Budapest Convention

State-Sponsored Theft

Part of the problem with China is the high instance of IP theft alleged to be emanating from that country. The five aforementioned Chinese military officers were accused of hacking firms including Westinghouse Electric, US Steel Corp, and SolarWorld, in an effort that US attorney general Eric Holder said was designed to advance the interests of Chinese state-owned firms.

When states sponsor or organize the theft of corporate secrets, that is classed as espionage, Clark points out, arguing that it isn’t illegal internationally. Countries normally prosecute such activities under domestic law. That’s a useful tactic when the spies reside in your cities, he points out, but less so when they’re a continent away, doing it via keyboard.

The only other option is to address it privately, says Gregory Nojeim, director of the Freedom, Security and Technology Project at the Center for Democracy and Technology: “Sometimes it becomes a diplomatic issue, in which case the relevant officials will be raising the matter with the foreign governments,” he explains. “I would imagine that sometimes the State Department raises it with foreign ambassadors here.”

Like purely commercial online criminal behavior, state-sponsored activities are developing at breakneck speed. Politics moves more slowly, especially when multiple countries with different agendas are all working on the same treaty. For now, it seems that most of the meaningful discussion around state-sponsored cyberspace activity is happening as many hacking operations do: behind closed doors, in secret.

This feature was originally published in the Q1 2015 issue of Infosecurity – available free in print and digital formats to registered users

Cryptowars 2.0 and the Path to Ubiquitous Encryption

As government and technology companies square up once again over encryption, Tom Fox-Brewster reports from the frontline of the Cryptowars’ second coming

Privacy is dead and we all need to deal with it. That statement was a shibboleth of those who would directly benefit from saying it, according to Jon Callas, world renowned cryptographer and co-founder of secure smartphone maker Blackphone. It’s simply not true, he adds.

Who would benefit from a world with no privacy? Intelligence agencies and companies that trade people’s data without affected individuals knowing are two obvious examples. But the tide is turning against them. The rise of cheap and widespread encryption across the web and internet-enabled communications has, in fact, pointed to a world where online privacy might be ubiquitous.

Watershed Moment

When historians look back at 2014, they’ll likely see it as the year when this movement gained proper momentum. Just recently, WhatsApp added end-to-end encryption to its massively popular messaging service thanks to a collaboration with Open Whisper Systems, which had already created the much-respected TextSecure and RedPhone apps for private communications. Companies like Silent Circle and Blackphone have pushed on, trying to create financially viable businesses with their encrypted comms offerings. Much-used content delivery network CloudFlare decided to enable Secure Sockets Layer (SSL) web encryption across the sites it served, whilst notable tech experts like Chris Soghoian have been pushing for SSL across every website on the planet. Apple and Google, meanwhile, announced their respective mobile operating systems would encrypt users’ data by default.

It was the actions of those two tech giants that irked law enforcement in America the most, however. FBI director James Comey told media he was concerned that Apple and Google were marketing a technology that would “allow people to place themselves beyond the law.” Added to the Edward Snowden documents that revealed various attempts by US and UK intelligence agencies to break much-used cryptography, Comey’s comments made it apparent that certain corners of government were willing to fight against widespread encryption. Privacy advocates the world over looked on dumbfounded. They felt it was a sign: Cryptowars 2.0 had begun.

Edward Snowden's famous leaks dragged Cryptowars back into the limelight

Going Underground

The original Cryptowars, according to the account of Ross Anderson, professor of security engineering at the University of Cambridge, lasted roughly from 1993 till 2000. President Clinton was persuaded by the National Security Agency (NSA) to try to grab everyone’s encryption keys, says Anderson: “We all fought back, from NGOs to Microsoft, and the policy was abandoned while Al Gore was trying to get elected. We thought we’d won, but it just went underground, as Snowden told us.”

He points to one Snowden revelation in particular, the NSA decryption program known as BULLRUN, which has been covertly compromising cryptography in various ways. For starters, the NSA had spent at least $250 million on influencing companies’ technical designs to try to ensure it could crack their protections, while GCHQ had explored ways to get access to Hotmail, Google, Yahoo and Facebook traffic. The NSA had also set up a ten-year program solely designed to crack encryption.

But the tech industry’s response hasn’t been to bow down to intelligence agencies. Instead, it has only bolstered encryption, hence the rush to push out end-to-end protected systems. And what they’re doing is wholly legal, which leaves law enforcement with one of the toughest questions it has ever had to answer: how does it legally get access to data when users have total control over protections around their information?

“We thought we’d won, but it just went underground” – Ross Anderson, University of Cambridge

Various countries are trying to pass access laws which would compel service firms – whether internet service providers like BT and Virgin, or internet firms like Facebook and Google – to do everything that’s demanded of them. In the UK there’s the Data Retention and Investigatory Powers Act 2014, which is heading for a judicial review after concerns were raised that the government had extended its powers to reach into foreign data centers and into webmail services such as Gmail.

But there are legal contradictions that police have to cope with and that bemuse critics of surveillance. For instance, privacy laws in the UK demand that firms should not hand over information on their own nationals to anyone outside the country, unless they have proven their ability to protect data. The Information Commissioner’s Office has been demanding properly implemented encryption from private and public organizations. In the US, various laws, such as the Sarbanes-Oxley Act, require decent data protection. So on the one hand, governments are demanding encryption, whilst on the other they want easy access to data. As Callas notes: “There is no such thing as ‘Government.’”

The Rise and Rise of Encryption

With such apparent paradoxes and with various forces fighting their corners, how might the second Cryptowars be settled? The cryptographers certainly won’t be backing down. Callas, who was also involved in the Dark Mail bid to create highly secure email, says crypto designers have to create systems that “actually work; they have to be effective.” It’s their raison d’être. “We’re in the job of protecting people’s communications because there are gazillions of people who have the right to talk to people,” Callas adds. “They have business needs and personal needs and I believe they have a fundamental human right to defend themselves.”

The tech companies will continue to improve encryption too, partly as part of a PR campaign in response to the Snowden leaks, but also because they are keen to place control of data into the hands of users so they don’t have to make decisions on whether to work alongside governments. The technology itself will distance them from intelligence agencies as it’ll prevent them accessing any data directly, though there are still some weaknesses that could allow them to access users’ communications.

From a technical perspective, intelligence agencies don’t have to spend all their resources subverting cryptography, even though they will likely keep trying to break it. They can – and do – seek to get at data before it is encrypted.

GCHQ has long sought to intercept traffic from major tech giants

“It’s easier, for example, to wait until a message has arrived and is decrypted by the recipient in order to read it rather than try to decode the message yourself. Don’t think Bletchley Park, where messages were plucked out of the ether and decrypted; think of the technology you use in front of you being subverted to read your communications,” says Professor Alan Woodward, a security expert and visiting professor at the Department of Computing, University of Surrey. It’s believed GCHQ infected Belgian telecoms giant Belgacom with the Regin malware partly because it wanted to get at communications before they were turned into completely garbled nonsense.

The Long Arm of the Law

Law enforcement still has certain laws on its side if it does want to break encryption, even if they’re limited. The UK Regulation of Investigatory Powers Act allows law enforcement to demand that a suspect decrypt anything that has been seized, though compelling an unwilling suspect to comply can be difficult. In the US, citizens faced with such demands have claimed their Fifth Amendment right against compelled self-incrimination.

As subverting technologies becomes increasingly difficult and citizens can either flout the law or use it to their advantage, governments will likely have to rethink their strategy. Anderson wants to end the Cryptowars 2.0 early with a new treaty about law enforcement wiretapping that would let police forces in signatory states get access to communications data and content in other signatory states, with a number of safeguards. These would include judicial warrants, where an independent person has assessed the case and found probable cause for further investigation, rather than relying on a minister or intelligence agent to make the call.

There also needs to be transparency, including the eventual disclosure of all warrants after a fixed period of time, or when the suspect is charged or case dropped, says Anderson. There should also be jurisdiction, so that countries have to go through another’s legal system if they want to get at data outside of their borders, he adds.

“These Cryptowars are probably un-winnable by either side, if there are actually any clear ‘sides’ in this debate” – Professor Keith Martin, Royal Holloway

These are sensible suggestions, but some see no end to the back and forth between tech companies and global governments.

“These Cryptowars are probably un-winnable by either side, if there are actually any clear ‘sides’ in this debate – the battleground just continues to move around,” says Professor Keith Martin, director of the Information Security Group at Royal Holloway.

Wherever individuals stand on the issue, they should remember not to place all their faith in encryption to protect their privacy. Just look at the many SSL weaknesses that received so much press last year, from the Heartbleed vulnerability to the POODLE flaw.

“It is certainly the case that there are more encryption products and services around. However, it is important to realize that encryption has its limitations. It is very good at making data unreadable while it is stored and/or communicated across a channel. But when that data is actually used, it normally needs to be decrypted and then exists in a readable state. Thus, use of encryption certainly makes it harder to access data – but it does not make it impossible to access,” concludes Martin.
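Martin’s point can be made concrete with a toy cipher – here a keystream derived from SHA-256 in counter mode, which is for illustration only and must never be used as real cryptography. The data is unreadable while stored, but has to be turned back into plaintext the moment it is used:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the key.

    Toy construction for illustration only -- real systems should use
    a vetted authenticated cipher such as AES-GCM from a maintained
    library.
    """
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_cipher(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it a second time decrypts."""
    ks = _keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

key = b"shared secret"
plaintext = b"card number 4111-xxxx"

stored = xor_cipher(key, plaintext)   # unreadable at rest
assert stored != plaintext

# ...but to actually *use* the data it must come back to plaintext,
# and at that moment it is exposed to anything watching the endpoint.
in_use = xor_cipher(key, stored)
assert in_use == plaintext
```

The design lesson is exactly Martin’s: encryption protects data at rest and in transit, not at the point of use – which is why, as described above, agencies target the endpoints where decryption happens.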

This feature was originally published in the Q1 2015 issue of Infosecurity – available free in print and digital formats to registered users