Wednesday, September 5, 2012

How will your company respond if an incident does happen? Spiezle offered the following advice on developing a strong plan for acting in the wake of a data breach.
1. What Data Do You Have? The first step is to fully understand the kinds of customer information your company is handling and storing--and why. It might sound obvious, but according to Spiezle, breaches often expose how little an organization knows about its data. "I've gone through a lot of breach responses with companies where people are literally sitting around a table saying 'I had no idea we were doing that,'" Spiezle said. That can exponentially complicate matters when a data-loss event occurs--you can't very well determine the consequences and communicate them appropriately if you don't know what was at stake in the first place. Assess the kinds of data you have, who has access to it, and why.
The general rule of thumb: Limit access to those who need it for legitimate business reasons. Put a particularly high burden of proof on the case for storing sensitive customer information on laptops, external drives, mobile devices, and other hardware that can be easily lost or stolen.
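The "limit access" rule of thumb above amounts to a default-deny lookup: access is granted only where a documented business need exists. A minimal sketch in Python, with hypothetical roles and data categories:

```python
# Toy least-privilege access check. The roles, data categories, and
# policy table below are hypothetical illustrations, not a real schema.

ACCESS_POLICY = {
    "billing":   {"payment_records", "contact_info"},
    "support":   {"contact_info"},
    "marketing": {"contact_info"},
}

def can_access(role: str, data_category: str) -> bool:
    """Allow access only if the role has a documented business need.

    Unknown roles get an empty set, so the default is deny.
    """
    return data_category in ACCESS_POLICY.get(role, set())

print(can_access("billing", "payment_records"))  # True: legitimate need
print(can_access("support", "payment_records"))  # False: no business need
```

The useful property is the default: anything not explicitly listed is denied, which mirrors the "high burden of proof" Spiezle recommends for sensitive data.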
2. What Are Your Regulatory Requirements? Spiezle is quick to note that this is one of the toughest data-breach challenges for SMBs that lack a compliance officer--much less an entire compliance staff--or that rely on IT generalists rather than information security specialists. But your regulatory requirements will dictate what you must do in data-breach scenarios. These are defined by the likes of HIPAA or PCI, but Spiezle noted that 46 states also now have some form of reporting requirements.
Alas, while there are vendors that can help, there's no central online destination for companies to assess all of their compliance requirements. Spiezle thinks federal legislation could help. "It is a very complex issue, and [it] again underscores the importance of pre-planning," he said.
Bonus advice: Be proactive. If you do seek help from a vendor, Spiezle pointed out that it's much better to do this when you don't already have a problem--it's tough to get the best terms if you're negotiating at 3 a.m. on a Saturday after a breach has already occurred.
3. Who Will You Notify? Knowing who you'll need to communicate with can help lead to faster, more effective responses to data-loss events. Identify those groups before something goes wrong. "This might be partners, customers, [or] government agencies," Spiezle said. He noted that some companies develop relationships with appropriate law enforcement agencies in advance so that they know the proper people to contact in the event of a data breach. Consider it the business equivalent of keeping a list of emergency contact numbers near your home phone.
4. When Will You Notify Them? This is a tricky and much-debated area: How soon should you notify affected customers and other stakeholders? Spiezle said it's a case-by-case decision. With law enforcement or other government agencies, it's usually an ASAP scenario. Customers and partners are a tougher call. On the one hand, Spiezle said, you don't want them to find out from the media or other external sources. On the other hand, you don't want to make things worse by communicating inaccurate information, which can happen if you act too quickly. Some of this decision may be guided by the regulatory requirements your company operates under, too. Rule of thumb: Communicate as quickly as possible without sacrificing the clarity and accuracy of the information you provide.
5. What Will You Say? One way to cut down your response time and outreach efforts: Prepare your customer and other external communications in advance. This gets back to the importance of Tip 1--it's tough to accurately message a breach if you don't know what data you had in the first place. If you've got a complete understanding of your information and how you handle it, you can develop solid communications templates in advance.

1. The average number of serious* vulnerabilities found per website per year was 79, a significant reduction from 230 in 2010 and down from 1,111 in 2007.

2. Cross-Site Scripting reclaimed its title as the most prevalent website vulnerability, identified in 55% of websites.

3. Web Application Firewalls could have helped mitigate the risk of at least 71% of all custom Web application vulnerabilities identified.

4. There was notable improvement across all verticals, but Banking websites had the fewest security issues of any industry, with an average of 17 serious* vulnerabilities identified per website.

5. Serious* vulnerabilities were fixed in an average of 38 days or faster, a vast improvement over the 116 days it took during 2010.

6. The overall percentage of serious* vulnerabilities that were fixed was 63%, up from 53% in 2010 and a marked improvement from 2007, when it was just 35%. That works out to roughly a 7% average improvement per year over the last four years.

7. The higher a vulnerability's severity, the more likely it is to reopen. Urgent: 23%, Critical: 22%, High: 15%.

8. The average number of days a website was exposed to at least one serious* vulnerability improved slightly to 231 days in 2011, from 233 days in 2010.
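The roughly 7% annual improvement cited in item 6 is simply the average yearly gain in the fix rate from 2007 (35%) to 2011 (63%):

```python
# Average annual gain in the remediation rate over four years:
print((63 - 35) / 4)  # 7.0 percentage points per year
```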

Saturday, June 30, 2012

When it comes to investing in network security, there are three types of IT philosophies.

"There are the ones that value technology and see it as a strategic advantage in their environment, and they'll invest heavily in it. There are the ones that know they need it and they're willing to invest where they need to," says Rick Norberg, president of Atrion Networking SMB, an IT service provider. "And then there are the ones that just see it as the cost of doing business. And those are the ones that tend to be unprotected, unmanaged and dedicate inadequate staff resources in order to plan through security."

Don't get pegged in that third group, Norberg warns. According to Norberg and several other IT experts, there are a number of ways to revamp your thinking and your network design for better IT functionality and improved security. Here's where they say to start.

Build Backward from Mandates

According to Norberg, before designing your network it's important to take a step back and think about a couple of critical variables, including:

What vertical you operate in;
What compliance mandates you answer to;
Where you want technology to take the company in the next three years.
Then design back from there, he suggests. When taken into consideration early in the design process, these elements should have significant bearing on the choices you make in infrastructure and deployment options.

"Sometimes, people will just buy cheap switches, network gear, firewalls and things like that because they're inexpensive. And they throw them in," says Norberg. "Then when they have a breach, they realize they just paid a zillion dollars to the government or to a credit card company or something like that in order to remediate it. And then they have to go buy the more expensive gear anyway. Taking an 'it can't happen to me' approach is probably not the best way to design a system."

Know Where Data Sits

One of the biggest weaknesses of many organizations is the lack of visibility into where exactly important data sits on the network.

Scott Laliberte, managing director at global business consulting and auditing firm Protiviti, says, "Among the things that clients we are working with are spending more time on is not only data leakage prevention--making sure it doesn't go out on the front end--but also what I call 'data discovery,' which is being more confident and clear on where the data for sensitive information really does reside and then organizing it in such a way that you can manage it in a segmented way."

According to a Protiviti survey earlier this year, organizations still struggle with data discovery and classification--just 50% of respondents said they have a specific plan in place to categorize data. And according to Laliberte, when he engages with clients to do data discovery on their network for the first time, surprises are common.

"In almost every instance there is a surprise found by the client as to where some of the sensitive data is," he says.
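Data discovery at its simplest is pattern scanning across stored files and exports. A toy Python sketch follows; the regexes are illustrative only, and real discovery tools use far more robust detection and validation (such as Luhn checks for card numbers):

```python
import re

# Hypothetical detection patterns for illustration. Production tools
# validate matches (checksums, context) to cut false positives.
PATTERNS = {
    "ssn":         re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def discover(text: str) -> dict:
    """Count how often each sensitive-data pattern appears in a text blob."""
    return {name: len(p.findall(text)) for name, p in PATTERNS.items()}

sample = "Customer SSN 123-45-6789 found in an old export file."
print(discover(sample))  # {'ssn': 1, 'credit_card': 0}
```

Running even a crude scan like this across file shares is often how the "surprises" Laliberte describes surface.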

Next: The Importance of Modularity, Firewalls and Patches

Modularity Is the Name of the Game

The more modular your network design, the easier it is to control and monitor traffic, according to Norberg.

"You want a network that you're able to functionally monitor and secure, so you're controlling the traffic on the network. You want one that can grow with the users," he says. "A lot of times, you start with a flat network and then you start to modularize the phone traffic, the PC traffic and, if they're in a retail environment, some of the POS terminals to make sure they're secure and separated from each other. And then you want to get more granular from there."

When done efficiently, network segmentation and modularity give a lot more flexibility in prioritizing risky segments of the network so you can focus your monitoring and security efforts on the most critical areas rather than having to worry about all of the infrastructure in aggregate. That's a step up from what most organizations are used to, says Norberg.
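The segmentation Norberg describes boils down to a default-deny policy between zones: traffic flows only where an explicit rule permits it. A hypothetical sketch, with made-up zone names and rules:

```python
# Toy segmented-network policy. Zone names and the allowed-flow table
# are illustrative assumptions, not a real configuration format.

ALLOWED_FLOWS = {
    ("pc",  "internet"):   True,   # user PCs may reach the Internet
    ("pos", "payment-gw"): True,   # POS terminals talk only to the gateway
}

def flow_permitted(src_zone: str, dst_zone: str) -> bool:
    # Default deny: any flow not explicitly allowed is blocked.
    return ALLOWED_FLOWS.get((src_zone, dst_zone), False)

print(flow_permitted("pos", "payment-gw"))  # True
print(flow_permitted("pc", "pos"))          # False: POS stays isolated
```

The design choice worth noting is the default: in a flat network everything can talk to everything, whereas a segmented design starts from deny and grants exceptions.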

"Traditionally, you might just slap a firewall into there and when it goes down, the customer calls you," he says. "These days, we're actually looking at the logs and doing proactive monitoring on the devices to make sure that they're not only secured and updated with the latest firmware, but you're also looking at what's happening with the firewall and the connection itself."

Manage Firewalls More Intelligently

Speaking of firewalls, organizations have to take an active management approach to their firewall rules if they're going to get the most out of these assets. With most enterprises today depending on thousands of firewalls dispersed throughout their network fabric, firewall management has become an important element both for efficient IT operations and effective IT security.

Beaver says that, all too often, he sees organizations that believe that their security is OK. However, once he starts digging into their firewall rule sets and configurations, security holes are discovered.

"[We find] system configuration problems, weak passwords, network segments that shouldn't be talking to one another, ports that are open," he says. "I often see database servers that are sitting out on the public Internet wide open for attack."
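Audits like the ones described here can be partly automated by scanning rule sets for risky exposures, such as database ports open to any source. A simplified Python sketch with made-up rules and an illustrative risky-port list:

```python
# Toy firewall rule-set audit. The rule format and the risky-port list
# are hypothetical simplifications for illustration.

RISKY_PORTS = {23: "telnet", 1433: "mssql", 3306: "mysql", 3389: "rdp"}

rules = [
    {"source": "any",        "port": 443,  "action": "allow"},
    {"source": "any",        "port": 3306, "action": "allow"},  # DB open to the world
    {"source": "10.0.0.0/8", "port": 3389, "action": "allow"},  # internal only
]

def audit(rules):
    """Flag allow-rules that expose a risky service to any source."""
    findings = []
    for r in rules:
        if r["action"] == "allow" and r["source"] == "any" and r["port"] in RISKY_PORTS:
            findings.append(f"{RISKY_PORTS[r['port']]} ({r['port']}) exposed to any source")
    return findings

print(audit(rules))  # ['mysql (3306) exposed to any source']
```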

Patch

Patch management isn't just for endpoints. Smart organizations need to have utilities in place that can automate system patching across all IT infrastructure.

"If I'm the IT director for the company, I want to make sure I'm using every tool capable of doing updating firmware and software on an immediate basis and alerting and reporting on it," says Norberg. "Generally, you want to buy a third-party product that's capable of doing more than just one particular manufacturer. Otherwise, you run into problems where you've got some of this gear, some of that gear, some of these servers, and then you end up spending a lot of your time not being very efficient in the way you're patching things."
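The cross-vendor patch check Norberg describes amounts to comparing installed firmware and software versions against the latest known releases across all gear. A minimal sketch, with hypothetical product names and inventory data:

```python
# Toy patch-status check across mixed gear. Product names, version
# tuples, and the inventory below are made up for illustration.

LATEST = {"switch-os": (2, 4, 1), "edge-fw": (9, 0, 3), "app-server": (7, 2, 0)}

inventory = [
    {"host": "sw-01",  "product": "switch-os",  "version": (2, 4, 1)},
    {"host": "fw-01",  "product": "edge-fw",    "version": (8, 9, 0)},  # behind
    {"host": "app-01", "product": "app-server", "version": (7, 2, 0)},
]

def outdated(inventory):
    """List hosts running a version older than the latest known release."""
    # Tuple comparison gives correct ordering: (8, 9, 0) < (9, 0, 3).
    return [d["host"] for d in inventory if d["version"] < LATEST[d["product"]]]

print(outdated(inventory))  # ['fw-01']
```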

Monday, June 4, 2012

Apple has introduced a guide to iOS security, which was posted to Apple.com sometime in late May but is just now being noticed outside the Apple developer community. The publication is notable because it’s the first time Apple has published a comprehensive security guide intended for an I.T. audience. (Apple’s developer-friendly documentation on security matters is easy to spot, however.)

The new guide includes four sections covering system architecture, encryption and data protection, network security, and device access.

In reading the introduction, it’s clear that the guide’s intention is to better help corporate I.T. understand the security environment with iOS devices, including iPhones, iPod Touches, and iPads. It’s important that these details are documented in language I.T. understands as more and more businesses allow personal devices on their network and implement their own BYOD (bring your own device) programs.

To this point, the report begins:

“Apple designed the iOS platform with security at its core. Keeping information secure on mobile devices is critical for any user, whether they’re accessing corporate or customer information or storing personal photos, banking information, and addresses….
For organizations considering the security of iOS devices, it is helpful to understand how the built-in security features work together to provide a secure mobile computing platform.”

While some may see the guide as an example of Apple’s increasing openness (on matters not related to new products, that is…), much of the information it contains is not new. It has simply been repackaged for a different audience.

The guide does, however, detail things like how the code-signing process and ASLR (address space layout randomization) work in iOS, details that security researchers had uncovered before Apple documented them.

Another I.T.-friendly tidbit is a list of items administrators can restrict using configuration profiles within their Mobile Device Management solution: Siri (as IBM recently did), FaceTime, the camera, screen capture, app installs, in-app purchases, Game Center, YouTube, pop-ups, cookies and more. Users may have more freedom of choice in the devices they use for work than in years past, but corporate I.T. is now adapting so it can deliver the same level of protection it once did in the BES/BlackBerry era…or, as an end user might tell you – the same level of lockdown. (What, no YouTube at work? No fair.)

1) No security plan is foolproof. Comforting, isn't it? But it's true--there is no such thing as 100% secure, and I've yet to encounter a security pro who would argue otherwise. (Some governments in the Middle East would likely agree now, too.) That's not an excuse to do nothing. When online crooks target SMBs, either via targeted attacks or indiscriminate malware, they usually do so for two reasons: SMBs have more money than the average individual, and they have less security in place than large enterprises. That can make them easy, profitable targets. The SMB's job: don't be an easy mark. Practice good basic security at bare minimum. If time and money are key challenges, consider a risk-management approach--more on that below in number five.

2) You might not know it if you're infected. Flame's just now coming to light, but it has existed since 2010--and possibly as far back as 2007. Even if you've got strong security controls in place, you might not necessarily know if you've been infected by malware or other means. "Most malware is written to be very stealth and not let you know that it's on the machine, so what Flame does is very typical," Haley said. Robust, current security technology is a good first step toward minimizing the chance of undetected breaches--the straightforward anti-virus programs of yore aren't likely to cut it. Haley also advises SMBs take steps to eliminate spam in their corporate email accounts; the bane of inboxes continues to be a favorite delivery method for malware makers. Expect social media to continue to grow as a malware vector, too. Haley thinks SMBs need to be thinking about social risk and actively monitoring their accounts for unusual activity.

3) Attacks are increasingly sophisticated. The complexity of today's security threats almost makes you long for the good old days of the Wazzu virus. Flame appears to have reset the bar. For SMBs, it's a reminder that a set-it-and-forget-it security plan is a recipe for failure. What worked in 2010 probably won't pass muster in 2012. "You really need to review everything [periodically]," Haley said. That's important even if you outsource security to a consultant or other vendor. If time is an issue, an annual review is better than none at all. Depending on how much a particular company invests in security--or doesn't--it might want to consider more frequent checks on its technologies and processes to ensure it's keeping up with the times.

4) Reputation harm can be expensive. The fallout from the Flame revelation is just getting started, but it's safe to say this is a public embarrassment for the affected governments. For SMBs, it's a reminder that security breaches don't necessarily need to hit your bank account to be costly. A website that gets co-opted into a malware host, for example--such compromised sites are at an all-time high, according to Symantec's most recent annual security report--could have a difficult time earning back the trust of its customers and other visitors. Likewise, data theft can be both embarrassing and expensive.

"It's bad enough if you get your money or your customer list or some sort of intellectual property stolen," Haley said. "But also the damage of the publicity from it could be really crippling to a business. Some people may be reluctant to do business with you if they think that you can't keep your information secure."

5) Prioritize your most important assets. A sound strategy for some SMBs is simply to not try to protect everything. Rather, identify your most valuable assets--banking credentials and other financial information, customer databases, and intellectual property, to name a few examples--and focus your efforts there. That can help resource-strapped organizations minimize their vulnerabilities in a practical manner rather than waving a white flag of surrender.
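A risk-management approach can be as simple as scoring each asset for value and exposure, then working down the ranked list. A toy sketch with made-up assets and scores:

```python
# Toy risk prioritization: rank assets by value x exposure (scores 1-5).
# Asset names and scores below are hypothetical illustrations.
assets = [
    {"name": "banking credentials", "value": 5, "exposure": 3},
    {"name": "customer database",   "value": 4, "exposure": 4},
    {"name": "marketing site",      "value": 2, "exposure": 5},
]

def risk(asset):
    """Crude risk score: how valuable the asset is times how exposed it is."""
    return asset["value"] * asset["exposure"]

for a in sorted(assets, key=risk, reverse=True):
    print(f"{a['name']}: {risk(a)}")
```

Even a crude ranking like this gives a resource-strapped shop a defensible order of work, instead of spreading effort evenly across everything.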

Thursday, May 31, 2012

Security is one of (if not THE) top concerns companies and users have with cloud computing. The issue of cloud security, however, is much more complex than simply “is the cloud secure or not”. A cloud-based application can be hosted in a secure environment, with properly encrypted data, and an attacker can still get access to your information through social engineering. On the other hand, you can have the most secure password policies in the world, but if the hosting environment gets hacked, you are still going to lose your data.

Any proper solution to today’s cloud security issues must take into account the three sides of the problem: technology, processes, and responsibility. Another important factor is that the details and relative importance of each of these change according to where in the cloud stack we are. Building secure cloud software is very different from securing a cloud platform, which is different again from securing infrastructure.

Technology

The first step is to employ the proper technology to secure applications and data. “Proper technology” varies widely depending on what layer of the cloud we are talking about. For cloud applications, security can be as simple as deploying proper security certificates and encryption. All sensitive information needs to be properly encrypted, so that even if an attacker gains access to your systems, any data that gets stolen will still be unreadable without the keys. And it’s not enough to simply encrypt passwords: if you know that people commonly use their birth dates as passwords, encrypt those as well. As much as possible, technology should protect users from themselves without inconveniencing them.
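One caveat on encrypting passwords: in practice, passwords are usually protected with a salted one-way key-derivation function rather than reversible encryption, so a stolen credential database cannot be decrypted at all. A minimal sketch using only Python's standard library:

```python
import hashlib
import hmac
import os

# Sketch of salted password hashing with PBKDF2 from the stdlib.
# The iteration count is illustrative; pick it per current guidance.
ITERATIONS = 200_000

def hash_password(password: str) -> tuple:
    """Return (salt, digest) for storage; the password itself is never stored."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    """Recompute the digest and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("password", salt, digest))                      # False
```

The constant-time comparison (`hmac.compare_digest`) avoids leaking information through timing differences during verification.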

A very interesting solution in this space is Porticor’s Virtual Private Data. It’s basically an encryption layer that sits transparently on top of any cloud data store, performing dynamic data encryption/decryption as data gets accessed. I recommend that anyone interested in securing cloud applications take a look at their solution.

On the lower layers of the cloud stack, security is much the same as it was before the cloud. Cloud platforms need to be secured just as operating systems are, preventing malicious code from taking over other execution sessions or stealing data, and so on. In the infrastructure layer, security is both about maintaining a secure virtualization environment and about physical security. Fortunately, most top-tier cloud infrastructure providers are already very security minded, reducing risks on this side.

Process

All the technology in the world can’t save you if an attacker can call your receptionist and get her to install malware on your corporate network using her network administrator password. This is as true for the cloud as it is for private networks, and while something like this probably wouldn’t happen at a large enterprise, there is a surprisingly large number of small- and medium-size businesses where it just might.

If a company deploys a Windows cloud server from Rackspace, for instance, it will come with a pretty complex password, automatic updates enabled, the firewall activated, and so on. Many times, though, the first step people take is to change the password to something easier to remember – usually “password”, or “Pass1234” because a secure password must always include capital letters and numbers – and create an unprotected FTP tunnel to that server, “just to copy a few things”. What started as a reasonably secure server is now a security breach waiting to happen. It’s not enough to have the proper security tools. Companies need to build processes that actually put those tools to use.
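A process-level fix for the "Pass1234" problem is to reject minimally compliant but guessable passwords at the point of change. A toy Python check, where the common-password list and length threshold are illustrative:

```python
import re

# Toy password-policy check. The common-password list and the 12-character
# minimum are illustrative assumptions; real deployments use large breach
# corpora and current published guidance.
COMMON = {"password", "pass1234", "letmein", "qwerty123"}

def acceptable(pw: str) -> bool:
    """Reject short or well-known passwords, then require mixed classes."""
    if len(pw) < 12 or pw.lower() in COMMON:
        return False
    # Character-class rules alone are weak; length and non-guessability
    # matter more, which is why the checks above come first.
    return bool(re.search(r"[A-Z]", pw)
                and re.search(r"[a-z]", pw)
                and re.search(r"\d", pw))

print(acceptable("Pass1234"))           # False: short and on the common list
print(acceptable("tr4vel-Mug-cactus"))  # True
```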

Companies also underestimate the power of having proper information security policies communicated to all employees. When everyone in the company is security conscious, proper security comes much easier. The process side of security doesn’t start with technical processes, but with people, so proper and constant communication is fundamental.

Responsibility

So far, the two aspects we explored are pretty standard. While cloud applications need to be much more security conscious than traditional in-house applications, the technology needed to deploy the extra security is pretty standard. The same thing goes for securing cloud servers. The greatest differences between cloud security and traditional security lie in the matter of responsibility.

When a company deploys traditional software, IT knows its responsibilities. The software is inside the data centers it operates and controls, and anything that happens – data being stolen, servers being hacked, and so on – is their responsibility. Since IT has full control over the environment, they are comfortable with taking on the burdens that come with this control.

When things are moved to the cloud, however, IT departments lose control over the environment. It is understandable, then, that they are unwilling to take responsibility for problems that might happen. Having clearly separated responsibilities helps: hosting providers need to ensure the security of the underlying platform (virtualization layer, physical security, and so on). The rest would fall to the customers. But it is not enough. Providers need to offer guarantees in case something happens, and understand where internal IT departments are coming from, to improve relations and reduce their concerns.

All together

These three perspectives need to be taken into account together, or we run the risk of creating an even more complex environment than what already exists. In some ways, the cloud has the potential to make things more secure, by providing incentives or automating the management of common security tasks that many small businesses forget about. On the other hand, the concentration of data in the hands of a few service providers can make for very attractive targets, increasing the responsibility of these companies. No technology, process, or contract can, alone, remove the security concerns over the cloud; and everyone that has concerns about the cloud should look at the whole security package, and not technology or processes alone.

Known by the names Flame, Flamer, and sKyWIper, the malware is significantly more complex than either Stuxnet or Duqu -- and it appears to be targeting the same part of the world, namely the Middle East.

Preliminary reports from various security researchers indicate that Flame likely is a cyberwarfare weapon designed by a nation-state to conduct highly targeted espionage. Using a modular architecture, the malware is capable of performing a wide variety of malicious functions -- including spying on users' keystrokes, documents, and spoken conversations.

Vikram Thakur, principal research manager at Symantec Security Response, told eSecurity Planet that his firm was tipped off to the existence of Flamer by Hungarian research group CrySys (Laboratory of Cryptography and System Security). As it turned out, Symantec already had the Flamer malware (known to Symantec as W32.Flamer) in their database as it had been detected using a generic anti-virus signature. "Our telemetry tracked it back at least two years," Thakur said. "We're still digging in to see if similar files existed even prior to 2010."

Dave Marcus, Director of Security Research for McAfee Labs, told eSecurity Planet that Flamer shows the characteristics of a targeted attack.

"With targeted attacks like Flamer, they are by nature not prevalent and not spreading out in the field," Marcus said. "It's not spreading like spam, it's very targeted, so we've only seen a handful of detections globally."

While the bulk of all infections are in the Middle East, Marcus noted that he has seen command-and-control activity in other areas of the world. Generally speaking, malware command and control servers are rarely located in the same geographical region where the malware outbreaks are occurring, Marcus noted.

The indication that Flamer may have escaped detection for several years is a cause for concern for many security experts.

"To me, the idea that this might have been around for some years is the most alarming aspect of the whole thing," Roger Thompson, chief emerging threats researcher at ICSA Labs, told eSecurity Planet. "The worst hack is the one you don't know about. In the fullness of time, it may turn out that this is just a honking great banking Trojan, but it's incredibly dangerous to have any malicious code running around in your system, because it's no longer your system -- it's theirs."

Complex and Scalable Code

Although it is still early days in the full analysis of Flamer, one thing is clear -- the codebase is massive.

"Flamer is the largest piece of malware that we've ever analyzed," said Symantec's Thakur. "It could take weeks if not months to actually go through the whole thing."

McAfee's Marcus noted that most of the malware he encounters is in the 1 MB to 3 MB range, whereas Flamer is 30 MB or more.

"You're literally talking about an order of complexity that is far greater than anything we have run into in a while," Marcus said.

Flamer has an architecture that implies the original design intent was to ensure modular scalability, noted Thakur: "They used a lot of different types of encryption and coding techniques and they also have a local database built in."

With its local database, Flamer could potentially store information taken from devices not connected to the Internet.

"If the worm is able to make it onto a device that is not on the Internet, it can store all the data in the database which can then be transferred to a portable device and then moved off to a command and control server at some point in the future," Thakur said.

Portions of Flamer are written in the open-source Lua programming language, which Thakur notes is interesting in that Lua is very portable and could potentially run on a mobile phone. Flamer also uses SSH for secure communications with its command-and-control infrastructure.

Thakur noted that Symantec's research team is trying to trace Flamer back to its origin, but cautioned that it will be a long analytical process. Symantec researchers will dig through all of their databases in an attempt to find any piece of evidence that may be linked to any of the threats exposed by Flamer.

"It's a very difficult job and it's not an exact science," Thakur said.

Evaluating the Enterprise Risk

While Flamer is an immense piece of malware, the risk to most enterprise organizations appears to be moderate. McAfee's Marcus stressed that chances of a U.S.-based enterprise IT shop encountering Flamer aren't all that high.

"In an attack that is as specific to a geography as Flamer looks to be, there is very little chance of this particular variant hitting a wide number of people," Marcus said.

There is however a more sinister side effect that may come as a result of the discovery of Flamer. Marcus stressed that one thing malware writers do exceptionally well is that they learn from other malware writers.

"We can expect in the future for someone to learn from Flamer and use it in a future malware variant," Marcus said.

On a positive note, security researchers for the "good guys" can also learn from Flamer to help protect enterprises and consumers from similar and future threats.

"You take the things the enemy gives you and you learn what you can," Marcus said. "That's not to say that malware is ever a good thing, but we try and learn from it."