It has no teeth. It is just more policy with no accountability or meaningful penalties for non-compliance

It consists of paper audits -- more of the same useless audits

The auditors would not be CyberSecurity experts. This last one is insane.

This nation's critical infrastructure (power grid, water supply, oil & gas refineries, etc.) is run and managed by IT systems and software applications. These systems and applications were not built with security in mind and can only be tested and measured by IT security tools in the hands of experts. Beyond our critical infrastructure, we also have thousands of IT systems and software applications managing sensitive data -- military secrets, privacy information, our wired and wireless communication systems, and more. Many of these systems are built and managed by large government system integrators.

Until we have IT-based policy, coupled with IT-based controls, automated monitoring, and real penalties for non-compliance (which means financial) we will continue to fail when it comes to CyberSecurity protection. And we are failing, make no mistake about that. 2011 had more publicly-reported data breaches than any year prior. Having spent 10 years working for various government agencies before moving to the private sector, I can tell you that the only difference between 2011 and prior years is the "public" part of those breaches -- they've been happening for years to government agencies, systems integrators, and the private sector, but most were not reported publicly.

Representative Jim Langevin of Rhode Island introduced a cybersecurity bill to Congress last March. There are four major features I like about this bill:

It would give DHS the authority to compel private firms deemed part of the critical infrastructure to comply with federal security standards

The standards are based on the recommendations of cyber experts with first-hand knowledge of the reality of the challenges facing each industry

It carries financial penalties for sub-standard audit results. This includes ALL organizations in-scope, whether they are federal agencies, systems integrators, or private sector. If you're part of what is deemed "critical infrastructure" you must comply

And, most importantly, imo, the mandated audits include IT security products that will test and monitor the systems and applications for security holes

Unfortunately for Rep. Langevin's bill, lobbying and political pressures have stalled it -- probably because it includes measurable accountability and, for the first time in our history, insightful, practical policy for CyberSecurity defense.

06/17/2011

As the uptick in breaches continues to dominate headlines and increase the general paranoia around what might happen, there’s often a story lost in the shuffle. There seldom seems to be a bulletproof method to stop the invasive tactics of today’s hackers. That’s because, really, there isn’t.

You could spend all day trying to determine the multitude of issues that could lead to any sort of data breach or exploit on a specific system. Or, you can throw a technology solution at it – some type of IPS/IDS, a DLP or just try to leverage as many vulnerability scanners as possible.

The bottom line is that breaches will continue to happen because criminals profit significantly from being able to sell the coveted sensitive information they set out to steal. On the flip side, we’ve also seen advanced attacks like Stuxnet designed specifically to infiltrate SCADA-type systems (really a gigantic piece of dangerous malware). Therefore, there isn’t a one-size-fits-all approach to every single exploit.

While costs are a major factor, the security of critical systems can often trump the financial costs associated with breaches. The approach that so many organizations often take has been reactive in nature, and usually results in the decision to deploy a tool – a scanner, a DLP, an IPS, a firewall, etc. Underestimating the importance of deploying securely developed and configured applications in a production environment is perhaps one of the bigger oversights we are seeing.

For example, finding malicious code or even a configuration issue during a code review on a business application is like finding a needle in a haystack – a process many companies don’t want to adopt, especially if they believe there’s an automated tool they can quickly integrate that will find every vulnerability. Yet that needle could represent a major vulnerability that could have been avoided had the application been developed in a secure environment before being rolled out into production.

Don’t Play the Blame Game

When it comes to a breach or an exploit, it’s not about who did it or why they did it. It’s about prevention and identifying how it happened so it doesn’t happen again. In the case of the Sony Playstation breach, the company shifted from a closed, embedded systems provider to a Web and Internet services content provider.

The flaw was that the team was not properly educated on the differences. This also led to a failure to see how the attack surface had expanded and in what ways the gaming applications were exposed. The right amount of team education through a specific training curriculum could have been a cost-effective and highly efficient way to avoid a breach like this.

It’s easy to throw another firewall or a DLP solution at the problem. It’s a reactive measure designed to satisfy some immediate needs – most times, after a breach. However, this does nothing to prevent a breach from occurring. Furthermore, it doesn’t map to a defense-in-depth strategy, which should include:

Training all technical personnel on the principles, both fundamental and advanced, of secure software application development.

Ensuring that all developers have some type of development bible or reference guide where they can leverage knowledge that will help them ultimately write secure code.

The right mix of people, process and technology – again, you don’t need every security solution on the planet to be secure, and you need employees to adhere to secure practices in their respective roles.

An effective means of assessment – identifying gaps in the SDLC and understanding where vulnerabilities exist and remediating so they aren’t an ongoing issue.
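To make the secure-coding training point above concrete, here is a minimal sketch of the kind of fundamental developers are taught: SQL injection and its fix. The table, field names, and payload are hypothetical, invented purely for illustration; only the contrast between string concatenation and parameterized queries is the point.

```python
import sqlite3

# Illustrative in-memory database with one hypothetical table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_vulnerable(name):
    # ANTI-PATTERN: concatenating input lets crafted strings rewrite the query.
    return conn.execute(
        "SELECT role FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(name):
    # Parameterized query: the driver treats the input strictly as data.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

# A classic injection payload returns every row from the vulnerable
# version, but nothing from the parameterized one.
payload = "' OR '1'='1"
print(find_user_vulnerable(payload))  # leaks all rows
print(find_user_safe(payload))        # []
```

A scanner might or might not flag this; a developer who has been trained on it won’t write it in the first place – which is the argument for training over tools.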

A proactive security program will invest the time it takes to ensure that applications are developed and configured securely prior to being put into production. This proactive model requires an overhaul of priorities for what an organization’s developers are working on, which means training personnel, consistently testing software applications and having a guidance system that provides a knowledgebase of vulnerabilities.

And oh by the way, training developers, designers, architects and even project managers isn’t just something you should do – it’s mandated by industry regulations and standards like PCI DSS, HIPAA, NIST guidance and many others. (So you have to do it, depending on which regulations are relevant to your business.) Again, it may take some digging in that haystack, but when you find that ‘needle,’ the effort will pay off.

04/08/2011

This past week has yielded a veritable treasure trove of head-shaking security stories, all related to my favorite security soft spot – people. The shimmer from our technological advances blinds us to the damage people can do – and we remain so easily fooled:

Wired reported that Albert Gonzalez, the record-setting hacker of Heartland Payment Systems, TJX and a range of other companies said the Secret Service (SS) asked him to do it. The government admitted using Gonzalez to gather intelligence and help them seek out international cyber criminals but says they didn’t ask him to commit any crimes. Uh, yeah… ok.

Storefront Backtalk and others reported on a Gucci engineer who was fired for "abusing his employee discount," but then really got even (and then some) by creating a fictitious employee account (with admin rights!) and then using that account to delete a series of virtual servers, shut down a storage area network (SAN), and delete a bunch of corporate mailboxes… allegedly.

TechAmerica wrote about HP suing a former executive who took a job at Oracle. Apparently, he downloaded and stole hundreds of files and thousands of emails containing trade secrets before quitting.

You might ask, “How can a company as advanced and large as HP not have protections on their digital trade secrets?” It’s not like DLP (data leak prevention) solutions don’t exist. And how about Gucci? I guess this is a double whammy around policy and people, who are so often intertwined. Was there no policy flag or checkpoint in place to verify that this newly-created employee was authorized with such privileges that he could delete entire virtual servers and mailboxes? Nobody bothered to check that this was a legitimate employee? Worst of all, this non-existent employee’s accounts were created by a fired network engineer! And then there’s Mr. Gonzalez (hacking community) and the SS (intel community) – which group do you trust less to be honest with the public? Both communities have long engaged ethically-questionable people to do their bidding. If it’s true that the SS hired him to hack, shame on him for not getting protection for himself in advance. You have to wonder what else he hacked into to merit an actual arrest.

And here we are in 2011, putting our lives on display with Facebook, Twitter, LinkedIn, Yammer, et al, broadcasting our whereabouts on vacation (or, more specifically, that we’re not home for an extended period), meeting up with strangers who have similar tastes, and making our personal details and history available for anyone to view. It’s not always technology that will get us into security trouble… it’s the people.

03/15/2011

Recently, Ellen Messmer wrote a story on a Cyber Security early warning system in the state of Washington, USA. One of the most promising pieces of this system is the process and information sharing that’s being folded into it. The University of Washington, Starbucks, the City of Seattle, Amazon.com, the Port of Tacoma, and other groups are setting up an information sharing system that will help one learn from the other. For example, if Amazon.com experiences a botnet attack, it will share the profile and details of that attack with the city of Seattle so it can learn, prepare, and hopefully defend itself against a similar (or the very same) attack. The system, called PRISEM (Public Regional Information Security Event Management), is designed to offer an online early warning to all its members. There are several analogies to this system in place today:

The tsunami early warning system put in place after the disastrous Indian Ocean tsunami in December 2004

The Las Vegas cheater profiling system which shares behavior, personal, and photographic info of known scammers amongst numerous casinos

The information sharing strategy of ODNI (Office of the Director of National Intelligence) in America, which began operations in April 2005 after the need to share information among the intel communities became painfully clear in the aftermath of the 9/11 attacks.

So praises all around for PRISEM and the Washington organizations committed to sharing security information. Unfortunately, the system they’re putting in place will not detect or prevent the nastiest and most common attacks that occur – those at the software application layer. PRISEM talks about the importance of protecting SCADA systems and other critical infrastructure; I couldn’t agree more. However, standing up a Security Information and Event Management (SIEM) and information sharing system isn’t enough. The majority of application layer attacks will still be successful… and this will be the case until those software systems are either updated to modern secure coding standards, or protected with application layer defenses (similar to web application firewalls for web apps). As an industry, we’ve still got some innovation to create in the form of self-defending application systems. The concepts are in place, and this approach would be a lot less expensive than re-architecting and re-coding the thousands of legacy applications that support our critical infrastructure.
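One building block of such application layer defenses is strict allow-list validation wrapped around a legacy system’s inputs. The sketch below is hypothetical – the field names and patterns are invented for illustration, not drawn from any real SCADA product – but it shows the shape of the idea: reject anything that doesn’t match an explicit rule before it ever reaches legacy code.

```python
import re

# Hypothetical allow-list: one rule per input field the legacy
# application accepts; anything else is rejected outright.
FIELD_RULES = {
    "device_id": re.compile(r"[A-Z0-9]{4,12}"),
    "setpoint":  re.compile(r"-?\d{1,5}(\.\d{1,3})?"),
}

def validate(field, value):
    """Return True only if the value fully matches the field's rule."""
    rule = FIELD_RULES.get(field)
    return bool(rule and rule.fullmatch(value))

print(validate("device_id", "PUMP07"))           # True
print(validate("setpoint", "12.5"))              # True
print(validate("setpoint", "12.5; DROP TABLE"))  # False
```

A front-end filter like this doesn’t fix the insecure code behind it, but it shrinks the attack surface dramatically at a fraction of the cost of a rewrite – which is exactly the trade-off argued above.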

02/25/2011

At the RSA Conference last week, I had the chance to sit down with four executive leaders from (ISC)2. During that meeting, I learned that, according to (ISC)2 data, the CSSLP (Certified Secure Software Lifecycle Professional) certification is being adopted at an even quicker pace than the wildly successful and pervasive CISSP was at the same point in its lifecycle. (ISC)2 informed me that the CISSP took “more than a decade” to reach the critical mass where it started to be universally recognized and adopted as an industry-wide standard.

CSSLP still suffers from a severe awareness challenge (I mentioned it to one of the most prominent security analysts in the industry and his response was, “What is that? Never heard of it.”); however, its objective, according to its shepherds, is to provide a baseline certification for software security professionals. I think it has a decent chance of succeeding given that objective. I’ve worked in the software industry for nearly 20 years and one thing I learned early on is that not all software professionals are the same – in fact, there are stark differences in skill set, responsibility, and domain knowledge needed between the various roles that contribute to software development and deployment, including but not limited to: Architect, Developer, QA/QE, Business Analyst, etc. A certification program that has contextual significance to each of these roles specifically would be a welcome change.

In similar conversations with the Microsoft SDL team and OWASP Leaders, there seems to be universal agreement that a role-based certification for software security would be a good thing. Sure, there are differences of opinion in terms of explicit endorsement/sponsorship as well as how to measure and set an acceptable quality bar for each certification/qualification. But those are interesting problems to solve and will contribute to furthering the software “development” profession (development here encompasses all of the roles mentioned above.)

Perhaps most promising of all, the conversations between leaders at Security Innovation (myself included) and the executives responsible for (ISC)2, Microsoft SDL, and OWASP have been productive. Further, this is a path forward that didn’t seem to exist a mere three months ago. Where the path will take us remains to be seen; but watch this space – it will prove to be interesting as 2011 evolves.

We recommend that if you use Facebook you follow the instructions in the blog to set the option to turn it on when it becomes available to you. The reason we are particularly gratified is that the Firesheep tool our consultants played a part in putting together was one of the reasons the project got the attention it deserved, according to a recent article in SC Magazine.

This was exactly why Firesheep was created -- to bring attention to an issue that was well known by security professionals, but not more generally known by consumers of web commerce and social media content. We should not forget that many other sites still have the same problem. Before you use a site or application that contains personal information, be sure your entire session is encrypted if the option exists.
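The server-side fix Firesheep highlighted is straightforward: session cookies must never travel over unencrypted HTTP. A minimal sketch using Python’s standard library (the cookie name and value here are placeholders, not Facebook’s actual implementation) shows the two flags that matter:

```python
from http.cookies import SimpleCookie

# Sketch of the fix: mark the session cookie Secure so the browser only
# sends it over HTTPS, and HttpOnly so page scripts cannot read it.
cookie = SimpleCookie()
cookie["session_id"] = "abc123"          # placeholder value
cookie["session_id"]["secure"] = True    # HTTPS-only transmission
cookie["session_id"]["httponly"] = True  # hidden from JavaScript

header = cookie.output(header="Set-Cookie:")
print(header)
```

Without the `Secure` flag, any open Wi-Fi eavesdropper running Firesheep could simply copy the cookie off the wire and become you; with it, the sidejacking attack has nothing to capture.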

What is particularly illustrative about this case is the amount of time it took for Facebook to get to the point of announcing it, and it is still not rolled out. Firesheep was made available over four months ago, and Facebook said at the time they were already looking at the issue. If a company with the resources and visibility of Facebook can have its highest-profile page hacked and not deal with one of the most basic of security issues for months, what chance does everybody else have?

With some education, improvements in application development lifecycle processes, and the right informational tools, you can improve those chances greatly.

This case illustrates what we at Security Innovation do every day. The cool hacks and attack techniques might get the attention, but the real improvements in security will come from the detailed technical work that application developers need to do as part of their day-to-day responsibilities.

By working with experts in the field and using the learning that’s available, this work does not have to increase the cost or time it takes to develop applications. Fixing it after the fact will definitely cost. The team at Facebook just did some of that work. If you don’t have the development resources of Facebook (and who does?), we can help you do the same.

12/21/2010

Last week, I grumbled over the fact that a student can graduate with a Computer Science or Software Engineering degree having had zero exposure to software security. Huh? Doesn’t our society and business literally run on software?

We have plenty of decent (not fantastic, but decent) standards and guidelines on which to build a certification program that can plug the application security hole at Universities; in fact, most of the standards (from NIST, DHS, MS SDL, OWASP, etc.) could lend themselves to several, based on either role or “project” (to borrow the OWASP lingo.)

We need to accept that (ISC)2’s attempt at replicating CISSP for software with CSSLP is a failure. The test is a joke, and the training/prep content overlaps with CISSP to a frightening degree, watering down the value of CSSLP and, frankly, endangering the sanctity of CISSP in the process.

We need an organization to step up and sponsor an AppSec Cert program. Successful certification programs need three critical elements:

A sponsor that has market reach/penetration;

A body of content against which to construct training and exams; and,

The infrastructure from which to deliver and support the program.

OWASP could sponsor this. So could Microsoft or IBM – they own the lion’s share of the software development market (and don’t they each have a bunch of cert programs already?). An independent org like (ISC)2 won’t be successful without a sponsor, imo. Finally, the infrastructure for such a program is key. What the PCI Security Standards Council has done with their QSA and PA-QSA audit certifications is a great model – and now they are moving the programs to eLearning for scale and efficiency. Amazingly, this group has got it right and is a model for us to follow in the AppSec world at large.

Who will be the organization to take a risk (albeit small) to sponsor a program that could have global impact on the single biggest problem area facing IT Security – the software level?

12/17/2010

For years, everyone from Mary Ann Davidson (CSO of Oracle) to OWASP to DHS (in their “Build Security In” initiative with SEI) has been bemoaning the fact that our universities do not adequately train software engineering and computer science students on secure coding practices (and in most cases don't train them at all.) Even I have written and presented on the topic, calling for better training and awareness and complaining that industry shouldn’t have to bear the burden of educating software engineers on security. Well, I was wrong.

There are a few universities that are now starting to include security courses in their degree or certificate programs; however, that will take a very long time to propagate throughout industry, and the penetration of such courses is still very small. Mary Ann Davidson even offered preferential hiring treatment at Oracle to schools that demonstrated security training as part of their Computer Science and Software Engineering programs -- and got a pathetically weak response. Go figure.

I have never been a big fan of personal certifications; however, it is a model that works. Individuals like to own them, and companies like to hire employees who possess them. Cisco’s certifications for networking professionals and the now seemingly omnipresent CISSP ensure at least a minimum level of expertise in security disciplines. However, we still lack a practical and meaningful certification for anything related to application security.

Let me chew on this a bit and get back to you when I have more concrete thoughts on remediation.