Wednesday, December 31, 2008

It's that time of year again when we all make predictions. Here, in no particular order, are my top ten:

1. The economy will continue its descent into the *#*$% and non-specialized security jobs will continue to be squeezed. Well, you certainly don't need to be Alan Greenspan to know the economy is headed down, way down (and re-reading that sentence, AG may be a bad example). Security won't be hit as badly as most industries - after all, one of the main reasons we are in this bind is that audits and controls failed. A great President once said that a rising tide lifts all boats. In 2009 the tide will be increased regulation brought on by failed financial controls, and the information security boat will definitely get lifted by it. But security middle management, non-expert specialists, and anyone who does not directly contribute to the bottom line had better be polishing their resumes. And by bottom line, I mean fulfilling an immediate required organizational function - not ROI based on fluffy loss prevention. Hey, it ain't pretty, but welcome to the Recession (and unlike the last time around, this time it's got a capital R).

And speaking of increased regulation, that brings us to number two-

2. There will be...increased regulation. This has been in the pipeline for a long time. Some of these regulations will be fairly specific. Sticking with tide metaphors, when the tide falls you see who has been swimming naked. Many companies will have products that they won't be able to bring into compliance overnight. And at a more glacial pace, EU regulators will move towards greater regulation and breach notification-ish laws in the coming year. More on that in this blog in 2009...

3. Cybercrime will become more criminalized. The increasingly stiff penalties for cybercrime will do more than all the web application firewalls in the world to secure our networks. Despite all the scary statistics the vendors put out, the rewards for cybercriminals are not that great. It is the relatively light punishments that make skimming a few credit card numbers a better risk/reward trade-off than holding up a 7-11. This will have major consequences for the way networks need to be secured.

Even more important than the evolution of laws is the public's perception of the law. The Sarah Palin Yahoo account incident was a turning point for public opinion. In case you were sleeping through the election cycle (or have tried to suppress it deep within your subconscious), Sarah Palin's Yahoo mail account was "hacked" by a 20-year-old who correctly guessed the answer to her security question (Where did you meet your future husband? Wasilla High). The fact that the University of Tennessee student was actually arrested and subsequently indicted dispelled any notion that just because a door is open on the Internet you are allowed to walk in. The very wide publicity this and similar incidents have received will undoubtedly make many script kiddies think twice before attempting a port scan.

4. The so-called vulnerability circus will continue. Applications are large and complex and have vulnerabilities that smart people can find. I seem to remember seeing a quote by Marcus Ranum that things will continue to be just secure enough to be functional, and not any more. Especially with Web 2.0 (or whatever it will be called next year) this isn't going to stop. There are very few Facebook or MySpace or iPhone users who would opt for a dumbed down but locked down service. If it's functionality vs. extra security, people will opt for functionality.

5. People will finally start realizing that not all crime that involves a computer is cybercrime. Just because you forked over your life savings to some "prince" in Nigeria does not make you the victim of an elaborate cybercrime. It makes you the (pretty darn naive) victim of fraud. I boldly predict that people will start to demand that the word cybercrime be reserved for crimes that were actually intrinsically tied to the use of a computer (OK, not "demand" in the sense of riots in the street, but you get what I mean). All these crazy statistics about hundreds of billions of dollars in cyberlosses are starting to wear people down. Which brings me to...

6. People will get increasingly sick of the FUD (Fear, Uncertainty and Doubt). Fear sells products and writes a lot of paychecks, but barring truly shocking security breaches, people's attention will shift elsewhere. Getting a computer virus is not the end of the world, despite what anti-virus vendors will try to sell you. The masses have been drinking the Kool-Aid for a long time, and the hangover is starting to set in. There are many risks in life completely unrelated to information security, and for the vast majority of people information security ranks low on their list of priorities. Computer security is an organizational and corporate issue, and the job of security people is to manage and take care of the problem so that everyone else can get on with the business of running their business. Security is a tax, and our job is to best manage that tax. Which now brings us to...

7. We will get better metrics for how much the security tax should cost, especially in software security. OK, this fits more in the hope than prediction category. Network security has matured to the point where there is at least a very general consensus on best practices and associated costs. Increased regulation (see #2) will drive the formation of a similar consensus in the software and application security sphere. Call it the industry maturing or the inevitable commoditization of security, but it's going to come around eventually.

8. The curtain will close on the security generalist/superhero. Did any of you see Justin (I-am-a-Mac) Long in Die Hard 4? This guy could walk into a server room for the first time and a minute later he had hacked into a traffic light 20 miles away. The popular myth of the hacker superhero, who can break any system, is so deeply embedded in our collective minds that even we information security professionals believe it. Penetration testers are able to advertise themselves as all-round pen testers who can test any application.

The truth is much more mundane. An Oracle security expert is lucky if they really know something about MS SQL, and a Java security expert probably won't be able to help you with your .Net application. That's not to say that they know nothing about other platforms; it's just that real expertise - the kind of expertise you want for a code review - is siloed. There is no such thing as one guy who knows everything about everything. Renaissance Man only existed in the Renaissance, and Renaissance Computer Man only existed in the 80s and early 90s. Not in 2009.

9. Barack Obama will introduce consistency into US cybersecurity policy. OK, this too is more of a hope than an actual prediction. But wouldn't it be great if very-soon-to-be President Obama were to finally empower a cybersecurity czar to stick around for a while and bring about real change?

10. Complex stand-alone security products will continue to decline. The big vendors continue to build audit and other security capabilities into their products. A good built-in capability will always win over a standalone product, especially in tough economic times. And who wants to manage yet another vendor relationship? Niche products that address a real market need and do their job well will continue to thrive, but others will either be acquired or fade away.

And in case you are not totally convinced that these 10 predictions contain the entire security landscape for 2009, you may want to check out some other thoughts at SANS.

Tuesday, December 30, 2008

eWeek reported the other day that for the first time laptop sales have outpaced desktop sales. Good news for laptop manufacturers, chiropractors, and people who steal laptops. After all, it's just not as easy to walk past the receptionist balancing a desktop on your shoulder.

More stolen laptops also means more of those pesky letters telling you that someone, somewhere might have your personal data. Data breach notification laws in 44 states now require that individuals be notified when their personal data has been breached. Lost or stolen laptops account for a large share of these notifications.

There is no real evidence that these laws work to reduce identity theft (click here for a good paper from the WEIS 2008 conference) . But one thing's for sure - data breach laws are one of the leading drivers for companies to hire CISOs to keep them out of the newspapers. This is especially true in loosely regulated industries where companies would otherwise see little reason to bring a CISO on board. In the UK (which does not have a private sector data breach notification law but where government entities are required to report loss) the spate of lost laptops, memory sticks, and CDs with sensitive data has become a major political issue.

All this attention motivates some companies to focus on avoiding a public breach rather than actually securing their environment. This is a fundamental difference, since most data breaches cannot be traced back to their source. If someone just took out a loan using your name and social security number, you have very little way of knowing how they got that information. This is the reason that lost laptops have been such a major trigger for breach notifications. A lost laptop is an obvious, incontrovertible loss of data. Unlike a suspicious event in an Apache log, a laptop theft is hard to sweep under the rug or leave unreported.

Data breach notification laws almost universally exempt encrypted data, leading many organizations to mandate laptop encryption. Is all the money, time, and effort spent on encrypting laptops worth it? Are we spending valuable resources encrypting laptops when our efforts should be directed elsewhere? The data on a lost laptop is accessible to a very small number of people who in all likelihood have no interest in it. On the other hand, the database behind a poorly secured web application is accessible to the entire world. The organizational capital that a CISO expends on forcing laptop encryption is coming from somewhere else. But from a business risk perspective encrypting laptops makes sense because the cost of not encrypting (and then having to notify if a laptop is lost) is too high.

And let's give breach notification laws their due. There is a fundamental fairness about breach notification laws - when your data is somehow lost, you get to hear about it. Sunshine is the best disinfectant, and they force companies to 'fess up when they have messed up. The problem with the laws is that they are still open to an enormous amount of interpretation. The press has focused on lost laptops while ignoring much bigger risks. When someone is on vacation and accesses a sensitive company web application from a kiosk computer in the hotel lobby, you could argue that a greater data breach has occurred than having a laptop lost on a train. The lost laptop will almost certainly be reimaged or picked up by someone who will never access the data. The hotel computer, on the other hand, is riddled with spyware and is an obvious target for cybercriminals. And yet the lost laptop triggers a potential data breach notification, while the hotel incident does not.

The laws are starting to catch up to this reality. A new Massachusetts law goes further than most in requiring comprehensive security policies. These new regulations should be relatively painless for organizations that have an overall security narrative in place and have dedicated sufficient resources to securing their environment. Others may belatedly discover that just encrypting all those newly purchased laptops is not enough.

Saturday, December 27, 2008

Security Buddha posted a while back about the lack of "builders" in the security industry. Are there too many breakers in security - people obsessed with finding the latest vulnerabilities and finding exploits? Does it make sense to build software by having a bunch of people build something as fast as possible and then have a bunch of other people try to break it? Or are breakers unnecessary if there are enough builders doing their job properly?

I got to thinking about this after reading Software Security Top 10 Surprises, a great look at software security in 9 very large software shops. The survey showed that most of these companies have a Software Security Group composed of about 1% of the total development staff (which averaged a whopping 7,795 in the companies surveyed). I had the opportunity to talk to David Mortman (Echelon One) and a few others about this data. One percent seems surprisingly low, but for me the biggest surprise was that these SSGs act mainly as facilitators and evangelists within their companies. They usually don't perform code reviews or penetration tests.

The emphasis on evangelizing is interesting. Some CISOs act as security evangelists for the organization as a whole. But for CISOs evangelism is only effective after deeply rooted business practices have been re-engineered to reduce the risk of data breach. An effective CISO must drill deep into the organization's DNA to understand how and where data flows from point to point. Only after detailed policies and procedures have taken root can the focus shift from analysis and enforcement to evangelism. A security evangelist in the software world will also only be effective if the underlying development lifecycle enables security.

But even with a security friendly SDLC, can secure products be built without substantial audit and review? Is empowering and motivating developers enough? In the product world, we are seeing the decline of standalone security products in favor of built-in security functionality. Will we see the same thing in software development? Will every developer become a mini-security expert, encouraged and guided by security evangelists and management types?

My gut feeling is that outside of highly regulated and very large organizations, the tension between getting things built quickly and building them securely will be too great for this approach to work. Some form of detailed review outside the development team is the only way to ensure security for anything but the lowest security applications. Developers aren't going to become security experts and vice-versa anytime soon. Especially in small companies of 10, 20, or 100 developers, there is no room for evangelists. It's all hands on deck all the time. If a security code reviewer or pen tester is needed, then bring 'em on. But there's no room for someone to stand there and just tell people to do a pen test.

Over the years the gap between the builders and breakers has grown bigger. It's no secret that security is not really an accepted discipline within the development world. There are very few security talks at the leading software conferences. At a recent visit to Barnes and Noble, I flipped through the index pages of a bunch of O'Reilly titles. Security was hardly mentioned in these books, if at all. There is a security shelf, and then row after row of software titles. The two don't mix, and the Java security book was on the security shelf and not on the software shelf next to the Java programming title. Search for the words "secure" or "security" in any of the leading development blogs and you will get almost no hits.

It is in any case very encouraging that metrics are finally starting to emerge in this area. The OWASP Security Spending Benchmarks project will help us as an industry quantify the cost of security in building software. My guess is that there is a significant difference between companies with thousands of developers and those with a few dozen.

Friday, December 26, 2008

50 billion dollars lost to a guy with an accountant working out of a basement. Something doesn't make sense. The Madoff scandal is so stunning because we like to think that when something gets big enough it must have a minimum level of legitimacy. How can a huge, well-respected fund involving some of the world's wealthiest and savviest people be one big house of cards? Didn't someone bother checking?

Details are scant, and I suspect that there will be a few surprises in the coming months. Some people funneling money to Madoff must have known the consistent returns were too good to be true. They happily collected commissions and figured that as long as there was nothing overtly criminal they had no obligation to dig deeper. But what about the real out-of-pocket victims who will probably never see any real return on their money? Is there something these victims should have or could have done differently?

In information security, we constantly deal with the same issue - how do we trust our business partners? Any decent sized organization has dozens of companies processing their sensitive data. The move to SaaS and the growing acceptance of outsourcing will see the number of partners increase in the coming years. How do companies vet their business partners? Only the largest companies can afford to actually audit their partners. How does a midsize company know whether to trust another company with its critical customer data?

Something went desperately wrong with the vetting process in the case of Bernard Madoff. There are lessons not only for the financial industry but for all risk professionals:

Charismatic charlatans can win people over. Has always been true and will always be true. Nothing really to be learned here.

Everyone thinks someone else is doing the vetting. Every wealthy, famous investor Madoff pulled in made the next sales pitch that much easier. After all, wouldn't you trust your money with the person who manages Steven Spielberg's money? There is an exact parallel in the information security world. I have done security vetting on hundreds of external vendors over the years. Invariably, at some point in the conversation, I get the we-are-secure-because-big-famous-company-X-uses-us line. The implication, of course, is that they have already been vetted by the bigger, more famous company, so there is nothing to worry about.

People are satisfied with the superficial appearance that everything is OK. A number of European banks that fell victim to Madoff had stated in earlier correspondence with clients that Madoff was audited by an SEC-approved accountant. Despite the absurdity of such a vast operation being audited by a single guy who was basically working out of his basement, all the investors really wanted was to be able to mark the checkbox for auditing. In information security, consumers are very often only interested in interface security. If it looks secure - if the site is SSL encrypted and there is a security policy that says something about biometric readers at the entrance to server rooms - then everything must be fine.

Too many layers of business partners is the enemy of transparency. Some of Madoff's victims invested with him directly, but many victims had invested in funds that then invested in other funds that eventually invested in Madoff. It's hard to keep an eye on your business partners, but almost impossible to keep an eye on your partner's partners. The lesson for data security? Make sure your outsourcers are not outsourcing your work without your knowledge, and if they do - make sure to apply the same scrutiny to the new partner as the first one.

The Madoff scandal, together with the equally spectacular collapse of America's leading banks, will undoubtedly lead to tighter regulation. I am guessing that there will be very wide reaching implications for information security and risk professionals. Let's stay tuned.

Wednesday, December 24, 2008

"Don't Let Cybercriminals Turn Holiday Cheers Into Jeers". I didn't come up with this cringeworthy title myself ("cheers" and "jeers" are two words that should be used sparingly, and certainly never in the same sentence). It is the title of a report by Trend Micro warning people about the cyber dangers of the holiday season. The basic message is that with all the cards and charities and shopping going on, only extreme vigilance can prevent malware and viruses and ultimately identity theft.

I agree with the first part of this conclusion. Clicking on sketchy links left and right will get you a computer virus. Certainly Trend Micro (one of the world's largest security companies) is in a good, if not unbiased, position to gather data on this issue. But their report seems to equate viruses with identity theft. The you-might-get-a-virus-and-your-identity-might-be-stolen mantra is so universal it no longer requires explanation.

Does getting a virus really mean that your identity might get stolen? Most users have experienced a computer virus, and yet most people have never had their identity truly stolen (identity theft is another overused term that has lost any meaning - more on that later). My hunch is that most identity theft has nothing to do with computer viruses. I don't have any data to back that up, but I also haven't seen any data to the contrary. (Most reports on identity theft point to acquaintances as the main perpetrators).

How do we measure the actual harm caused by computer viruses? Many viruses these days have a purely economic motive that does not directly impact the "victim" - how many users really care if their computer is being used for click-through fraud or to raise a spammer's Google rank?

Some computer viruses are inconvenient. An infection can mean paying for tech support or harassing your geeky friends for help. Sometimes users are forced to re-image their computer and lose all their data. That is what users should really be worried about.

Identity theft is of course a real threat that users should also protect against. For example, owning a shredder is a good idea. For $40 you insure against the very small chance that you will suffer real identity theft from someone rummaging through your garbage. Let's say the likelihood of that happening over the five year lifespan of the shredder is 1%. Since real identity theft usually costs more than $4000 in lost time and money, a shredder can be considered a good investment.

I don't think the same argument can be applied to PC security products. The main reason to own an anti-virus product is to avoid the cost of having to rebuild or clean your computer later. It is still a good investment - I am willing to pay $50 a year to insure against the high likelihood of having to spend 5 hours rebuilding my computer.
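The back-of-the-envelope reasoning in the last two paragraphs is just an expected-value comparison, and it can be sketched in a few lines. The 80% infection probability and $25/hour value of time below are illustrative assumptions of mine, not figures from the post or from any report:

```python
# Expected-value sketch of the shredder and anti-virus purchases above.
# ASSUMPTIONS (not data): 80% yearly chance of a virus forcing a rebuild,
# and a user's time valued at $25/hour.

def expected_loss(probability, cost):
    """Expected cost of a risk: probability of the event times its cost."""
    return probability * cost

# Shredder: $40 up front vs. a ~1% chance over its five-year life of an
# identity theft costing $4,000+ in lost time and money.
shredder_price = 40
shredder_ev = expected_loss(0.01, 4000)

# Anti-virus: $50/year vs. a likely infection forcing a 5-hour rebuild.
av_price = 50
av_ev = expected_loss(0.8, 5 * 25)

print(f"Shredder: pay ${shredder_price}, avoid ~${shredder_ev:.0f} expected loss")
print(f"Anti-virus: pay ${av_price}, avoid ~${av_ev:.0f} expected loss")
```

Under these assumptions each purchase avoids at least as much expected loss as it costs, which is the insurance argument in both cases; change the probabilities and the conclusion can flip.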

That should be the real message of the Trend Micro report - "Don't Let Cybercriminals Turn Holiday Cheers Into New Year's Day Reimaging Your Operating System". Not very sexy, but sounds like a much better sales pitch to me.

One last thing - I tried to read the link "For tips on online safety during this holiday season". Every time I clicked the pdf link, my browser crashed. Should I try saving the file to my desktop? My guess is that one of the tips would tell me not to.

Tuesday, December 23, 2008

Security management these days involves tough choices. Any resources you devote to mitigating threat A need to be taken away from threat B. If you devote 100 hours of your team's time to hardening server configurations, that's 100 hours you didn't spend on some other task.

How do we make those choices? Is it intuition? What we hear going on in the industry? Best practices? As a society as a whole, we are very bad at making these choices. Bruce Schneier's recent post on the disproportionate amount of attention paid to peanut allergies illustrates this point. Is the same misallocation of resources at play in IT management?

There is a severe lack of data on information security budgeting and resource allocation. Entire industries have mushroomed around particular threats - anti-virus, hacking, firewalls, you name it. But which solution offers the most threat reduction per dollar and man hour spent? As an industry I think we are going to start to see this question addressed with increasing frequency (assuming the industry does not disappear first).

Two weeks ago the SC World Congress was held in the Javits Center in New York City. If you have never had the pleasure, the Javits Center is on the far west side of Midtown Manhattan. A very convenient location except for the nasty 10 minute walk from Penn Station.

There were some good speakers at this event, although attendance may have been negatively affected by the current economic crisis. I only managed to stick around for a couple of the talks, but the more intimate setting made for some great networking opportunities.

One interesting talk I attended was a moderated discussion with Adrian Seccombe and Paul Simmonds, two of the founders of the Jericho Forum. The Jericho Forum is an aggressive and eloquent proponent of de-perimeterization. If you are wondering what de-perimeterization means, you are undoubtedly not alone. I will post on this in the future, but basically this philosophy can be summarized by: firewalls bad, open networks good, security-by-network-perimeter is doomed to failure. Presumably their name refers to the fact that the walls of Jericho couldn't save the city (obscure Biblical reference in case you were wondering). But then shouldn't they be named after a city without walls that successfully withstood attack? And do such cities exist? Hmmm...

Back to the SC Congress - it was also interesting to hear from Louis Freeh, former Director of the FBI. He is a good speaker with some great anecdotes from what has obviously been a very interesting career (you don't get to be head of the FBI by sitting behind a desk). All in all a great choice for a keynote address, although I wish he had given us more of his thoughts on the recently released report from the Commission on Cyber Security for the 44th Presidency.

One added bonus from the event - they were setting up for the Boat Show at the Javits Center and I got to see some gorgeous yachts up close and personal.

How much should you spend on security in the development process? I have googled this question to death and keep getting the same answers:

Security should be built into the software development cycle

You need to devote sufficient resources to security

You need to make sure to consider security at the design phase

Great...and that means 5%? 10%? 20%? Or is it impossible to measure?

I started the OWASP Security Spending Benchmarks Project with Jeremiah Grossman (CTO at WhiteHat Security) to start getting some data on this topic. For industry A dealing with data B, we as information security professionals should be able to come up with a range of how much money should be spent on security.

Jeremiah has done a great job outlining the reasons people spend on security. But budgets are tight and there aren't any bailouts for software companies (yet). If companies can get away with spending less than necessary on software security they will. The only way to change that will be through industry benchmarks. Please get in touch if you want to get information on how to participate in the OWASP survey!

Dr. Boaz Gelbord is the CISO at an innovative New York-based company and the founder of Security Scoreboard. Views expressed in this blog are his own.

Boaz has been a leader in the information security field for ten years. Boaz began his information security career at KPN Royal Dutch Telecom, where he led numerous security projects and authored 12 patents relating to information security. His work on privacy enhancing technologies at KPN earned several international awards and led to his designation as one of "Europe's Tech Stars" by the Wall Street Journal. Boaz was appointed as an independent expert to the eEurope 2005 Advisory Group, a high level committee that advised the European Commission on Internet policy. He was also an appointed expert in several ETSI (European Telecommunications Standards Institute) Specialist Task Forces, including the Secure Algorithm Group of Experts that standardized the GSM and UMTS encryption algorithms. Boaz taught information security for several years as an Associate Professor at the University of Leiden in the Netherlands.

Boaz was one of the founders of the European Network and Information Security Agency (ENISA), the official EU body responsible for information security where he headed the Security Technologies Unit. Boaz has chaired steering committees of several leading information security conferences such as ISSE (Information Security Solutions Europe) and has served on numerous program committees.

Boaz was also the first Director of Information Security at the New School in New York City where he introduced and implemented a comprehensive information security program. He holds a BSc in mathematics from the University of Calgary, an MSc in mathematics from the University of Toronto, and a PhD in mathematics from the Technion in Israel.

Boaz holds the CISA and CISSP certifications and is a member of ISACA, ISSA, and the IISP. He is a frequently invited keynote speaker on information security and privacy issues. In the past he has presented keynote addresses at ITU, CEN, ETSI, and East West Institute conferences.

Originally from Calgary, Alberta, Boaz speaks English, Hebrew, Dutch, French and Spanish in decreasing order of fluency.