Dec 22, 2010

While many information assets are intangible (e.g. digital data and knowledge), some have a physical medium, existence or expression that is vulnerable to a variety of physical risks. Furthermore, many of the controls over intangible information, particularly the IT systems that store, communicate and process most of it, have physical security aspects.

The security awareness materials for January focus on the physical protection of physical information assets against various physical risks.

Proprietary knowledge in the heads of key workers means they really are “our greatest assets”, so health and safety measures are relevant to information security. We also provide a briefing on hardware hacking to catch the imagination of your IT people. This topic proved unusually hard to research - I can only presume that hardware hackers are a rather shy and retiring bunch.

Find out more about the new physical security awareness materials available in January’s 100Mb NoticeBored module and, if you are not already a customer, do get in touch. We’d love to help take your information security program to a higher level in 2011. It’s time to up your game.

Dec 7, 2010

In his book No Tech Hacking, author Johnny Long ably describes some simple non-technical attack methods mostly involving social engineering and physical site intrusion, but it is a shame he doesn’t present a more compelling call-to-action. Readers can and indeed should be more aware of, and ideally resistant to, the methods described. The book presents the basic information but doesn’t really motivate readers to respond, leaving us rather flat.

Dec 4, 2010

Serious business disruption stemming from an IT incident at National Australia Bank (NAB) on the night of November 24th led to pointed questions being posed in the press about the bank's governance and even its HR practices. This was clearly a costly incident for the bank, creating a flurry of adverse customer and media commentary (such as "FURIOUS consumers are demanding compensation after a NAB computer bungle delayed millions of wages, pensions, family payments and business transactions across Australia. Tens of thousands of anxious people could still be without cash for the weekend because of backlogs from the shambles.") and hence brand damage, in addition to the direct costs of investigating and resolving the incident itself and compensating customers.

Now that the dust is settling, let's review the business continuity aspects of the case, based on media reports, public statements by NAB and a little idle speculation.

The actual IT incident, originally termed "technical issues", was subsequently blamed on human error: it seems an IT professional loaded an incorrect or corrupted parameter file onto the mainframe, causing errors in the overnight batch run and delaying the release of transaction history files that other banks needed to reconcile their accounts with NAB.

The fact is that these things do happen. Despite all the strong resilience measures in place to prevent (detect and block) errors, to train staff and to build various checks and balances into the IT systems, they don't always work perfectly. Sometimes we simply run out of luck. All organizations obviously try to prevent incidents but wise ones prepare themselves for the worst with fallback plans.

It looks as if either the original error, the actions taken to resolve it, or something else entirely led to some direct debits and payments being withheld, others being duplicated, and ATM and EFTPOS services being held up, further complicating the situation. This may point to problems in the incident management, business resumption and/or the IT disaster recovery activities, or simply more bad luck. As outsiders, we may never know, as the details are presumably considered proprietary information and may be embarrassing if disclosed.

While some commentators are implying that the mainframe technologies behind most major banking operations being '30 to 40 years old' means they are outdated and should be replaced ("the complexity of the software required is greatly exacerbated by age" said Mark Toomey), this is not unlike the story of an office cleaner praising the longevity of his broom. "I've had this broom over twenty years now. It's had sixteen new handles and two dozen new heads, but it just keeps on going!". Today's mainframe systems are patently not the same as those running last year, let alone in the 70's. Both hardware and software are updated from time to time, such is the nature of technology.

NAB's public response to the incident and its handling of the news media have been praised. The CEO openly acknowledged the problems and promised that the bank would compensate customers who were out of pocket as a result. Behind the scenes, it is fair to assume that the incident was properly escalated to management, a response capability was in place, and the Public Relations aspects of the response (at least) were effective - for instance channelling information through the PR office and closely managing disclosures to minimize brand damage, and lining up branch staff to handle irate customers even over the weekend. This all points to decent contingency planning, since the exact nature of an incident that would require a media response cannot be predicted.

NAB PR spokesman George Wright said "This has obviously been a very significant inconvenience and in some cases distressing situation for a lot of people. We are not shying away from that at all. All I would say is that we have not had an incident of this nature before. We do not anticipate having one again." George is of course trying to imply that the bank will do whatever it needs to do to prevent 'incidents of this nature' but I suspect they do anticipate other incidents in the sense of having recovery and contingency plans to deal with them, if they should occur.

As a final comment, I find it interesting that NAB takes the opportunity of a customer update on the incident to remind customers of the threat of phishing: "Important: NAB will never ask you to disclose your password. Don't respond to any emails or telephone calls requesting personal information, even if they appear to have come from NAB. NAB will never ask you to disclose your details in this way." This is a sensible move since phishers are likely to exploit 'incidents of this nature' in order to scam their victims, for example emailing fake compensation offers. That's something that all organizations should probably weave into their own business continuity plans.

Regards,
Gary

Disclosure: I worked for a NAB subsidiary a few years ago but have no inside knowledge on this incident.

Business continuity in the title of December's security awareness module from NoticeBored refers to the central purpose of various forms of resilience, disaster recovery/business resumption and contingency planning: these are not purely academic approaches but serve to support the business in a very practical way, in times of crisis.

Making processes and systems resilient is the ideal approach to business continuity management: the organization can shrug off incidents that might otherwise interfere with or halt vital business activities, keeping operations running without a noticeable break. Disaster recovery and business resumption planning, however, start with the assumption that the business has unfortunately been disrupted as a result of a disaster, for instance if the resilience measures turn out to be inadequate in practice. Contingency planning takes that line of thinking a step further, preparing the organization to cope with situations that are completely unanticipated.

Nov 22, 2010

Here's a type of policy I've not seen before, appended to the specification sheet for an electronic component made by National:

LIFE SUPPORT POLICY

NATIONAL’S PRODUCTS ARE NOT AUTHORIZED FOR USE AS CRITICAL COMPONENTS IN LIFE SUPPORT DEVICES OR SYSTEMS WITHOUT THE EXPRESS WRITTEN APPROVAL OF THE PRESIDENT AND GENERAL COUNSEL OF NATIONAL SEMICONDUCTOR CORPORATION. As used herein:
1. Life support devices or systems are devices or systems which, (a) are intended for surgical implant into the body, or (b) support or sustain life, and whose failure to perform when properly used in accordance with instructions for use provided in the labeling, can be reasonably expected to result in a significant injury to the user.
2. A critical component is any component of a life support device or system whose failure to perform can be reasonably expected to cause the failure of the life support device or system, or to affect its safety or effectiveness.

I'm looking forward to reading my first "Critical business support policy" on the specification for a bit of IT ...

Nov 16, 2010

If you are keen to learn about security metrics and perhaps even design or at least refine your own information security measurement system, I recommend Krag Brotby's thought-provoking book Information Security Management Metrics: A Definitive Guide to Effective Security Monitoring and Measurement. Managing information security properly demands the use of suitable metrics at all levels from defining security strategy and governance, through prioritizing resources and investing in security, down to decision support for a million day-to-day operational security management decisions. Krag's book won't give you a checklist of things to measure, but it will lay the groundwork and set you up to define your own metrics shortlist.

If you are using the ISO27k standards and plan to adopt the metrics advice in ISO/IEC 27004, make the time to read Krag's book before you dive right in at the deep end.

Nov 11, 2010

A neighbour called me yesterday about a suspicious phone call she received from someone claiming that she had a problem with her PC. The caller, who apparently sounded Indian, asked her to switch on her PC so he could help her sort it out. Thankfully she had the awareness to notice something amiss. The caller mumbled something about who he was working for and wouldn't clarify. When she told him she needed to verify his identity, he terminated the call .... and presumably went on to try to scam some less-savvy sucker.

"Scamwatch continues to receive a steady stream of reports from consumers about out-of-the-blue phone calls from scammers wanting remote access to your computer to 'get rid of viruses' or to 'fix' your computer ... The calls, which appear to be originating overseas, ask consumers for remote access to their PC to 'see if their computer is infected'. The scammer claims to be from an IT support helpdesk, or some have even claimed to be from Microsoft. If you give remote access, the scammer may go on to plant malware on the computer; or go on to offer to fix the computer for a fee – paid by credit card over the phone."

The scammers are presumably using Skype or some other free VoIP service to conceal their origins and cut their costs. Phishing conducted by voice call in this way is known as "vishing".

The scammer knew my neighbour's name and phone number - not exactly hard to find as she is listed in the phone book, but that little piece of information was nearly enough to catch her out ("How come he knew my name?" she asked me!).

Make sure your friends and family know how prevalent these social engineering attacks are. Forewarned is definitely forearmed.

Nov 6, 2010

News that Google employees are to be treated to a security awareness program from next month is a positive move, but makes me wonder exactly what the sleepy giant has been doing up until now. Google has been rightly criticized for several privacy breaches already, with the WiFi data captured by Google's snooper vans a recent example that is still rumbling on. The news piece on TechShout implies that Google's awareness program will cover privacy, but makes no mention of the myriad other information security issues. Personally, given Google's lackluster response to the privacy complaints, I get the distinct feeling that Google's management just don't get security. I just hope Google's awareness program has messages for management as well as the workforce.

Oct 28, 2010

The thumbnail above shows the first of a series of 6 posters in November's NoticeBored security awareness module on social engineering. It's a particularly important topic for us because security awareness is by far the most important control against social engineering. Alert employees who appreciate the threat and know what to do if they feel they are being targeted stand a much better chance of resisting attacks than those who remain blissfully unaware throughout.

As always, the NoticeBored newsletter sets the scene for the topic and outlines the risks associated with exploiting people rather than technologies.

The social engineering capture-the-flag competition at this year's DefCon hacker conference was a real eye-opener for many: we couldn't help but notice a number of prominent organizations hastily sending out warning notices to their employees ahead of the CTF competition, even though the rules of the game were strictly limited to keep the event ethical and educational. What's more, not all the competitors were experienced social engineers - many were beginners - yet ALL of the targets were successfully compromised. If management feels so worried about a mere game, how come they seem to be ignoring the real-world social engineering attacks from accomplished and determined social engineers who don't care about rules? How bizarre!

If social engineering is of concern to your organization, please get in touch, whether you would like to purchase just this awareness module or perhaps take out a full subscription to the monthly NoticeBored service. We'd love to help you deliver a best-in-class security awareness program.

I wrote the following piece in response to a request for input by David Lacey on his blog. David and other luminaries in ISSA-UK had a meeting to discuss what they feel are the biggest security challenges we'll face in the decade ahead. An ISSA White Paper is planned at the end of this year, so it would be good for the wider infosec community to collaborate on this.

I composed the following as a reply to David's blog but for some reason the ComputerWeekly site refuses to accept it. Perhaps it's too long or goes against their editorial principles, who knows? Anyway, here's what I wrote ...

FWIW my main concern for the decade ahead is the increasing power and resourcing of the black hat community - not so much the lone home hackers and hacker clubs (who are formidable but rather fragmented and from what I've seen relatively benign, well-meaning even in some cases) but the true criminal community that increasingly uses hacking and social engineering to harvest the real gold out there: the major corporates with lax security, negligible security monitoring and mostly not a clue that they might even be in the gunsights. There's a positive feedback loop at play: as the black hats successfully exploit small targets and get away with it, so they build up their resources (knowledge and cash) to invest in attacking bigger targets with more advanced weaponry. They can afford the R&D. We can't.

At the same time, the white hat community has basically stalled. So long as you and others continue to press the line that legal/regulatory compliance is the most effective way to make corporates become more secure, we're on a hiding to nothing as far as I'm concerned. Compliance achieves the least amount required, and that under sufferance. It's hardly destined to show senior management The Light, namely that strong security makes good business sense, enables them to do more stuff safely, protects their most important and valuable corporate assets, and gives them a substantial commercial advantage over their insecure peers who are 'accidents waiting to happen'. Security-for-compliance is just a nasty, inconvenient and distracting annoyance, a cost of doing business. It's a bit like 'tidying the place up because the auditors are coming' as if a tidy office will distract them from seeing the fundamental flaws all around them.

The black hats love compliance, so long as that means they can safely assume their targets will have made the least possible effort to meet the bare minimum standards to the letter, while largely ignoring all the supporting things (such as the human factors - security awareness, competence, training, qualifications, procedures and all the other good stuff in your book!) that are actually required to become secure. If those are not mandated, they evidently don't matter so they aren't being done. That's the dark underbelly of compliance.

As an ardent fan of ISO27k, I'm dismayed, not to say horrified at the general lack of uptake of the ISMS approach. With just a few thousand organizations certified to ISO/IEC 27001 so far, and a few tens or hundreds of thousands more using the standards without being formally certified, this is barely scratching the surface of the millions of organizations Out There and all those accidents-in-waiting. Most managements will spend as little as they possibly can for PCI-DSS or privacy/data protection compliance, but won't take the next bold step of consolidating all those point solutions into a coherent information security management system, and working to fill the gaps. One of these days, they will run out of fingers to plug the holes in the dam.

Oct 21, 2010

"There are user IDs and passwords to remember everywhere you turn. There are codes and passwords for a variety of Web sites, bank accounts, frequent traveler programs and voicemail systems. It's tough to keep track of them all! Our Password Directory can help. It's alphabetically organized to log the user name, password or a password hint for any number of applications. It's a thoughtful gift for the busy, well-connected friends on your holiday list."

Unbelievable! Well, actually it's entirely credible. Worryingly, there probably is a market for products like this, at least among the clueless buying for the security unaware.

I'm puzzled as to the evident lack of general interest in or uptake of secure 'password vault' programs which neatly solve the most awkward and annoying aspects of the password issue. Not only do password vaults store passwords securely (the best using strong encryption such as AES, and insisting on a good user password to generate the encryption key needed to unlock the vault) and recall them automatically when the user returns to a password-protected web page, they also offer to generate ridiculously long, complex passwords for those enlightened websites that don't hamstring the user with stupid rules such as 8 characters maximum. Speaking personally, there's NO WAY I could remember my current crop of ~150 strong passwords without a vault. To be honest, I'd struggle to recall even a handful, forcing me to either use short/weak passwords, or to re-use a few passwords on multiple sites, both of which significantly weaken the value of passwords as authentication mechanisms.
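The two mechanisms described above - stretching the user's master password into an encryption key, and generating long random site passwords - can be sketched in a few lines of Python. This is a minimal illustration, not a real vault: it assumes PBKDF2-SHA256 for key derivation (good vaults use this or something stronger, such as scrypt), omits the actual AES encryption of the stored entries, and the function names are mine.

```python
import hashlib
import os
import secrets
import string

def derive_vault_key(master_password: str, salt: bytes, iterations: int = 200_000) -> bytes:
    """Stretch a master password into a 256-bit key suitable for encrypting the vault.

    The random per-vault salt prevents attackers precomputing password-to-key
    tables; the high iteration count slows down offline guessing attacks.
    """
    return hashlib.pbkdf2_hmac("sha256", master_password.encode("utf-8"), salt, iterations)

def generate_site_password(length: int = 24) -> str:
    """Generate a long, random password for a site, using a CSPRNG."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One-off vault setup: pick a random salt, derive the key from the master password.
salt = os.urandom(16)
key = derive_vault_key("correct horse battery staple", salt)

# A password the vault might generate for a new website account.
site_password = generate_site_password()
```

The same master password and salt always reproduce the same key (so the vault can be unlocked later), while a different password yields a different key and the encrypted vault stays unreadable - which is exactly why the strength of the master password matters so much.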

It's not as if there's any shortage of password vault programs Out There ... but before you choose one, just remember that you are entrusting it with the keys to your virtual identity, so be extremely careful to check out the supplier's trustworthiness and product's security.

And if that's too hard for you, perhaps that Password Directory is just up your street.

Oct 13, 2010

Don’t drive a wedge between compliance and security. Whatever your stance on the “compliance vs. security” debate, hopefully we can all agree that intentionally keeping them apart doesn’t make sense from either a compliance or a security perspective. Why force a false dichotomy between two concepts that should, in theory, be in alignment? After all, they both have the goal of protecting data. Sure, maybe you’ll need to do some things for compliance that you wouldn’t do for security (based on risk assessment or tolerance) or vice versa, but it’s hardly an either-or situation across the board. The overall direction of managing compliance should be in line with the security strategy. Is your compliance management team the same as your security management team? If not, is there a concerted effort to collaborate when and where possible or do both sides govern their own private islands with no trade routes between them? If the latter situation is truer of your organization, perhaps you should ask why and whether it’s best for it to remain that way.

I guess one reason why an organization might want to keep [security] compliance and [information] security totally separate is essentially the same argument that separates Audit from the operational business: independence helps the function see things for what they truly are. In this structure, the compliance function is therefore operating like audit, assessing compliance and, presumably, persuading information security, IT or other functions to do whatever needs to be done to achieve compliance, rather than doing those things itself. It's not much of a stretch, then, to make this kind of compliance function a part of audit.

Another legitimate reason for separating the two is that compliance issues are far broader in scope than merely [information] security. Organizations are for instance obliged to comply with health and safety, tax and human resources legislation, plus all manner of commercial/contractual obligations and industry regulations that fall well outside the sphere of information security, as well as those that span the boundary or fall entirely within it.

That said, I agree with the thrust of Verizon's recommendation that close collaboration between compliance and security functions, if not full integration, is important. I would add that close collaboration is equally important with numerous other functions, such as IT, physical/site security, risk management, HR and in fact "the business", meaning the organization's profit centers. It's hard to imagine how they could work productively otherwise. In fact, this whole issue might be merely a blinkered view of the formal stovepiped organization chart, whereas in reality the informal network of colleagues, peers, influencers and decision makers implies working relationships between a wide variety of people having some level of professional interest in information security, and myriad other things. Contrary to the org chart's static, flat appearance, organizations are in reality fluid, multidimensional systems.

Regards, Gary

PS Being picky, I might challenge Verizon's assertion that protecting data is a goal of compliance. It seems to me that compliance aims to ensure that the organization fulfills its obligations, which is essentially a matter of risk management (do the projected benefits of compliance outweigh the projected costs of noncompliance?). To what extent that includes data protection depends on the obligations, not on the compliance function's mission statement.

Wired.com is reporting that the Lower Merion school district, accused of invading its students' privacy by spying on them through webcameras installed in school-issued MacBook laptops, is to pay $610,000 to settle lawsuits brought by two students.

The school district claims not to have been deliberately spying on students in a non-specific way (a 'dragnet' operation). However, the fact that a secret photo was used by the school as evidence to discipline a student indicates that, at the very least, it was deliberately and consciously using the software to snoop on the student concerned.

Snooping facilities of this nature are normally intended to obtain evidence and so help recover stolen computers. This raises questions about whether such evidence might open the door to privacy complaints by those accused of stealing or using stolen computers.

Furthermore, this case potentially has implications for other situations in which an organization, or indeed an individual, provides someone with IT gizmos and facilities such as email, phones, PCs, PDAs etc. that enable them to snoop on users. Targeted, covert observation may be permitted for certain law enforcement and other evidentiary purposes (e.g. to prove theft or fraud or some other serious crime) but dragnetting is probably illegal (IANAL! This is not legal advice! Ask a real lawyer about that!).

Sep 28, 2010

Compliance with information security and privacy-related laws, regulations, standards and policies may be a rather dry subject, but it's an increasingly important one and as such is definitely worth covering in security awareness programs - unless, that is, you truly believe that your technical security controls alone are sufficient (in which case, you are either a unique technical genius or sadly deluded!).

We have just delivered a NoticeBored module all about security compliance, some 67Mb of stimulating awareness content that, to be perfectly honest, barely scratches the surface. We freely admit we are not legal experts. We don't know all the ins and outs of our customers' legal obligations, the rules imposed by their industry regulators, or their corporate policies towards security. But we do know about security awareness, about motivation and creativity. And in many ways our international perspective lets us see beyond the narrow confines of any individual organization.

The new security compliance module is designed to inform and motivate staff, managers and IT professionals, three distinct audiences with differing perspectives and needs:

Managers and directors have both strategic and tactical leadership roles and governance obligations in respect of information management, IT, information security, privacy, and of course compliance.

IT pros are faced with a confusing mess of technical and non-technical requirements imposed by barely comprehensible laws such as SOX, standards such as ISO27k, corporate security standards written by the egg-heads in information security/risk, and security policies written, in the main, by non-technical managers.

Staff just want to go about their jobs. Security compliance is something that crops up occasionally but barely registers with them, unless sufficient effort is made to raise their awareness of, and ideally motivate them to fulfill, their security obligations.

Compliance is such an important security awareness topic that we are offering the complete NoticeBored module at a flat price of US$895, as well as providing it to our annual subscribers of course. Please contact us if security compliance is important to you, and if you'd like our help to make it equally important to your work colleagues.

Sep 24, 2010

Bob Carr, CEO of Heartland Payment Systems, spoke openly about their massive 2007/2008 security breach at the SC World Conference in 2009. Whether you work in the financial industry or in information security, it's well worth setting aside 45 mins or so to watch him present and think carefully about the underlying risk, security and commercial issues.

Essentially, Bob's point is that the payment card industry is clinging to a fundamentally flawed security model. Card numbers taken from magstripes, or presumably from chip-n-PIN cards, are passed through the point of sale systems, the merchant back-office systems, and card processors such as Heartland, all the way to the card issuers. For a good part of this journey, the card numbers are unencrypted and hence are vulnerable to being captured by the bad guys. PCI DSS attempts to lock down all these intermediate points, but so long as the underlying data are in the clear, there is always going to be a risk of unauthorized or inappropriate disclosure.

He contrasts this with ATM PIN codes, which are encrypted as they are entered, decrypted and re-encrypted under the issuer's key inside a physically secure security module, and so pass through the downstream systems and networks encrypted. Leaving aside various security concerns such as card skimmers, compromised PIN pads and compromised security modules, this approach does at least take the burden off the ATM companies and retail banks: they don't have clear access to the PINs. The encrypted PIN data stream is just another chunk of data to be moved around. No worries.

Towards the end, Bob also makes some deeply worrying comments about the PCI DSS QSA (Qualified Security Assessor) audit process. The QSAs are basically checking PCI DSS compliance, a checklist of 280 items according to Bob, for a fixed price. If they suspect security issues beyond the narrow scope of PCI DSS, it's not in their interests to explore further as a typical internal or external IT audit might do: not only do the QSAs want to retain their customers (which means, of course, giving them a clean bill of health, i.e. "You are fully compliant"), they also need to move on to the next fixed-price job ASAP.

There are plenty more security issues swimming around just under the surface layer of this presentation, but I won't spoil it for you. Put your feet up, click the link and think on.

Sep 6, 2010

Remarks towards the end of a blog piece by Andy Ellis reminded me about a key difference between awareness and training. He and I may be concerned with information security awareness specifically but the principle is not limited to a single topic. Safety awareness is not the same as safety training. Being commercially aware is different to undergoing commercial training courses. You get the point.

Andy said:

"But much more importantly, we weave security awareness into a lot of activities. Listen to our quarterly investor calls, and you'll hear our executives mention the importance of security. Employees go to our all-hands meetings, and hear those same executives talk about security. The four adjectives we've often used to describe the company are "fast, reliable, scalable, and secure". Social engineering attempts get broadcast to a mailing list (very entertaining reading for everyone answering a published telephone number). And that doesn't count all of the organizations that interact with security as part of their routine. And that's really what security awareness is about: are your employees thinking about security when it's actually relevant? If they are, you've succeeded. If they aren't, no amount of self-enclosed "awareness training" is going to fix it. Except, of course, to let you check the box for your auditors."

Though not using the actual term, he's talking about achieving a widespread culture of security throughout the organization, and in fact in a still wider sphere taking in its customers, business contacts and even dare I say its auditors. You can't put all those people through security training as such, but you can create a level of awareness. As he puts it, 'weaving security in to routine activities' is one way to make it an inherent part of the organization's fabric. Here's a few more suggestions:

Informing and motivating managers, and indeed other influential/powerful people (like auditors) to pay attention to information security matters, and pass on their concern to staff ('walking the talk' and 'leading by example' actually work!);

Encouraging IT professionals to support the cause of information security when interacting with IT systems and, yes, even with real living, breathing people;

Using marketing, advertising and promotional techniques to create a security brand, ideally forming an integral part of the organization's overall branding, positioning and corporate image;

Making the campaign an ongoing, continuous, year-round program of awareness activities, helping to embed and reinforce the cultural change as a permanent fixture, not a one-off event just to satisfy compliance obligations.

Summing that all up is the concept of osmosis, essentially steeping the entire organization gently in a warm bath of information security so that everyone gradually absorbs the messages. Slowly, behaviors change to follow changing attitudes, and before you know it, you have a security culture.

Sep 4, 2010

This morning’s magnitude 7.1 earthquake in central Christchurch, South Island, New Zealand, is a reminder that contingency and continuity plans are not just tedious red tape. With the IsecT office being hundreds of miles away in North Island NZ, we didn’t feel the earth move as such but we certainly felt the shock on seeing the news. It leaves us wondering about our own readiness to survive a similar disaster, not least because of our proximity to Napier, another NZ city devastated by a similar quake in the 1930s. Today it's a fabulous Art Deco city, having been almost entirely rebuilt. In the 1930s, it was a scene of death and destruction.

From a security perspective, the Christchurch quake is an awareness opportunity. Carpe diem (seize the day)! It's all over the news. Employees can see for themselves what a real incident looks like and, with a bit of judicious prompting, imagine themselves in just such a disastrous situation, struggling first to survive and then to recover. This is not merely 'ambulance chasing' but a genuine chance to help colleagues consider and probably improve the contingency and disaster recovery plans, including their own personal plans e.g. who would you contact first, and how? Given that you could be anywhere when it happened, where would you go? What things would most help your survival and recovery, and do you actually have them to hand right now?

We encourage NoticeBored customers to dust off the contingency planning awareness module released in February 2008. There are briefings, presentations and posters in there you can use immediately. We will be updating and re-issuing the module shortly.

Those of you elsewhere on the Pacific rim are probably already thinking about your own earthquake, tsunami and volcano survival plans, but in fact the principle is universal. We encourage you all to get your colleagues talking about the incident and imagining something equally dramatic happening to them - a major fire, flood, bomb, storm, IT meltdown or some other serious crisis. What would be your first priority - communicating with friends and family, probably, but how will you actually do that if the landlines and cellphones are out? If infrastructure services are badly disrupted, what would you actually do? How would you cope?

Remember, now is the time to prepare for disaster: when the walls are falling, the tide is rising and the flames flickering, it’s too late to pop down the road for bandages, food and water ...

We are re-checking the office "earthquake kit" and disaster plans today, and thinking more seriously about additional backup data communications facilities such as 3G USB modems. We already have emergency two-way radio capabilities, backup mains power, emergency water and food supplies, even spare IT facilities, but perhaps a tent would be worthwhile too if the buildings are damaged or unsafe. Living rough under a tarpaulin for more than just a night or two would be hard going, especially in a miserable wet cold winter like this one. I don't want to look back and think "If only I had arranged a decent shelter"!

What if anything are you doing about contingency and disaster planning in the wake of the Christchurch quake? We'd love to hear back from you.

Aug 31, 2010

Phone companies know where their customers' cellphones are, often within a radius of less than 100 feet. That tracking technology has rescued lost drivers, helped authorities find kidnap victims and let parents keep tabs on their kids. But the technology isn't always used the way the phone company intends. One morning last summer, Glenn Helwig threw his then-wife to the floor of their bedroom in Corpus Christi, Texas, she alleged in police reports. She packed her 1995 Hyundai and drove to a friend's home, she recalled recently. She didn't expect him to find her. The day after she arrived, she says, her husband "all of a sudden showed up." According to police reports, he barged in and knocked her to the floor, then took off with her car. The police say in a report that Mr. Helwig found his wife using a service offered by his cellular carrier, which enabled him to follow her movements through the global-positioning-system chip contained in her cellphone ...

Wall Street Journal, 3rd August 2010

[Thanks to Monty Solomon for reporting this via the RISKS mailing list.]

Aug 30, 2010

According to Domain-B, Deloitte's information security survey of 60+ Indian organizations raised an interesting point:

"Optimistically, information security awareness and training is among the top three security initiatives indicated by the resspondents [sic]. However, most security awareness programmes start with an e-learning module, which raises awareness and knowledge, but does not necessarily alter behaviour."

It amuses me that so many organizations think they can just splash out some money on an e-learning package about information security, and that's it. Compliance box ticked. Management off the hook. They've 'done something'. Let's all live happily ever after.

I'm not saying that e-learning packages are worthless, quite the opposite in fact. They can be a valuable component of, or supplement to, a comprehensive security awareness program. The point is that, taken in isolation, watching a somewhat stilted video and maybe answering ten lame security questions is only good for compliance with equally lame laws, regulations and contractual commitments that don't specify an effective awareness program. An e-learning module alone will not magically make your employees act more securely overnight. Without the support of other suitable security awareness activities and materials, it will barely create a ripple in their lives, never mind a big splash.

The e-learning packages I've seen on the market are not cheap, and the costs escalate further if you want customized content specific to your organization rather than the purely generic, bland and often outdated stuff usually provided. If this purchase sucks the guts out of your security awareness budget, you're in trouble.

Mind you, if your idea of security awareness was a stern once-a-year lecture to staff by A Big Nob, then e-learning would definitely be a step up. So would creating a security incident just to make people aware that they are vulnerable, or forcing everyone to sign a piece of paper that says they know of the existence of the security policies. If you are purely doing this for compliance reasons, these are all probably good enough. They won't, however, actually make your information assets any more secure in a real sense.

The thing that is desperately missing from e-learning packages is the human interaction that comes from putting a decent presenter/teacher/trainer/awareness expert up in front of a class of adults - or a team meeting - or a board meeting - or whatever. They can not only spout the stuff on the slides but react to the audience, take questions and comments, and most of all turn those little sparks of interest and enjoyment into the flames of passion. Motivation is a very personal thing. Think about this the next time you see an evangelist on any topic doing his/her thing on stage. Their energy and enthusiasm are infectious, and the central message is memorable. If they're good, people will be thinking and talking about the experience for days if not weeks afterwards. Would you be quite so excited about having completed an e-learning module?

I'm sure that, as a profession, we could learn much from the evangelists, salespeople, motivational speakers and even passionate politicians.

Regards,
Gary

PS I would have preferred to cite the Deloitte report directly if only I could locate it on the web ... sorry.

Aug 28, 2010

Aren’t wireless networks wonderful? So convenient to use, flexible and cheap to deploy, they’re great! No longer are we tied to our desks by the network, keyboard and mouse cables. Wireless technologies enable laptops and other mobile computers to be connected to the corporate networks and the Internet, while distant locations can be linked-up using microwave radio over point-to-point or satellite links. Travelers use public WiFi hotspots or 3G USB sticks to keep up with email and social networks while on the move, and use GPS geolocation/mapping systems to find their way. Organizations use RFID tags to monitor valuable items, track their mobile inventories and manage logistics. Most of us these days rely heavily on our mobile phones and PDAs which are, in fact, sophisticated digital radios using the 3G and other wireless networks. Many of us have Bluetooth headsets and other gizmos. Wireless is literally all around us.

While wireless technologies have tremendous business and personal benefits such as convenience and ease of use, there are some serious information security risks that need to be adequately addressed to avoid eroding or completely negating the benefits. Simply buying a WiFi access point from a local retailer, plugging it into the network and carrying on as before is probably not A Good Idea as far as network security is concerned, yet this is pretty much how many home WiFi networks are set up in practice. Scary!

Hackers enjoy the benefits of wireless technologies too, whether that’s connecting to the Internet via someone’s insecure WiFi setup or via a 3G modem. WEP and WPA encryption schemes and MAC address filtering are no real impediment to WiFi hackers intent on stealing credit card numbers from retail outlets, while insecure Bluetooth headsets are evidently an open invitation to snoop on the conversations of random passers-by. Furthermore, radio interference whether accidental or deliberate can disrupt wireless circuits.

This month's NoticeBored security awareness materials explore the information security gotchas undermining a variety of widely-used wireless technologies, discussing the security countermeasures necessary to bring the risks under control without destroying the undoubted business benefits that wireless brings. Click here to find out more.

Aug 16, 2010

Rebecca Herold has written an excellent list of typical physical security issues in the average office, or indeed other information-rich workplaces. She suggests conducting physical security reviews out-of-hours. I must say that I have done this kind of review hundreds of times myself, as part of "installation audits" using ISO/IEC 27002 as a benchmark for the kinds of controls expected. Doing them in the daytime or out-of-hours makes little difference - if anything, during the daytime the number of issues is magnified by the things employees typically do while at work, such as:

Leaving work-in-progress all over their desks and screens, not just while they are actively working on it but while they go to coffee or lunch;

Leaving desks, filing cabinets, and even safes open;

Chatting merrily away to each other or on the phone about sensitive personal or commercial matters, with no regard to who else might be listening;

This kind of stuff makes good photographic evidence for the audit report and presentation to management, along with photos of open doors, leaky patches, overloaded wiring, poor signage, excessive flammable materials, blah blah blah.

Exposing such large amounts of valuable commercially and personally confidential information to risk represents a substantial vulnerability to industrial espionage, sabotage, information theft, privacy breaches, health-and-safety incidents and more. Individually, these are mostly rather trivial issues. Collectively, however, the risk accumulates if these matters are not brought to management's attention and proactively addressed on an ongoing basis. The clear-desk/clear-screen policy, for example, can make a big difference, but only if managers take the trouble to drive up compliance, including setting a good example themselves.

An article in Psychology Today, of all places, recounts several more old industrial espionage stories, making the point that this cloak-n-dagger stuff has been going on for thousands of years. Major incidents have changed the course of history.

All the Tea in China recounts a nineteenth-century industrial espionage story concerning the British plant collector Robert Fortune. Fortune collected (stole?) tea plants from China to launch the British tea plantations in India, so ending the Chinese stranglehold on the world's supply of tea.

Get serious about industrial espionage. Clarke says many companies aren't aware of how common trade-secret theft has become, partly because the federal government doesn't keep track of the financial consequences. He says the U.S. needs to be more like the U.K. More than a year ago, the security agency MI5 told the biggest 300 companies in Britain to assume their computers had been hacked by the Chinese and then met with executives to discuss the breaches it knew about and how to prevent future ones.

As with many other US authors, the implication seems to be that US readers should be concerned about foreign competitors, while seemingly ignoring the threat from those nearer home. I find this rather xenophobic but typically American position strange. The reality is that competitive intelligence and industrial espionage techniques are used by all the industrial nations, and most likely a high proportion of the third world too. US companies should be concerned about spies and infiltrators from all sources including insiders, other US and foreign companies, home and foreign governments, the criminal underworld, 'analysts', hackers and 'free agents' who will happily exploit valuable information on anybody/anything to make a fast buck. It's not all about the Chinese.

Information obtained and disclosed through networks of moles, friends and acquaintances

Use of helicopters to spy on a rival's road tests

Intelligence functions within the organization

Social engineering

Hidden microphones & cameras

'Clandestine visits to sensitive places'

Reverse engineering i.e. dismantling a new vehicle to find out how it is made

[That's a far from exhaustive list. I wrote about others in our latest newsletter and awareness materials.]

I find it intriguing that stories of this nature have been circulating for years. There's one on the go now about Chery and GM. On the rather weak basis that there's no smoke without fire, there does seem to be a particular fascination with industrial espionage in the auto industry. Why is that, I wonder? Perhaps for some reason the people involved in the industry are more 'ethically challenged' than others (I find that rather hard to believe!). Maybe the sheer industrial scale of automotive manufacturing makes it difficult to secure the plant and the people against this cloak-and-dagger stuff (true, but the auto industry is hardly unique in this regard). Or is it just that the stories catch the fertile imagination of the motoring press, making a positive feedback loop that implies a general acceptance and widespread use of such underhand techniques?

High-stakes commercial competition between the main manufacturers is probably a contributing factor: it costs a large fortune to design and develop a new car design, and each manufacturer relies on a rather small range of models for their ongoing commercial success. But again, the auto industry is hardly unique: many other industries and markets are just as intensely competitive, if not more so.

I wonder whether national interests play a part? Such massive industrial enterprises are undoubtedly strategically and economically important to the countries that have them, so it is conceivable that nation states might tacitly accept if not condone and support the use of industrial espionage. The same would surely apply to the aerospace and defense industries, and others such as pharmaceuticals, finance, hi-tech, utilities ('critical national infrastructure') and more ... come to think of it, I've worked in all of those industries and can't recall any such incidents in the course of my career. Either I have led a very sheltered professional life, or it has been going on right under my nose all these years ... or perhaps it is just not as common in practice as the news media would have us believe.

Jul 31, 2010

We often read about security incidents involving personal information in the newspapers or online. Multi-million dollar credit card and social security number exposures grab the headlines and consume many column inches. There are even websites dedicated to totting-up the sordid numbers. There are laws and regulations to protect personal data, and most of us accept that our privacy is inherently worth protecting, no question.

When it comes to protecting confidential proprietary information belonging to corporations, however, the situation is less clear. Someone taking, say, their former employer’s customer list to a new job may be ‘frowned upon’ but evidently this practice is often tolerated and is probably fairly common in practice. Indeed professional résumés boast of prior work experiences and major projects, with the implication that proprietary knowledge and expertise gained on prior assignments is effectively for sale to the highest bidder.

News stories involving industrial espionage are few and far between. Why is that? It’s conceivable that there are not many incidents, but it seems far more likely that most simply don’t see the light of day – in other words, they are kept under wraps or quietly hushed-up, or perhaps they are just not identified as such. As with personal data breaches, organizations are understandably reluctant to admit their security failures and discuss the vulnerabilities that were exploited, knowing that they reflect badly upon them and detract from their brands. Possibly some fear that revealing incidents risks disclosing yet more of the proprietary information in question, or encouraging further attacks. Without the legal pressures that force disclosure of many privacy breaches, organizations are within their rights to say nothing, and evidently this is the most favored option in practice.

Our latest NoticeBored security awareness module explains the value of the information assets at risk and the myriad ways in which they may be threatened, and calmly describes the corresponding security controls. We use diagrams, mind maps, photos, news cuttings and motivational writing to encourage people (specifically staff, managers and IT professionals) to take this seriously and change the way they behave. Please contact us for more information about the NoticeBored awareness subscription service. And hurry up before your competitors steal your trade secrets thanks to unaware employees.

Jul 26, 2010

David Lacey’s book concerns the influence of people in protecting information assets and is excellent value.

It covers a surprisingly wide range of topics relating to the human aspects of information security, mostly from management and operational perspectives. The book has depth too, while remaining generally pragmatic in style.

I highly recommend the book for all information security professionals, particularly CISOs and Information Security Managers who are not entirely comfortable with the social elements of information security, and for information security MSc students who want to boost their understanding in this area. The book is particularly valuable also for information security awareness and training professionals who necessarily deal with human factors on a daily basis, and need to understand how best to work with and influence their organizational cultures.

Jul 21, 2010

An email from Garrison Continuity pointed me to a neat 2-page Adobe PDF file with tips to ensure that business continuity arrangements won't falter as many employees will soon be on holiday.

Truth is, the holiday period thing is just a timely prompt to ensure the arrangements are sound: the plans should be checked and exercised periodically throughout the year. It's one of the regular activities for the Business Continuity Manager, providing additional assurance that the plans will function properly whenever a major incident strikes.

Jul 16, 2010

A new 'how to' piece on eHow.com titled Information Security Awareness & Training is curiously deficient. I'm puzzled that someone who presumably feels they have expertise in the subject would write such a piece that refers almost exclusively to technical IT security controls. There is no significant mention of human factors, nor any pragmatic help on how to plan, organize, develop, deliver, measure and maintain an infosec awareness & training program. It's so bad, I hardly know where to start criticizing it.

Blog comments are welcome but, unlike eHow.com, there's no need to register your details or commit to writing more content.

Jul 12, 2010

Regardless of whether your security awareness program is barely off the ground or has been running for a while, we all come up against barriers from time to time. It can be very dispiriting for those of us tasked with “doing awareness”, leading to a drop in our morale and energy. But fear not, brave awareness person! With a bit of creative or lateral thinking, there are all sorts of things you can do to bring your program back on track. Here are six ways to tackle those barriers.

1. Hit the barrier head-on
This is exactly what we normally do. We ‘try harder’ and ‘have another go’. Sometimes it works but occasionally, when we’ve hit our heads against the barrier and bruised our ego once too often, we realize it is no longer working and something has to change. This is the trigger to take stock of the situation and plan something different – whether subtly or radically different is up to you.

2. Overwhelm the barrier
This involves more than simply ‘more of the same’. Be prepared to experiment, trying different approaches in areas where things have not gone to plan previously. Think back on what has gone well and what hasn’t (and review your feedback forms if you have been using them), and learn from your experiences e.g. if your awareness seminars often seem to fall flat with poor attendance, try organizing “Q&A sessions” or “brown bag lunches” or whatever instead. Consider inviting outside speakers or charismatic insiders to speak on security topics.

3. Call in the cavalry
Did your CEO or Board of Directors originally support the proposal to invest in an awareness program? Have they since quietly disappeared into the background? Now’s the time to call on their proactive support! Explain that you believe the program is flagging because they are not playing an active part in it, and suggest some practical ways in which they can help. Appeals of this nature are best put face-to-face. In conjunction with your line manager, see if you can schedule a short meeting with the CEO to explain what is going on and seek her assistance. Prepare a shortlist of specific suggestions to answer the inevitable question, “What do you want me to do about it?”

4. Undermine the barrier
Be realistic. Are you overstretching yourself? Maybe it’s too much work to keep up with the variety of topics you need to cover. Perhaps you are spreading your effort too thinly. Take another look at your awareness plan: are there topics you can combine or set aside for a while as you gather your strength? Do you need help from other experts in internal communications, training or security? Even if you cannot secure permanent resources, a heart-felt appeal to your colleagues (including your ‘awareness ambassadors’) may only secure you an hour or two of their time but that may be just enough to set you back on the road to success. Students, contractors and consultants all have their place so don’t get fixated on permanent headcount. Are there pieces of work that eat your time but could be packaged up for someone else to do (like, for example, NoticeBored)?

5. Go around the barrier
Sometimes the problem stems from an individual person or function that always seems to be in the way. Can you identify the blockage? Have you tried to discover why they are blocking you? Have you discussed things openly with them? Have you tried reasoning and bartering (“If you’ll agree to promote the security awareness program, I’ll give you an hour a month to help with your intranet website”)?

6. Take another route
Think parachute drops into enemy territory. Try picking up on other awareness and communications activities in the organization and learning from them. Is there maybe a safety or legal compliance program in place? Are there opportunities to combine efforts on specific topics of common interest? How about planning a joint seminar, or back-to-back seminars? Have they got expertise and ideas you could capitalize on, and vice versa?

Jul 5, 2010

In mid-2009, an employee at the California firm clicked on a link in an e-mail message and ended up at a malicious website. The site, run by online thieves, used a vulnerability in Internet Explorer to load a Trojan horse on the employee's system. With control of the machine, which was used for much of the firm's accounting, the thieves gathered data on the firm and its finances. A few days later, the thieves used 27 transactions to transfer $447,000 from Ferma's accounts, distributing the money to accounts worldwide.

"They were able to ascertain how much they could draw, so they drew the limit," said Ferma president Roy Ferrari in an interview at the time.

It was that opening line that stood out for me. Was this incident truly due to the lack of a "proper security policy", in fact? If so, what would that "proper security policy" have said?

I would dispute the article's claim that:

For Ferma, a security policy that forbid surfing on computers used for accounting or resulted in stronger security for such computers would likely have stopped the attack cold.

No policy would have stopped the attack unless (a) employees fully complied with it, and (b) the controls it mandated were sufficiently strong to eliminate all the risks. 'Not surfing on computers used for accounting' would reduce but not eliminate the risk, and it would only provide that limited protection if in fact accounting users never surfed the Interweb on their normal PCs. It would not have prevented incidents that involved other modes of attack, such as social engineering, network worms or Trojan-infected USB sticks. The incident might have been identified if not blocked by antivirus software, firewalls and network monitoring, and additional business controls over the authorization and release of large value transfers. Anti-fraud and money laundering controls at the bank/s could have made the criminals' job harder too.

The truth is that information security almost invariably requires multiple overlapping or complementary controls. To say that this incident was the result of a lack of policy is distinctly misleading.
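The defence-in-depth point above can be illustrated with a back-of-the-envelope calculation. This is a minimal, hypothetical sketch: the probabilities are invented for illustration (they do not come from the Ferma incident), and it simplistically assumes the layers fail independently. The idea is just that several imperfect, overlapping controls leave far less residual risk than any single control, including a policy, on its own.

```python
# Illustrative (made-up) probabilities that an attack bypasses each layer.
controls = {
    "no-surfing policy (with partial compliance)": 0.5,
    "antivirus software":                          0.3,
    "network monitoring / firewall":               0.4,
    "dual authorization of large transfers":       0.1,
}

def residual_risk(bypass_probs):
    """Probability that an attack slips past every layer,
    assuming the layers fail independently."""
    risk = 1.0
    for p in bypass_probs:
        risk *= p
    return risk

single = residual_risk([0.5])               # the policy alone
layered = residual_risk(controls.values())  # all layers together
print(f"policy alone: {single:.3f}")        # 0.500
print(f"all layers:   {layered:.3f}")       # 0.006
```

Even with these generous assumptions, the single policy lets half the attacks through, while the layered controls together let through well under one percent.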

Jul 1, 2010

A throwaway comment in a convoluted machine-translated blog led me to a fascinating Wikipedia piece about Jeff Cooper, father of the "modern technique" of handgun shooting, in particular the concept of "condition white". Condition white describes the state of mind of someone who is totally oblivious to a serious threat to their personal safety. Cooper used it in relation to situations involving violent assault where the potential victims don't even appreciate that they are in danger and hence are not in the least bit alert to the signs of impending attack. The attacker therefore has the element of surprise.

The Wikipedia piece describes four levels recognized by Cooper:

"White - Unaware and unprepared. If attacked in Condition White, the only thing that may save you is the inadequacy or ineptitude of your attacker. When confronted by something nasty, your reaction will probably be "Oh my God! This can't be happening to me."

Yellow - Relaxed alert. No specific threat situation. Your mindset is that "today could be the day I may have to defend myself." You are simply aware that the world is a potentially unfriendly place and that you are prepared to defend yourself, if necessary. You use your eyes and ears, and realize that "I may have to SHOOT today." You don't have to be armed in this state, but if you are armed you should be in Condition Yellow. You should always be in Yellow whenever you are in unfamiliar surroundings or among people you don't know. You can remain in Yellow for long periods, as long as you are able to "Watch your six." (In aviation 12 o'clock refers to the direction in front of the aircraft's nose. Six o'clock is the blind spot behind the pilot.) In Yellow, you are "taking in" surrounding information in a relaxed but alert manner, like a continuous 360 degree radar sweep. As Cooper put it, "I might have to shoot."

Orange - Specific alert. Something is not quite right and has gotten your attention. Your radar has picked up a specific alert. You shift your primary focus to determine if there is a threat (but you do not drop your six). Your mindset shifts to "I may have to shoot HIM today," focusing on the specific target which has caused the escalation in alert status. In Condition Orange, you set a mental trigger: "If that goblin does 'x', I will need to stop him." Your pistol usually remains holstered in this state. Staying in Orange can be a bit of a mental strain, but you can stay in it for as long as you need to. If the threat proves to be nothing, you shift back to Condition Yellow.

Red - Condition Red is fight. Your mental trigger (established back in Condition Orange) has been tripped: "If 'X' happens, I will shoot that person.""

It occurred to me that it might be illuminating to reinterpret Cooper's color code in the information security context:

White - Unaware and unprepared. If attacked in Condition White by, say, some malware, a social engineer or hacker, the only thing that may save you is the inadequacy or ineptitude of your attackers. When confronted by something nasty, your reaction will probably be "Oh my God! This can't be happening to me", a state psychologists call 'denial'.

Yellow - Relaxed alert. No specific threat situation. You are simply aware that the virtual world is a potentially threatening place. You use your eyes, ears and security software to look out for digital threats, realizing that "There are almost certainly threats out there." You should be in Condition Yellow whenever you are in unfamiliar surroundings, such as exploring different websites or handling email from people you don't know. You can remain in Yellow indefinitely, just as long as you are mentally able to stay sufficiently alert. In Yellow, you are constantly "taking in" details about information flows and situations that unfold before you in a relaxed but alert manner, like a continuous 360 degree radar sweep. If you are too tired or distracted to keep up your guard, you should avoid risky behaviors, becoming more conservative in your online activities. With practice, however, Yellow gradually becomes your default state of mind.

Orange - Specific alert. Something is not quite right and has gotten your attention. Your radar has picked up a specific information security risk. You adjust your primary focus, assessing the threat to determine what is going on, whether you are vulnerable, what might be the outcome if so and hence whether it is a genuine risk. Your mindset shifts to "Looks like I am being scammed and/or my information is being compromised," focusing on the specific target which has caused the escalation in alert status. In Condition Orange, you set a mental trigger: "If that goblin does 'x', I will definitely need to report a security incident and seek help." Staying in Orange requires some concentration but you can stay in it for as long as you really need to. If the threat comes to nothing, you shift down to Condition Yellow, though you may still call the Help Desk to report your suspicions.

Red - Condition Red is fight-or-flight. Your mental trigger (established back in Condition Orange) has been tripped. You and your information assets are definitely under attack, or have already been compromised. You definitely need to call the Help Desk urgently to report the incident and take their advice on what to do about it.

Our security awareness materials aim to bring the whole organization up to Condition Yellow and maintain it at that minimum level, while at the same time giving people the ammunition (the knowledge, understanding and skills) to (a) appreciate when things might be turning Orange or Red, and (b) react appropriately if they do.

As an information security professional, I find myself at Orange or Red most of the time yet, despite the occasional tinges of paranoia, it's a relatively happy place for me. This phenomenon seems to set infosec pros apart from the crowd. I guess hackers are also comfortable in Orange or Red, with the additional motivation of creating or exploiting vulnerabilities rather than just finding and fixing them.

Oh and by the way, hackers have virtual enemies too. There's no honor among thieves.

In fact, all organizations must be constantly on their guard against social engineering attacks, contest or no contest. If the contest serves to raise awareness of the widespread, easily exploited vulnerabilities created by naive and inattentive people, then I am all in favor of it. Good on yer! There should be one every month! A big one, with headline coverage in all the news media! With special prizes for the organizations that successfully resisted the social engineering attacks for a specified period!

The announced contest is quite restrained, with pre-set rules that limit the target organizations, the nature of the attacks and the types of information to be exploited. Anyone who believes criminal hackers using social engineering techniques outside the artificial contest situation would respect such contrived rules is deluded. That's the real take-away lesson from this contest and the furore surrounding it: if a bunch of social engineers operating under the strict rules of the contest genuinely threatens your corporate information assets, then oh boy are you vulnerable to unethical attackers.

"Financial institutions need to educate users regarding their security roles and responsibilities. Training should support security awareness and strengthen compliance with security policies, standards, and procedures. Ultimately, the behavior and priorities of senior management heavily influence the level of employee awareness and policy compliance, so training and the commitment to security should start with senior management. Training materials for desktop and workstation users would typically review the acceptable-use policy and include issues like desktop security, log-on requirements, password administration guidelines, etc. Training should also address social engineering and the policies and procedures that protect against social engineering attacks. Many institutions integrate a signed security awareness agreement along with periodic training and refresher courses."

I'm relieved that they don't actually say "annual awareness training courses" there at the end, but unfortunately I'm sure that's how many of their more naive clients will interpret the advice. Annual courses are patently NOT the way to raise security awareness. They have never worked as intended, as anyone who has either run them or been forced to attend will surely agree. The change to ongoing/rolling security awareness programs makes all the difference. So if "periodic" actually meant "continuous", I'd support the FFIEC advice.

What do you make of the social engineering contest? Do you think it helps or hurts the cause for better information security? Comments are very welcome.

Jun 30, 2010

The rôle of human beings is arguably the most important topic in information security. July's security awareness materials explain what it means to develop a “culture of security”, for example changing employees’ attitudes towards security, encouraging them not to tolerate insecurity but to comply with security policies and make things secure by design, and in general encouraging employees to behave more securely in whatever they are doing.

This was an enjoyable module to write, being our home territory, and I hope the topic resonates with our customers.

Reducing the number and/or severity of security incidents (compared to a culture of insecurity) is the aim, of course, which is about creating genuine business value. There is a huge difference in cost-effectiveness between improving security by changing employee behaviors and doing so by changing IT systems and implementing additional technical controls: security awareness and training activities are significantly cheaper than new technologies. Best of all, they help the organization get more value from its technical controls too - a double winner.

Jun 25, 2010

Replace "coins" with "valuable assets" in this item about coin dealers needing to protect the coins in their vehicles and the advice is sound - well maybe. Actually, the best advice is never to leave such attractive and portable valuables in an unattended vehicle but sometimes that may in fact be a safer option than carrying them with you, for example to a busy street cafe where bag snatchers ply their trade.

The same applies to laptops, PDAs and the other paraphernalia of modern life. Don't forget that the value of the stored information may far exceed the hardware value, which is attractive enough in its own right to thieves.

Thieves who break into a car to steal briefcases, laptop bags and fancy electronics are going to be in and out in a flash. They have no qualms about smashing a window, forcing a door lock with a screwdriver, jemmying the door open or using a thin flat piece of metal to pop the lock. Anything you can do to slow them down will help, and strong security deters the casual thief. Not leaving valuables openly on show is arguably the simplest control - stuff hidden away is not going to attract the attention of a passing druggie. And key to that is the awareness that your stuff is extremely vulnerable, so don't drop your guard.

Why don't cars come with welded-in laptop safes or strongboxes, or at the very least those strong metal hoops allowing valuables to be secured in place with cable locks? Answers on a postcard please.

Jun 21, 2010

Here's a little challenge for you: is this email a legitimate, if somewhat reckless attempt at email marketing by McDonalds, or a cynical phishing attempt aimed at McD's more gullible patrons? [Click the image to see it full-sized.][And yes, it is safe to click the image in the blog, but probably not a good idea to visit the URL shown in the image. Trust me.][Bonus marks if you checked the URL in this blog before clicking it.][Extra bonus marks if you are running a relatively secure web browser in a hardened virtual machine, and are not that bothered about clicking links ...]

I'm lovin' it.

Regards,
Gary

[PS "Chups" are what my Kiwi colleagues buy from the excellent "fush and chup" shops here. McDs probably call them French Fries but they are about as French as I am.]

Jun 20, 2010

"Incident management plans can be considered one element of a company's risk insulation policy. If risk management is considered in terms of the insulating layers surrounding a live wire (the program or business activity), each layer of mitigation or management affords an additional level of protection against disruptive or harmful influences—thus making a risk pass through several layers of protection or defense prior to being able to cause harm."

That's a neat description of the well-known principle of 'defense in depth', which is one of a handful of high level and very broadly applicable information security principles.
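The insulation analogy can even be put in rough numbers: if each layer independently stops some fraction of attacks, the chance of a threat penetrating all of them is the product of the per-layer bypass probabilities. A minimal sketch, assuming (unrealistically, since real control failures are often correlated) that the layers fail independently:

```python
from math import prod

def penetration_probability(bypass_probs):
    """Probability that a threat passes every protective layer,
    assuming - simplistically - that the layers fail independently.

    bypass_probs: per-layer probability (0..1) that the layer is bypassed.
    """
    return prod(bypass_probs)

# Three layers that each stop 90% of attacks let through
# roughly one attack in a thousand overall:
residual = penetration_probability([0.1, 0.1, 0.1])
```

The independence assumption is the weak point of this back-of-the-envelope model: a single common cause (say, a power cut or a duped employee) can defeat several layers at once, which is exactly why diverse, overlapping controls beat multiple copies of the same one.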

I have an abiding interest in such high level principles, triggered by trying to compile a set of them to guide the development of our information security policy based on ISO/IEC 27002. The ISO standard presents a set of 39 control objectives which were relatively easy to convert into a corresponding set of 39 policy axioms. Needing an even more succinct and generic set of principles as well, I compiled a shortlist of 7 along the lines of: compliance with ISO/IEC 27002; information deserving protection as a valuable asset; controls needed to protect the confidentiality, integrity and availability of information; information security being pervasive throughout the entire organization; defense in depth ... and so on. Personally, I wish ISO/IEC 27002 would incorporate a set of high level principles of this nature, but ISO/IEC JTC1/SC27 has such ponderous and tortuous development processes that there's little realistic chance of this happening - unless someone finds a decent set of published infosec principles that would give us a head start.

Jun 10, 2010

"The incident management plan (IMP) is a generic and tactical component of the Business Continuity Management (BCM) Plan, offering pragmatic guidelines and responses to support immediate crisis events across a wide spectrum of risk issues as more mature and comprehensive measures are brought into play—typically meeting the needs of the first 24 to 72 hours of a crisis event."

Aside from the confusion of terms (exactly what is a 'crisis event'?), I support the author's key point about incident management spanning the gap between crisis management and continuity [and recovery] management.

In the same vein, information security incident management is but one form of incident management, which has a useful spin-off: if your organization doesn't already have a properly-thought-through process for managing information security incidents, there's a chance you might be able to tap into or plagiarise the processes used to handle other types of incident, for example health and safety incidents and perhaps even generic commercial events such as becoming a takeover target.

Unfortunately, if your organization doesn't have those either, you're out of luck mate! Managing information security incidents in a controlled and rational manner may be the least of your worries.

Just had to share this with you - a good website and blog about social engineering. I'm using it as part of the research for our forthcoming awareness module on human factors in information security, an enjoyable job this month.

Jun 9, 2010

ISACA has released an audit program or checklist to guide IT audits or reviews of the processes and systems supporting the management of information security incidents. It is free to ISACA members. It offers a well structured suite of issues to review and questions to ask. I like the fact that it identifies relevant requirements from COBIT and COSO, and covers the forensic aspects of incident investigation.

However, despite being 43 pages long, it almost completely ignores the final stages of a good practice incident management process where the organization applies the learning from incidents (including near misses) in order to improve its controls. This significant weakness in the audit program presumably stems from a corresponding weakness in COBIT and COSO, or at least the failure of the checklist's author to identify and address those requirements. Naturally our own incident management Internal Controls Questionnaire (supplied as part of this month's NoticeBored security awareness module) covers the entire process, and would suggest a number of additional checks and questions to add to the ISACA checklist for a more comprehensive review.

A significant information security incident came to light when Google was accused of secretly scanning WiFi signals and collecting data that some (most?) would consider private - details such as the WiFi SSID (network name) and MAC addresses (network addresses) of the WiFi devices - while its researchers were systematically driving around some 30 countries photographing places for its StreetView service (which itself raises some serious privacy issues).

Google's initial response to the incident was to deny it but the incident grew more serious as the news media and various privacy commissioners got wind of it. When Google admitted that it had been collecting WiFi data, it tried to underplay the significance: "Why did you not tell the DPAs that you were collecting WiFi network information? Given it was unrelated to Street View, that it is accessible to any WiFi-enabled device and that other companies already collect it, we did not think it was necessary. However, it’s clear with hindsight that greater transparency would have been better."

Later, Alan Eustace, Senior VP of Engineering and Research at Google, said "The engineering team at Google works hard to earn your trust—and we are acutely aware that we failed badly here. We are profoundly sorry for this error and are determined to learn all the lessons we can from our mistake," while CEO Eric Schmidt admitted Google "screwed up" and blamed a software engineer for writing the "rogue code" in 2006. This illustrates the power of accountability and governance, but the incident is not over yet.

Now, despite Google saying it has deleted at least some of the WiFi data in question, a number of countries are still considering taking legal action over the incident, under privacy and/or unauthorized interception of communications laws ... in other words, the incident has not yet been contained, let alone resolved.

All in all, this would have made an excellent case study for our awareness materials on incident management, and might yet do so in a future revision of the module. The incident clearly also has value on the privacy and wireless security topics.

If a similar incident had beset your organization, how would your management have handled it? What would you have done differently to Google? Discuss.


NBlogger is ...

Dr Gary Hinson PhD MBA CISSP has an abiding interest in human factors - the ‘people side’ as opposed to the purely technical aspects of information security. Gary's career stretches back to the mid-1980s as both practitioner and manager in the fields of IT system and network administration, information security and IT auditing. He has worked and consulted in the pharmaceuticals/life sciences, utilities, IT, engineering, defense, financial services and government sectors, for organizations of all sizes. Since 2003, he has been creating security awareness materials for clients (www.NoticeBored.com) and supporting users of the ISO27k standards (www.ISO27001security.com). In conjunction with Krag Brotby, he wrote "PRAGMATIC security metrics" (www.SecurityMetametrics.com). He is a keen radio amateur, often calling but seldom heard by distant stations on the HF bands.