Welcome to NBlog, the NoticeBored blog

Nov 30, 2015

In line with common practice, we've covered "information security risk" previously in the NoticeBored security awareness materials. Virtually all the awareness modules cover information security, so this time around we've refocused the module on information risk, especially information risk management (IRM).

The diagram below sums up the guts of the classic IRM process: identify then assess information risks, choose how to treat them, implement the treatments, then loop back to pick up and respond to changes.
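The loop can be sketched in code. This is a purely illustrative toy, not any particular standard's method: the scoring, risk appetite threshold and treatment choices are all assumptions made for the sketch.

```python
# A minimal sketch of the classic IRM loop: identify -> assess ->
# choose treatment, then re-run the cycle as circumstances change.
# All names and numbers here are illustrative assumptions.

def assess(risk):
    """Score a risk as likelihood x impact (each rated 1-5)."""
    return risk["likelihood"] * risk["impact"]

def choose_treatment(score, appetite=6):
    """Accept risks within appetite; mitigate the rest (a real
    process would also consider avoiding or transferring them)."""
    return "accept" if score <= appetite else "mitigate"

def irm_cycle(risks, appetite=6):
    """One pass through the loop, producing a treatment plan."""
    plan = []
    for risk in risks:
        score = assess(risk)
        plan.append({"name": risk["name"],
                     "score": score,
                     "treatment": choose_treatment(score, appetite)})
    return plan

risks = [{"name": "phishing", "likelihood": 4, "impact": 3},
         {"name": "flood",    "likelihood": 1, "impact": 5}]
plan = irm_cycle(risks)
print(plan)
```

In practice, of course, the "loop back" step is the hard part: the inputs change constantly, which is why the process has to keep cycling.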

There's more to it than that: for instance, information must flow to and from management (e.g. information risk levels, business priorities and risk appetite), while suitable metrics are necessary to manage and improve the process systematically.

Talking of which, I'm currently reading a fascinating account of how High-Reliability Organizations (HROs) use Highly Reliable Security Programs (HRSPs) to drive improvements in their information risk and security management activities. The book's author (Lance Hayden) lays out a strong case for milking every last drop of value from incidents and near-misses (or near-hits, as he calls them!) rather than - as most organizations do - paying lip-service to incident investigation, hoping that everything is quickly forgotten ... and consequently suffering the same or similar incidents repeatedly. As a fan of security metrics, security culture, and systematic learning and improvement, I find the core idea resonates with me. I'll be reviewing "People-centric Security: Transforming your Enterprise Security Culture" here as soon as I've finished the final chapters and mulled it over. It's certainly food for thought, so on that basis alone the book is well worth a look.

Nov 27, 2015

A new public alerting scheme for terrorism was introduced in Australia this week, with the 5 color-coded levels shown here. The previous scheme, introduced in 2003, had 4 levels (low, medium, high and extreme), primarily reflecting the scale or severity of the threat. The new scheme's levels primarily reflect the probability of an attack.

I'm puzzled because, as generally understood, risk reflects both aspects - the likelihood, probability or chance of an incident coupled with its scale, severity, consequences or impact.

With the new system, even if a threat is deemed "certain" and coded red, the scheme gives us no idea of the likely scale of the incident/s. Are we talking about a lone gunman on the rampage in one location, a coordinated series of attacks across a number of locations, or what?

Nov 26, 2015

With hindsight, perhaps it wasn't such a bright idea for an information security company to send out an email promoting phishing awareness, encouraging its readers to click an embedded blog link ... pointing to a different domain than the address of the sender of said email:

Nov 24, 2015

If you had the requisite access, skills and opportunity to defraud or otherwise exploit your employer (which, I suspect, many of us in this profession do), would you be tempted to take advantage? Not even a tiny bit? What if the ‘social contract’ with your employer was seriously strained for some reason - something had soured the relationship, putting your nose out of joint once too often?

If you were so inclined, how much effort would you be willing to expend to 'get your own back'? Would you feel justified in causing material harm? Would you be willing to break the law? Would it matter if you worked for a bank, the government, a charity or family business?

And how cautious/subtle/sneaky would you be about it? What if the potential prize on offer was, say, more than $10 million: how tempting would that be? How much caution and risk mitigation would $10m buy you?

Stories like that make me wonder idly about my personal integrity and ethics. If everyone has their price, what’s mine? Despite my high ideals and glinting halo, I suspect, regretfully, that it is very, very large ... but probably not infinite. I'm human, after all.

Even raising and contemplating the remote possibility makes me feel very uncomfortable in my own skin, but that's not a good reason to ignore the risk. I'm not talking about straightforward greed, malice and criminality here. We obviously need to deal with those but bad apples in our midst pose different challenges.

How on Earth can our organizations reasonably expect to counter such unlikely but severe insider threats? The guy in the news story made fundamental information security errors, clearly, but how many others slip through the net or, worse still, sneak under the radar without their indiscretions ever being discovered?

Thinking back over my career in information security and IT audit, I’ve worked with some absolutely superb, consummate professionals to whom I’d happily trust my life (literally), plus a few distinctly dubious characters who couldn’t sell me a used car … and, to be frank, I don’t know which category worries me more. My character assessment abilities have proven pretty good on the whole but definitely not perfect. I've been taken in by fakes and fraudsters from time to time: social engineering is a powerful weapon. I wonder what I've missed? I wonder who I might have slighted by doubting their word (just doing my job, you understand, but I do have a conscience).

Nov 20, 2015

Metrics in general are valuable because, in various ways, they support decisions. If they don't, they are at best just nice to know - 'coffee table metrics' I call them. If coffee table metrics didn't exist, we probably wouldn't miss them, and we'd have saved the cost of producing them.

So, what decisions are being made, should be made, or will need to be made concerning information risk and security? If we figure that out, we'll have a pretty good clue about which metrics we do or don't want.

- Decisions relating to threats, vulnerabilities and impacts - evaluating and responding to them;

- Decisions made by senior, middle or junior managers, by staff, and perhaps by or relating to business partners, contractors and consultants, advisors, stakeholders, regulators, authorities, owners and other third parties;

Notice that the bullets are non-exclusive: a single metric might support strategic decisions around information risks in technology involving a commercial cloud service, for instance, putting it in several of those categories.

If we systematically map out our current portfolio of security metrics (assuming we can actually identify them: do we even have an inventory or catalog of security metrics?) across all those categories, we'll probably notice two things.

First, for all sorts of reasons, we will probably find an apparent excess or surplus of metrics in some areas and a dearth or shortage elsewhere. That hints at identifying and developing additional metrics in some areas, and cutting down on duplicates or failing/coffee-table metrics where there seem to be too many - which is itself a judgement call or a decision about metrics, and not as obvious as it may appear. Simplistically aiming for a "balance" of metrics across the categories is a naive approach.

Second, some metrics will pop up in multiple categories ... which is wonderful. We've just identified key metrics. They are more important than most since they evidently support multiple decisions. We clearly need to be extra careful with these metrics since data, analysis or reporting issues (such as errors and omissions, or unavailability, or deliberate manipulation) are likely to affect multiple decisions.
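The mapping exercise itself is simple enough to sketch in a few lines of code. The portfolio, metric names and category labels below are entirely hypothetical, just to show the shape of the analysis: key metrics fall out as those tagged with more than one category, and gaps fall out as categories with no metrics at all.

```python
from collections import defaultdict

# Hypothetical metrics portfolio: each metric tagged with the
# decision categories it supports (names are illustrative).
portfolio = {
    "mean time to patch":     {"technology", "operational"},
    "awareness survey score": {"people", "strategic"},
    "cloud risk score":       {"technology", "strategic", "third-party"},
    "audit findings open":    {"compliance"},
}

categories = {"technology", "operational", "people", "strategic",
              "third-party", "compliance", "financial"}

# Key metrics support decisions in more than one category.
key_metrics = sorted(m for m, cats in portfolio.items() if len(cats) > 1)

# Counting metrics per category hints at surpluses and shortages.
coverage = defaultdict(list)
for metric, cats in portfolio.items():
    for cat in cats:
        coverage[cat].append(metric)

# Categories with no metrics at all are the obvious gaps.
gaps = categories - set(coverage)
print(key_metrics)
print(sorted(gaps))
```

Here the hypothetical "cloud risk score" emerges as a key metric (it supports three categories), while the "financial" category has no metrics at all - a gap worth a closer look.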

Overall, letting decisions and the associated demand for information determine the organization's choice of metrics makes a lot more sense than the opposite "measure everything in sight" data-supply-driven approach. What's the point in measuring stuff that nobody cares about?

While listening to a couple of ISSA webinars on security awareness and idly scribbling notes to myself, I've been mulling over the common refrain that 'We just don't have the resources for security awareness'.

One of the speakers said something along the lines of "I've never had the luxury of anyone on the payroll to do security awareness, except me and I'm always busy. I don't think we'll ever have anyone to do it full time, maybe a quarter FTE next year if we're lucky". This is for a healthcare organization with over 20,000 employees. That struck me as a depressing, almost defeatist attitude. I honestly struggle to believe that their management doesn't support security awareness, given how absolutely crucial it undoubtedly is to meet their security and privacy obligations and business challenges. How can they possibly afford NOT to do security awareness? I suspect the real problem lies not so much with management's resistance to the idea but with the lack of push. Too much else on the go maybe. Nobody with the oomph to build a convincing business case perhaps.

I thought I'd share with you some more optimistic tips on how to make a success of security awareness even if you are resource-constrained:

Figure out what resources you actually have. Hint: it's not zero. For starters, you are thinking about it right now. Your head-space and interest in the topic constitutes a resource. So are your learned colleagues in Information Security plus related departments and teams such as Risk, Compliance, Site Security, HR, Health and Safety, Operations, IT, Training, Employee and Corporate Comms. Friends/supporters of security throughout the organization are resources (security is everybody's business, remember). This blog and a gazillion others are resources. There are websites, professional associations, Google, social media, magazines, newspapers, the TV news and documentaries. There are textbooks and articles, vendor white papers (and courses and collateral and freebies), marketing materials, course books ... Even if you think you have nothing, in reality you have enough to make a start. Actually, you have more than enough: the hard part is sifting through for valuable nuggets, and making good use of the available resources. So please don't pretend you are a pauper. You are information rich - time poor maybe, but hardly destitute.

Beg, steal or borrow even more resources. Hustle. Horse-trade. Collaborate with your colleagues - the pro colleagues noted above plus 'management' and last but not least 'staff' (you are proactively investing in your personal social network throughout and beyond the organization, right? ....) If you can, call upon, dip into or exploit other departments' resources e.g. the training budget; the new employee orientation budget; the corporate comms budget; the intranet/web development budget; business and IT project budgets ... Use interns and temps. Call in favors and offer your skills and expertise to those who need it (every such interaction is an awareness opportunity). Use internal surveys, competitions and challenges to both engage your workforce and develop additional content (anecdotes and case study materials, for instance) and metrics. Find people who are good at what you need. [Blatant plug: Farm out the hard graft of researching and preparing creative content to security awareness professionals who relish the opportunity. It's cheaper, easier and more effective than doing it all yourself!]

Milk every last drop of value out of the resources you do have. Work your resources harder. Get creative. Challenge and encourage colleagues to come up with good ideas. Prioritize. Consider the value of the activities you are currently doing and planning/thinking about. Invest in things that will deliver value over the long term rather than just spending on short-term fixes for immediate needs. Scrimp and save, manage your resources. Squeeze the slack out of other activities and divert/redeploy the funds and other resources towards more cost-effective stuff. Play the games that people play. If you must, overspend on things that management can't reasonably deny are important. Watch the pennies. Track the value.

Measure and improve systematically. Use maturity measures, surveys and other metrics to get on the front foot. Instead of lamely alluding to progress, success and value, dig out and exploit hard evidence demonstrating that security awareness activities are actually delivering beneficial cultural changes in the organization and ultimately, of course, saving money by reducing the number and severity of incidents. Demonstrate that your awareness program is adding real value, and that the organization would be much the poorer without it (the straw man approach). Be explicit and specific about the resourcing constraints on what you do and can achieve in order to justify and persuade management to make additional targeted investments, or at least to reprioritize things ...

Aim higher. Justify and push for (further/sufficient) investment in security awareness. Learn from other departments, projects and initiatives about getting support for ... initiatives, projects and departments. Develop a coherent, sensible strategy that pushes security awareness and training as a way to support and enable the business (please, not security for security's sake; by all means make awareness an integral part of your overall information or cyber security program but make darned sure that is business-driven) and sell it. Garner management's support through a well-constructed business case and plenty of one-on-one time informing, refining, persuading and motivating management. Most of all make it work. Meeting/exceeding management's expectations is important to your credibility, if you expect future budget requests and proposals to be well received.

PS Here's a free bonus tip. I mentioned 'scribbling notes to myself'. My main notebook is a virtual scratchpad, nothing fancy, just a plain text file linked from my desktop. I quickly jot down bright ideas that I come across or come up with so that I can contemplate, develop, combine and use them later on when I have more time. I also use this blog as a place to document, develop and share my thoughts. Priceless!

Nov 12, 2015

I wonder if any far-sighted organizations are using a database/systems approach to their metrics? It seems to me a logical approach given that there is a lot of measurement data swilling around the average corporation (including but not only data relating to information risk, security, control, governance, compliance and privacy). Why not systematically import the data into a metrics database system for automated analysis and presentation purposes? Capture the data once, manage it responsibly, use it repeatedly, and milk the maximum value from it, right?
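To make the idea concrete, here's a tiny sketch using an in-memory SQLite database. The table layout, metric names and values are all made up for illustration; the point is simply "capture once, query many ways" - the same stored measurements feed both a trend query and a latest-values dashboard query.

```python
import sqlite3

# A minimal sketch of a metrics database: capture each measurement
# once, then query it repeatedly for analysis and reporting.
# Schema and metric names are illustrative assumptions.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE measurement (
    metric TEXT,
    period TEXT,
    value  REAL
)""")
rows = [("patch latency days",    "2015-10", 14.0),
        ("patch latency days",    "2015-11",  9.5),
        ("awareness quiz pass %", "2015-11", 82.0)]
con.executemany("INSERT INTO measurement VALUES (?, ?, ?)", rows)

# One store, many uses: e.g. the trend for a given metric ...
trend = con.execute(
    "SELECT period, value FROM measurement "
    "WHERE metric = ? ORDER BY period",
    ("patch latency days",)).fetchall()

# ... or the latest value of every metric, for a dashboard.
latest = con.execute(
    "SELECT metric, value FROM measurement m "
    "WHERE period = (SELECT MAX(period) FROM measurement "
    "                WHERE metric = m.metric)").fetchall()
print(trend)
print(latest)
```

A real system would obviously need validated feeds, access controls and proper reporting tools on top, but the core idea is no more exotic than this.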

If you think that's a naive, impracticable or otherwise krazy approach, please put me straight. What am I missing? Why is it that I never seem to hear about metrics databases, other than generic metrics catalogs (which are of limited value IMNSHO) and Management Information Systems (which were all the rage in the 80s but strangely disappeared from sight in the 90s)?

Conversely, if your organization has a metrics database system, how is it working out in practice? What can you share with us about the pros and cons?

Look what just fell into my inbox. Legit, crude spear-phish or just plain nuts?

I already own ISO27001security.com which is presumably why they think I might be interested in iso27k.com (I'm not!), but this is such an obvious con, I'd have to be a complete mindless idiot to fall for it.

[I've crudely redacted their URL: please don't try to reconstruct and visit it unless you actually want your system to be compromised - and don't blame me!]

Hot topic

NBlogger is ...

Dr Gary Hinson PhD MBA CISSP has an abiding interest in human factors - the ‘people side’ as opposed to the purely technical aspects of information security. Gary's career stretches back to the mid-1980s as both practitioner and manager in the fields of IT system and network administration, information security and IT auditing. He has worked and consulted in the pharmaceuticals/life sciences, utilities, IT, engineering, defense, financial services and government sectors, for organizations of all sizes. Since 2003, he has been creating security awareness materials for clients (www.NoticeBored.com) and supporting users of the ISO27k standards (www.ISO27001security.com). In conjunction with Krag Brotby, he wrote "PRAGMATIC security metrics" (www.SecurityMetametrics.com). He is a keen radio amateur, often calling but seldom heard by distant stations on the HF bands.