Welcome to NBlog, the NoticeBored blog

Sep 13, 2019

ISO/IEC 27001 concerns at least* two distinct classes of risk - ISMS risks and information risks** - causing confusion. With hindsight, the ISO/IEC JTC1 mandate to require a main-body section ambiguously titled "Risks and opportunities" in all the certifiable management system standards was partly to blame for the confusion, although the underlying issue pre-dates that decision: you could say the decision forced the U-boat to the surface.

That is certainly not the only issue with '27001. Confusion around the committee's and the standard's true intent with respect to Annex A remains to this day: some committee members, users and auditors believe Annex A is a definitive if minimalist list of infosec controls, hence the requirement to justify Annex A exclusions ... rather than justify Annex A inclusions. It is strongly implied that Annex A is the default set. In the absence of documented and reasonable statements to the contrary, the Annex A controls are presumed appropriate and necessary ... but the standard’s wording is quite ambiguous, both in the main body clauses and in Annex A itself.

In ISO-speak, the use of ‘shall’ in "Normative" Annex A indicates mandatory requirements; also, main body clause 6.1.3(c) refers to “necessary controls” in Annex A – is that ‘necessary for the organization to mitigate its information risks’ or ‘necessary for compliance with this standard and hence certification’?

Another issue with '27001 concerns policies: policies are mandated in the main body and recommended in Annex A. I believe the main body is referring to policies concerning the ISMS itself (e.g. a high-level policy - or perhaps a strategy - stating that the organization needs an ISMS for business reasons) whereas Annex A concerns lower-level information security-related policies … but again the wording is somewhat ambiguous, hence interpretations vary (and yes, mine may well be wrong!).

There are other issues and ambiguities within ISO27k, and more broadly within the field of information risk and security management.

Way down in the weeds of Annex A, “asset register” is an ambiguous term composed of two ambiguous words. Having tied itself in knots over the meaning of “information asset” for some years, the committee eventually reached a truce by replacing the definition of “information asset” with a curious and unhelpful definition of “asset”: the dictionary does a far better job of it! In this context, "register" is generally understood to mean some sort of list or database ... but what are the fields and how much granularity is appropriate? Annex A doesn't specify.

But wait, there’s more! The issues extend beyond '27001. The '27006 and '27007 standards are (I think!) intended to distinguish formal compliance audits for certification purposes from audits and reviews of the organization’s information security arrangements for information risk management purposes. Aside from the same issue about the mandatory/optional status of Annex A, there are further ambiguities tucked away in the wording of those standards, not helped by some committee members’ use of the term “technical” to refer to information security controls, leading some to open the massive can-o-worms labelled “cyber”!

Having said all that, we are where we are. The ISO27k standards are published, warts and all. The committee is doing its best both to address such ambiguities and to keep the standards as up to date as possible, given the practical constraints of reaching consensus among a fairly diverse global membership using ISO’s regimented and formal processes, and the ongoing evolution of this field. Those ambiguities can be treated as opportunities for both users and auditors to make the best of the standards in various contexts, and in my experience rational negotiation (a ‘full and frank discussion’) will normally resolve any differences of opinion between them. I’d like to think everyone is ultimately aligned on reaching the best possible outcome for the organization, meaning an ISMS that fulfills various business objectives relating to the systematic management of information risks.

* I say ‘at least’ because a typical ISMS touches on other classes of risk too (e.g. compliance risks, business continuity risks, project/programme management risks, privacy risks, health and safety risks, plus general commercial/business risks), depending on how precisely it is scoped and how those risk classes are defined/understood.

** I’ve been bleating on for years about replacing the term “information security risk”, as currently used but not defined as such in the ISO27k standards, with the simpler and more accurate “information risk”. To me, that would be a small but significant change of emphasis, reminding all concerned that what we are trying to protect - the asset - is, of course, information. I’m delighted to see more people using “information risk”. One day, maybe we’ll convince SC27 to go the same way!

Sep 12, 2019

This week, I'm thinking about management activities throughout the metrics lifecycle.

Most metrics have a finite lifetime. They are conceived, used, hopefully reviewed and maybe changed, and eventually dropped or replaced by something better.

Presumably weak/bad metrics don't live as long as strong/good ones - at least that's a testable hypothesis provided we have a way to measure and compare the quality of different metrics (oh look, here's one!).

Ideally, every stage of a metric's existence is proactively managed, i.e.:

New metrics should arise through a systematic, structured process involving analysis, elaboration and creative thinking on how to satisfy a defined measurement need: that comes first. Often, though, the process is more mysterious. Someone somehow decides that a particular metric will be somewhat useful for an unstated, ill-defined and barely understood purpose;

Potential metrics should be evaluated, refined, and perhaps piloted before going ahead with their implementation. There are often many different ways to measure something, with loads of variations in how they are analyzed and presented, hence it takes time and effort to rationalize metrics down to a workable shortlist leading to final selection. This step should take into account the way that new or changed metrics will complement and support or replace others, taking a 'measurement system' view. Usually, however, this step is either skipped entirely or superficial. In my jaundiced opinion, this is the second most egregious failure in metrics management, after the previous lack of specification;

Various automated and manual measurement activities operate routinely during the working life of a metric. These ought to be specified, designed, documented, monitored, controlled and directed (in other words managed) in the conventional manner but rarely are. No big deal in the case of run-of-the-mill metrics which are simple, self-evident and of little consequence, but potentially a major issue (an information risk, no less) for "key" metrics supporting vital decisions with significant implications for the organization;

The value of a metric should be monitored and periodically reviewed and evaluated in terms of its utility, cost-effectiveness etc. That in turn may lead to adjustments, perhaps fine-tuning the metric or else a more substantial change such as supplementing or dropping it. More often (in my experience) nobody takes much interest in a metric until/unless something patently fails. I have yet to come across any organization undertaking 'preventive maintenance' on its information risk and security metrics, or for that matter any metrics whatsoever - at least, not explicitly and openly.

If a metric is to be dropped (retired, stopped), that decision should be made by relevant management (the metric's owner/s especially), taking account of the effect on management information and any decision-making that previously relied upon it ... which implies knowing what those effects are likely to be. In practice, many metrics circulate without anyone being clear about who owns or uses them, how and what for. It's a mess.
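For what it's worth, the lifecycle stages above can be sketched as a simple state machine, with review looping back into operation (fine-tuning) or leading to retirement. This is purely illustrative: the stage names, transitions and the example metric are my own paraphrase, not any standard.

```python
from enum import Enum, auto

class Stage(Enum):
    """Lifecycle stages for a metric, per the stages described above."""
    PROPOSED = auto()      # arises from a defined measurement need
    EVALUATED = auto()     # refined and perhaps piloted against alternatives
    OPERATING = auto()     # routinely measured, monitored and managed
    UNDER_REVIEW = auto()  # periodic utility/cost-effectiveness check
    RETIRED = auto()       # formally dropped by the metric's owner

# Permitted transitions: review can loop back to operation or end in retirement.
TRANSITIONS = {
    Stage.PROPOSED: {Stage.EVALUATED},
    Stage.EVALUATED: {Stage.OPERATING, Stage.RETIRED},
    Stage.OPERATING: {Stage.UNDER_REVIEW},
    Stage.UNDER_REVIEW: {Stage.OPERATING, Stage.RETIRED},
    Stage.RETIRED: set(),
}

class Metric:
    def __init__(self, name, owner, purpose):
        # Knowing the owner and purpose up front is half the battle.
        self.name, self.owner, self.purpose = name, owner, purpose
        self.stage = Stage.PROPOSED
        self.history = [Stage.PROPOSED]

    def advance(self, new_stage):
        # Refuse ad hoc jumps, e.g. straight from proposal to operation.
        if new_stage not in TRANSITIONS[self.stage]:
            raise ValueError(f"{self.stage.name} -> {new_stage.name} not allowed")
        self.stage = new_stage
        self.history.append(new_stage)

m = Metric("patch latency", owner="IT Ops", purpose="track time-to-patch")
m.advance(Stage.EVALUATED)
m.advance(Stage.OPERATING)
m.advance(Stage.UNDER_REVIEW)
m.advance(Stage.RETIRED)
print([s.name for s in m.history])
```

The point of the exercise is the `TRANSITIONS` table: if your organization cannot say which stage each of its metrics is in, let alone who owns it, the lifecycle isn't being managed.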

Come on, this is hardly rocket surgery. Information risk and security metrics are relatively recent additions to the metrics portfolio so it's not even a novel issue, and yet I feel like I'm breaking new ground here. Oh oh.

I should probably research fields such as finance and engineering with mature metrics, for clues about good metrics management practices that may be valuable for the information risk and security field.

Sep 11, 2019

Since ISO27k is [information] risk-driven, poor quality risk management is a practical as well as a theoretical problem.

In practical terms, misunderstanding the nature of [information] risk, particularly the ‘vulnerability’ aspect, leads to errors and omissions in the identification, analysis and hence treatment of [information] risks. The most common issue I see is people equating ‘lack of a control’ with ‘vulnerability’. To me, the presence or absence of a control is quite distinct from the vulnerability, in that vulnerability is an inherent weakness or flaw in something (e.g. an IT system, an app, a process, a relationship, a contract or whatever). Even a control has vulnerabilities, yet we tend to forget, discount or simply ignore the fact that controls aren’t perfect: they can and do fail in practice, with several information risk management implications. Think about it: when was the last time you seriously considered the possibility that a control might fail? Did you identify, evaluate and treat that secondary risk in a systematic and formal manner … or did you simply get on with things informally? Have you ever done a risk analysis on your “key controls”? Do you actually know which of your organization’s controls are “key”, and why? That's a bigger ask than you may think. Try it and you'll soon find out, especially if you ask your colleagues for their inputs.

In theoretical terms, risk is all about possibilities and uncertainties, i.e. probability. Using simplified models with defined values, it may be technically possible to calculate a precise probability for a given situation under laboratory conditions, but that doesn’t work so well in the real world, which is more complex and variable, involving factors that are partially unknown and uncontrolled. We have the capability to model groups of events, populations of threat actors, types of incident etc. but accurately predicting specific events and individual items is much harder, verging on impossible in practice. So even extremely careful, painstaking risk analysis still doesn’t generate absolute certainty. It reduces the problem space to a smaller area (which is good!), but not to a pinpoint dot (such precision that we would know what we are dealing with, hence we can do precisely the right things). What’s more, ‘extremely careful’ and ‘painstaking’ implies slow and costly, hence the approach is generally infeasible for the kinds of real-world situations that concern us. Our risk management resources are finite, while the problem space is large and unbounded. The sky is awash with risk clouds, and they are all moving!
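A crude Monte Carlo sketch illustrates the point that careful analysis narrows the problem space to a range, not a pinpoint dot. The incident frequency and impact distributions below are entirely invented for illustration; real analyses would need calibrated inputs.

```python
import random
import statistics

random.seed(42)  # fixed seed so the sketch is repeatable

def simulate_annual_loss(trials=10_000):
    """Crude Monte Carlo: both incident frequency and impact are
    uncertain, so the output is a distribution, not a single number."""
    losses = []
    for _ in range(trials):
        # Assumed inputs: 0-4 incidents per year, each costing a
        # lognormally distributed amount with median around $50k.
        n_incidents = random.randint(0, 4)
        loss = sum(random.lognormvariate(10.8, 0.8) for _ in range(n_incidents))
        losses.append(loss)
    return losses

losses = sorted(simulate_annual_loss())
median = statistics.median(losses)
p90 = losses[int(0.9 * len(losses))]
print(f"median annual loss ~ ${median:,.0f}, 90th percentile ~ ${p90:,.0f}")
```

Even with perfectly known input distributions (which we never have), the best we get is a spread of plausible outcomes, and the tail is always fatter than we'd like.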

Complicating things still further, we are generally talking about ‘systems’ involving human beings (individuals and organizations, teams, gangs, cabals and so on), not [just] robots and deterministic machines. Worse, some of the humans are actively looking to find and exploit vulnerabilities, to break or bypass our lovely controls, to increase rather than decrease our risk. The real-world environment or situation within which information risks exist is not just inherently uncertain but, in part, hostile.

So, in the face of all that complexity, there is obviously a desire/need to simplify things, to take short cuts, to make assumptions and guesses, to do the best we can with the information, time, tools and other resources at our disposal. We are forced to deal with priorities and pressures, some self-imposed and some imposed upon us. ISO27k attempts to deal with that by offering ‘good practices’ and ‘suggested controls’. One of the ‘good practices’ is to identify, evaluate and treat [information] risks systematically within the real-world context of an organization that has business objectives, priorities and constraints. We do the best we can, measure how well we’re doing, and seek to improve over time.

At the same time, despite the flaws, I believe risk management is better than specified lists of controls. The idea of a [cut down] list of information security controls for SMEs is not new, e.g. “key controls” were specifically identified with little key icons in the initial version of BS7799, I think, or possibly the code of practice that preceded it. That approach was soon dropped because what is key to one organization may not be key to another, so instead today’s ISO27k standards promote the idea of each organization managing its own [information] risks. The same concerns apply to other lists of ‘recommended’ controls such as those produced by CIS, SANS, CSA and others, plus those required by PCI-DSS, privacy laws and other laws, regs and rulesets including various contracts and agreements. They are all (including ISO27k) well-meaning but inherently flawed. Better than nothing, but imperfect. Good practice, not best practice.

The difference is that ISO27k provides a sound governance framework to address the flaws systematically. It’s context-dependent, an adaptive rather than fixed model. I value that flexibility.

Sep 6, 2019

I've swapped a couple of emails this week with a colleague concerning the principles and axioms behind information risk and security, including the infamous CIA triad.

According to some, information security is all about ensuring the Confidentiality, Integrity and Availability of information ... but for others, CIA is not enough, too simplistic maybe.

If we ensure the CIA of information, does that mean it is secure?

Towards the end of the last century, Donn Parker proposed a hexad, extending the CIA triad with three (or is it four?) further concepts, namely:

Possession or control;

Authenticity; and

Utility.

An example illustrating Donn's 'possession or control' concept/s would be a policeman seizing someone's computer device intending to search it for forensic evidence, then finding that the data are strongly encrypted. The police physically possess the data but, without the decryption key, are denied access to the information. So far, that's simply a case of the owner using encryption to prevent access and so prevent availability of the information to the police, thereby keeping it confidential. However, the police might yet succeed in guessing or brute-forcing the key, or exploiting a vulnerability in the encryption system (a technical integrity failure), hence the owner is currently less assured of its confidentiality than if the police did not possess the device. Assurance is another aspect of integrity.
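The possession-without-access idea can be demonstrated in a few lines of Python. The 'cipher' below is a toy built on SHA-256 in counter mode, used here purely for illustration; it is emphatically not a vetted encryption scheme, and the key and plaintext are invented.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 of key + counter. Illustration only,
    NOT a reviewed or recommended cipher."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor_encrypt(key: bytes, data: bytes) -> bytes:
    # XOR with the keystream; applying it twice recovers the plaintext.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

secret = b"strongly protected secret"
key = b"owner's strong passphrase"
ciphertext = xor_encrypt(key, secret)

# The 'police' now possess the ciphertext (and the device) ...
assert ciphertext != secret
# ... but only the key-holder retains access to the information:
assert xor_encrypt(key, ciphertext) == secret
```

Possession of the bytes and access to the information come apart precisely because the control (encryption) interposes the key, which is why the residual risk hinges on how well that key is protected.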

Another example concerns intellectual property: although I own and have full access to a physical book, I do not generally have full rights over the information printed within. I possess the physical expression, the storage medium, but don't have full control over the intangible intellectual property. The information is not confidential, but its availability is limited by legal and ethical controls, which I uphold because I have strong personal integrity. QED

Personally, I feel that Donn's 'authenticity' is simply an integrity property. It is one of many terms I've listed below. If something is authentic, it is true, genuine, trustworthy and not a fake or counterfeit. It can be assuredly linked to its source. These aspects all relate directly to integrity.

Similarly, Donn's 'utility' property is so close as to be practically indistinguishable from availability. In the evidence seizure example, the police currently possess the encrypted data but, lacking the key or the tools and ability to decrypt it, find the information unavailable. There are differences between the data physically stored on the storage medium and the intangible information content, sure, but I don't consider 'utility' a distinct or even useful property.

Overall, the Parkerian hexad is an interesting perspective, a worthwhile challenge that doesn't quite make the grade, for me. That it takes very specific, carefully-worded, somewhat unrealistic scenarios to illustrate and explain the 3 additional concepts, scenarios that can be readily rephrased in CIA terms, implies that the original triad is adequate. Sorry Donn, no cigar.

In its definition of information security, ISO/IEC 27000 lays out the CIA triad then notes that "In addition, other properties, such as authenticity, accountability, non-repudiation, and reliability can also be involved". As far as I'm concerned, authenticity, accountability and non-repudiation are all straightforward integrity issues (e.g. repudiation breaks the integrity of a contract, agreement, transaction, obligation or commitment), while reliability is a mix of availability and integrity. So there's no need to mention them, or imply that they are somehow more special than all the other concepts that could have been called out but aren't even mentioned ....

Integrity is a fascinatingly rich and complex concept, given that it has a bearing on aspects such as:

Reputation, image and credibility - very important and valuable in the case of brands, for instance.

Confidentiality is pretty straightforward, although sometimes confused with privacy. Privacy partially overlaps confidentiality but goes further into aspects such as modesty and personal choice, such as a person's right to control disclosure and use of information about themselves.

Availability is another straightforward term with an interesting wrinkle. Securing information is as much about ensuring the continued availability of information for legitimate purposes as it is about restricting or preventing its availability to others. It's all too easy to over-do the security controls, locking down information so far that it is no longer accessible and exploitable for authorized and appropriate uses, thereby devaluing it. Naive, misguided attempts to eliminate information risk tend to end up in this sorry state. "Congratulations! You have secured my information so strongly that it's now useless. What a pointless exercise! Clear your desk: you're fired!"

Summing up, the CIA triad is a very simple and elegant expression of a rather complex and diffuse cloud of related issues and aspects. It has stood the test of time. It remains relevant and useful today. I commend it to the house.

Sep 5, 2019

This week I've been contemplating the right to repair movement, promoting the idea that consumers and third parties (such as repair shops) should not be legally denied the right to meddle with the stuff they have bought - to diagnose, repair and update it - without being forced to go back to the original manufacturer (a monopolistic constraint) or throw it away and buy a replacement.

Along similar lines, I am leaning towards the idea that products generally ought to be repairable and modifiable rather than disposable. That is, they should be designed with ‘repairability’ as a requirement, as well as safety, functionality, standards compliance, value, reliability and what have you. I appreciate that miniaturization, surface mounting, multi-layer PCBs, flow soldering and robotic parts placement make modern day electronic gizmos small and cheap as well as tough to repair, but obsolescence shouldn’t be built-in, deliberately, by default. Gizmos can still have test points, self-testing and diagnostics, with replaceable modules, with diagrams, fault-finding instructions and spare parts.

The same consideration applies, by the way, to proprietary software and firmware, not just hardware. Clearly documented source code, with debugging facilities, 'instrumentation' and so on, should be available for legitimate purposes - checking and updating the information security aspects for instance.

On the other hand, there are valuable Intellectual Property Rights to protect, and in some cases 'security by obscurity' is a valid though fragile control.

Perhaps it's appropriate that monopolistic companies churning out disposable, over-priced products to a captive market should consider their intellectual property equally disposable. Perhaps not. Actually I think not because I believe the concept of IPR as a whole trumps the greed of certain tech companies.

The real problem with IPR, as I see it, is China, or more specifically the Chinese government ... and I guess the Chinese have a vested interest in disposability. So that's a dead end then.

Sep 4, 2019

Among other things, the awareness seminars in September's NoticeBored module on hacking make the point that black hats are cunning, competent and determined adversaries for the white hats. In risk terms, hacking-related threats, vulnerabilities and impacts are numerous and (in some cases) substantial - a distinctly challenging combination. As if that's not enough, security controls can only reduce rather than completely eliminate the risk, so despite our best efforts, there's an element of inevitability about suffering harmful hacking-related incidents. It's not a matter of 'if' but 'when'.

All very depressing.

However, all is not lost. For starters, mitigation is not the only viable risk treatment option: some hacking-related risks can be avoided, while insurance plus incident and business continuity management can reduce the chances of things spiraling out of control and becoming critical, in some cases literally fatal.

Another approach is not just to be good at identifying and responding effectively to incidents, but to appear strong and responsive. So, if assorted alarms are properly configured and set, black hat activities that ought to trigger them should elicit timely and appropriate responses ... oh but hang on a second. The obvious, direct response is not necessarily appropriate or the best choice: it depends (= is contingent) on circumstances, implying another level of information security maturity.

'Intelligent response' is a difficult area to research since those practicing it are unlikely to disclose all the details, for obvious reasons. We catch little glimpses of it in action from time to time, such as bank fraud systems blocking 'suspicious' transactions in real time (impressive stuff, given the size and number of the haystacks in which they are hunting down needles!). We've all had trouble convincing various automated captchas that we are, in fact, human: there the obvious response is the requirement to take another test, but what else is going on behind the scenes at that point? Are we suddenly being watched and checked more carefully than normal? Can we expect an insistent knock at the door any moment?

In the spirit of the quotation seen on the poster thumbnail above, I'm hinting at deliberately playing on the black hats' natural paranoia. They know they are doing wrong, and (to some extent) fear being caught in the act, all the more so in the case of serious incidents, the ones that we find hardest to guard against. Black hats face information risks too, some of which are definitely exploitable - otherwise, they would never end up being prosecuted or even blown to smithereens. That means they have to be cautious and alert, so a well-timed warning might be all it takes to stop them in their tracks, perhaps sending them to a softer target.

Network intrusion detection and prevention systems are another example of this kind of control. Way back when I was a nipper, crude first-generation firewalls simply blocked or dropped malicious network packets. Soon after, stateful firewalls came along that were able to track linked sequences of packets, dealing with fragmented packets, out-of-sequence packets and so on. Things have moved on a long way in the intervening decades, so I wonder just how sophisticated and effective today's artificial intelligence-based network and system security systems really are in practice, for those who can afford them anyway. Do they have 'unpredictability' options with 'randomness' or 'paranoia' settings?
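For the curious, the core of the stateful idea fits in a few lines of Python. The policy, addresses and port numbers below are hypothetical, and real firewalls track far more (sequence numbers, timeouts, protocol state), but the principle is the same: replies are only allowed if they match a connection the firewall has already seen opened.

```python
# Minimal sketch of a stateful packet filter. Hypothetical policy:
# only outbound HTTPS (port 443) may open a connection.
ALLOWED_OUTBOUND = {443}

class StatefulFilter:
    def __init__(self):
        self.connections = set()  # (src, dst, dport) tuples we've tracked

    def outbound(self, src, dst, dport):
        # Outbound packets open state if the policy permits them.
        if dport in ALLOWED_OUTBOUND:
            self.connections.add((src, dst, dport))
            return "allow"
        return "drop"

    def inbound(self, src, dst, sport):
        # Inbound packets must match tracked state (a reply to a
        # connection we opened); unsolicited packets are dropped.
        if (dst, src, sport) in self.connections:
            return "allow"
        return "drop"

fw = StatefulFilter()
assert fw.outbound("10.0.0.5", "93.184.216.34", 443) == "allow"
assert fw.inbound("93.184.216.34", "10.0.0.5", 443) == "allow"  # reply
assert fw.inbound("203.0.113.9", "10.0.0.5", 443) == "drop"     # unsolicited
```

Contrast that with a crude first-generation filter, which would have to make the same allow/drop decision for every packet in isolation, with no memory of what came before.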

Sep 3, 2019

ISO/IEC 27001:2013 section 5.2 is normally interpreted as the top layer of the ‘policy pyramid’.

As with all the main body text in ‘27001, the wording of clause 5.2 is largely determined by:

(a) ISO/IEC JTC1 insisting on commonality between all the management systems standards, hence you’ll find much the same mandated wording in ISO 9000 and the others; and

(b) the need to spell out reasonably explicit, unambiguous ‘requirements’ against which certification auditors can objectively assess compliance.

Personally, when reading and interpreting clause 5.2, I have in mind something closer to “strategy” than what information security pros would normally call “policy” - in other words a visionary grand plan for information risk and security that aligns with, supports and enables the achievement of the organization’s overall business objectives. That business drive is crucial and yet is too often overlooked by those implementing Information Security Management Systems, partly because '27001 doesn't really explain it. The phrase "internal and external context" is not exactly crystal clear ... but that's what the JTC1 directive demands.

Principle 1. Our Information Security Management System conforms to generally accepted good security practices as described in the ISO/IEC 27000-series information security standards.

Principle 2. Information is a valuable business asset that must be protected against inappropriate activities or harm, yet exploited appropriately for the benefit of the organization. This includes our own information and that made available to us or placed in our care by third parties.

and

Axiom 1: This policy establishes a comprehensive approach to managing information security risks. Its purpose is to communicate management’s position on the protection of information assets and to promote the consistent application of appropriate information security controls throughout the organization. [A.5.1]

Axiom 2: An Information Security Management System is necessary to direct, monitor and control the implementation, operation and management of information security as a whole within the organization, in accordance with the policies and other requirements. [A.6.1]

As you might have guessed from those [A. …] references, the axioms are based on the controls in Annex A of ‘27001. We have simply rephrased the control objectives in ‘27002 to suit the style of a corporate policy, such that the policy is strongly linked to and aligned with ISO27k. Those reading and implementing the policy are encouraged to refer to the ISO27k standards for further details and explanation if needed.

There is a downside to this approach, however: there are 35 axioms to lay out, making the whole generic policy 5½ pages long. I'd be happier with half that length. Customers may not need all 35 axioms and might review and maybe reword, revise and combine them, hopefully without adding yet more. That's something I plan to have a go at when the generic policy is next revised.

The principles take things up closer to strategy. This could be seen as a governance layer, hence our first principle concerns structuring the ISMS around ISO27k. It could equally have referred to NIST's Cyber Security Framework, COBIT, BMIS or whatever: the point is to make use of one or more generally accepted standards, adapting them to suit the organization's needs rather than reinventing the wheel.

I find the concept of information risk and security principles fascinating. There are in fact several different sets of principles Out There, often incomplete and imprecisely stated, sometimes only vaguely implied. Different authors take different perspectives to emphasize different aspects, hence it was an interesting exercise to find and elaborate on a succinct, coherent, comprehensive set of generally-applicable principles. I'm pleased to have settled on just 7 principles, and these too will be reviewed at some point, partly because the field is moving on.

"A set of policies for information security shall be defined, approved by management, published and communicated to employees and relevant external parties."

ISO/IEC 27002 section 5 expands on that succinct guidance with more than a page of advice. ISO/IEC 27003 is not terribly helpful in respect of the topic-specific policies but does a reasonable job of explaining how the high level/corporate security policy aligns with business objectives.

Aug 30, 2019

We've just completed and delivered September's NoticeBored security awareness and training module about hackers - a topic we haven’t covered specifically for a few years, although most of the NoticeBored modules at least touch on hacking, some more than others. The hacking risks have changed perceptibly in that time. The rise of state-sponsored (spooky!) hacking is of great concern to those of us who care about critical national infrastructures, human society and world peace. The United Nations is due to meet in a couple of weeks to discuss the possibility of reaching agreement on the rules of cyberwarfare, mirroring those for conventional, nuclear and biological warfare. Let’s hope they manage to align the ~200 countries represented at the UN – a tough task for the diplomats, politicians and cyberwar experts. That aspect gives a distinctly sinister tinge to the awareness module, and yet I hope we’ve succeeded in keeping the materials reasonably light, interesting and engaging as ever, a delicate balance.

Bug bounties merit a mention this time around as an innovative way to get hackers on-side that seems to be paying off for some organizations. Of course, not all hackers will be enticed by the filthy lucre but those who are help the organizations address vulnerabilities that otherwise might have been exploited. Reducing information risks and earning legitimate income has to be A Good Thing, right?

All three streams emphasize the need for detective and corrective controls, supplementing the preventive controls because they are fallible.

The sheer variety of risks and controls is overwhelming, so we'll pick out a few topical aspects to discuss, such as using bug bounties as a technique to both encourage (ethical) disclosure and improve information security, a nice combination.

Hardware hacking will make an appearance too. Over the weekend I've been reading about a hobbyist reconstructing a DEC PDP/11 using modern programmable chips to replicate the original, and last month I was fascinated by a project to rebuild the lunar lander guidance system - not a replica but an original test system. Amazing stuff!

Aug 25, 2019

Yesterday I promised to share some ideas for looping intros on your PowerPoint presentations, primarily but not exclusively for security awareness seminars and the like. Rather than wasting the time between opening the door and starting the session, it's a mini awareness opportunity you can exploit. Here are 20 ways to use your loopy intros:

Show short security awareness videos, maybe ‘talking heads’ clips of people talking about current threats, recent incidents, new policies etc.;

Quotes from attendees at past awareness events, possibly again as video or audio clips or written quotations in their own words;

A slide-show of still photos from previous awareness and training events, preferably showing people having a good time and enjoying a laugh;

Slide 1 is the original title slide - conventional, plain and frankly very boring. Normally, that slide would remain static on the screen ... but now as if by magic stuff happens ...

Slides 2 through 6 sequentially modify the title, using red animated scribbles and a cursive/handwritten font, as if someone behind the scenes was figuring out something a bit more interesting to say, modifying the title slide in real time. You can see the slide timing under the thumbnails above, although the animations add a few seconds on each slide so the whole sequence takes nearly a minute;

Although you can't see it, slide 7 slowly fades out the red stuff, then the sequence returns seamlessly to slide 1, repeating slides 1 to 7 indefinitely as people arrive in the room and settle down;

When everybody is in place and the seminar is ready to start, the presenter simply clicks to drop out of the loopy sequence at any point, launching the main part of the presentation, a normal PowerPoint slide deck.

It works and the concept is proven, but it's not exactly an enthralling sequence. I'm thinking up ways to jazz it up, and will share some creative ideas here on the blog tomorrow.

Aug 23, 2019

Don't let metrics undermine your business by Harris and Tayler is a thought-provoking piece in the wonderful Harvard Business Review. It concerns a tough old problem: that of metrics themselves becoming the focus of attention within the organization, rather than the subjects of measurement and, more importantly still, the business strategies the metrics are intended to support or enable.

"Every day, across almost every organization, strategy is being hijacked by numbers ... It turns out that the tendency to mentally replace strategy with metrics — called surrogation — is quite pervasive. And it can destroy company value."

According to Wikipedia, Charles Goodhart advanced the idea in 1975, although I suspect people have been manipulating metrics and duping each other pretty much since the dawn of measurement. My eyes were opened to the issue by Hauser and Katz in Metrics: you are what you measure! Krag Brotby and I wrote about that in PRAGMATIC Security Metrics. Surrogation is surprisingly common in practice: for example, "Thank you for your business. Please give five-star feedback after this transaction" is vaguely coercive, more so when appended with something along the lines of "My bonus depends on high scores" or "Visit our Facebook page to enter our prize draw".

Government officials and politicians do it all the time - it's almost a job requirement to know how to appear to be doing good things for the nation, regardless of the reality.

VW was famously caught doing it by having their engine management systems detect the conditions indicating that emissions testing was being performed, enabling the emission controls to ace those tests then disabling them to improve other aspects of performance (such as fuel economy) after the emissions tests were done. Sneaky - and a risky strategy as VW discovered to its cost and shame. I would be astonished to discover that VW was the only, or indeed the worst culprit though.

If a process or system is measured by a metric, and if the metric governs bonuses or other benefits for those performing the process, then they have an incentive to optimize the process/system and/or the metric: both routes lead to reward. Creative thinkers can often find ways to drive up apparent performance without necessarily improving actual performance, and if the bonuses or benefits are substantial, the pressure to do so can be strong.

One way to optimize a metric is to manipulate the measurement process, for example selectively discounting, blocking or simply ignoring bad values, creating a bias such that the metric no longer truly represents the process being measured - an integrity failure. Comparative metrics such as benchmarks can be optimized by decreasing the actual or apparent (measured) performance of peers or other comparators: that may not align with business objectives and would generally be considered unethical. Subjective metrics can be manipulated by coercion of the people doing the measurement, at any stage of the process (data collection, analysis, reporting/presentation and consumption ... perhaps even way back at the metrics specification and design phase, or during 'refinements' of an existing metric).

The same thing applies, by the way, if those 'bonuses or benefits for good performance' are in fact penalties or disincentives for poor performance. Manipulating the measurement activities to conceal actual performance issues may be easier than addressing underlying problems in whatever is being measured, especially if the measurement aspects are poorly designed and lack adequate controls ...
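To make that first form of gaming concrete, here's a minimal Python sketch of how silently discarding 'bad' readings biases a metric. The data and function names are entirely hypothetical, invented for illustration:

```python
# Hypothetical example: a team's "mean time to resolve" metric (hours).
# The honest measurement includes every incident; the gamed variant
# quietly drops anything over a threshold - an integrity failure that
# makes the metric stop representing the process being measured.

resolution_hours = [2, 3, 5, 4, 40, 3, 55, 2]  # two genuinely slow incidents

def honest_mttr(samples):
    """Mean of all observations - the metric as designed."""
    return sum(samples) / len(samples)

def gamed_mttr(samples, cutoff=24):
    """Mean after silently excluding 'outliers' above the cutoff."""
    kept = [s for s in samples if s <= cutoff]
    return sum(kept) / len(kept)

print(f"Honest MTTR: {honest_mttr(resolution_hours):.1f} h")  # 14.2 h
print(f"Gamed MTTR:  {gamed_mttr(resolution_hours):.1f} h")   # 3.2 h
```

The two slow incidents are exactly the ones management most needs to hear about, yet the gamed figure makes the team look more than four times faster.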

The risk of someone gaming, subverting or hacking the measurement processes and systems is, of course, an information risk, one that ought to be identified, evaluated and treated just like any other. The classical risk management approach involves:

Considering the probability of occurrence (threats exploiting vulnerabilities) and the impacts or consequences of incidents with an obvious emphasis on critical or key metrics, plus any that lead directly to cash or convertible assets, such as the stock options commonly used as performance incentives for executives;

Deciding what to do about the risks;

Doing it, generally by implementing suitable measurement process controls such as monitoring and managing the processes/systems to pick up on and address any issues in practice, including obvious or more subtle signs of manipulation/gaming/coercion - a step in the risk management process that (in my experience) is woefully neglected when it comes to metrics. Metrics aren't fire-and-forget weapons.
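The steps above can be sketched as a toy risk-register entry in Python. The 1-5 scales and the treatment threshold are illustrative assumptions of mine, not anything prescribed by the classical approach:

```python
# Toy qualitative risk assessment for 'metric gaming', following the
# classic likelihood-x-impact approach. The 1-5 scales and threshold
# are illustrative assumptions only.

def assess(likelihood, impact, treat_threshold=9):
    """Score a risk on 1-5 scales and decide whether it needs treatment."""
    score = likelihood * impact
    if score >= treat_threshold:
        decision = "treat (add measurement controls)"
    else:
        decision = "accept and monitor"
    return score, decision

# A key metric tied to executive stock options: a likely target, high impact.
score, decision = assess(likelihood=4, impact=4)
print(score, decision)  # 16 treat (add measurement controls)
```

The point of the sketch is simply that metrics feeding directly into rewards score highly on both axes, so they merit explicit measurement-process controls rather than the neglect they usually get.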

That's enough for today. I'll return to explore the management and other controls around metrics at some future point.

Aug 22, 2019

This morning, "PS" asked the ISO27k Forum for advice about reviewing access rights.

"I just got a minor NC for not showing compliance with review of user access rights control. At present, a report containing leavers [is] reviewed by servicedesk to ensure removal of access. This process supplements the leaver process owned by department managers. But [an] auditor has insisted that we should retrieve all access reports and review them. So question is how do [you] demonstrate compliance with this control in your organisation? Appreciate your guidance"
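For what it's worth, the kind of leaver review PS describes boils down to a cross-check between two lists. Here's a minimal Python sketch with hypothetical account names; in practice the inputs would come from HR records and the directory/IAM system:

```python
# Minimal sketch of a leaver access review: flag accounts still active
# for people who have left. The data here is hypothetical - real reviews
# would pull leavers from HR and active accounts from the directory.

leavers = {"asmith", "bjones", "cnguyen"}
active_accounts = {"asmith", "dpatel", "cnguyen", "efox"}

# Set intersection: leavers whose accounts were never disabled.
residual_access = sorted(leavers & active_accounts)

for account in residual_access:
    print(f"ALERT: leaver '{account}' still has an active account")
```

Note that this only checks leavers, which is precisely the scope PS described - it says nothing about whether 'all access reports' are reviewed, which is where the dispute with the auditor arises.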

Some respondents duly mentioned typical controls in this area, while some of us spotted an issue with the issue as described. Why did the auditor raise a minor non-compliance? On what basis did the auditor insist that they should ‘retrieve and review all access reports’ - if in fact he/she did?

With a little creative/lateral thinking, it turns out there are several intriguing possibilities in the situation described by PS aside from the obvious:

The organization had instituted and mandated a formal policy stating that ‘All access reports will be reviewed’ – a bad move unless they truly expected precisely that to happen. They are committed to doing whatever their policy says. If they don’t do so, it is a valid noncompliance finding;

The organization had [perhaps unwisely or inadvertently] instituted a formal policy stating something vaguely similar to ‘all access reports will be reviewed’, which the auditor interpreted to mean just that, whether correctly or incorrectly. This is always a possibility if policies are poorly/vaguely worded, or if the supporting procedures, guidelines, help text, advisories, course notes, management instructions etc. are similarly worded or simply missing (leaving it to workers to interpret things as they see fit … which may not be the same as the auditors, or management, or lawyers and judges if incidents escalate);

The organization had a procedure or guideline stating [something similar to] ‘all access reports will be reviewed’, in support of a formal policy on information access or whatever, and again the auditor was right to raise an issue;

The organization had a policy or whatever outside the information security arena (e.g. tucked away in an IT or HR policy, procedure, work instruction etc.) stating that ‘All access reports will be reviewed’ ... which in turn begs a bunch of questions about the scope of the Information Security Management System and the audit, plus the organization's policy management practices;

An old, deprecated, withdrawn, draft or proposed policy had the words ‘all access reports will be reviewed’, and somehow the auditor got hold of it and (due to flaws in the organization’s policy controls) believed it might be, or could not exclude the possibility that it was, current, valid and applicable in this situation - another valid finding;

A stakeholder such as a manager verbally informed the auditor that it was his/her belief or wish that ‘All access reports must be reviewed’, inventing policy on the spot. This kind of thing is more likely to happen if the actual policy is unclear or unwritten, or if individual workers don't know about and understand it. It could also have been a simple error by the manager, or a misunderstanding by the auditor ... which possibility emphasizes the value of audit evidence and the process of systematically reviewing and confirming anything that ends up in the audit report (plus potentially reportable issues that are not, in fact, reported for various reasons);

The organization had formally stated that some or all of the controls summarized in section A.9 of ISO/IEC 27001:2013 were applicable without clarifying the details, which the auditor further [mis?]interpreted to mean that they were committed to ‘retrieve and review all access reports’;

For some reason, the auditor asserted that the organization ought to be ‘retrieving and reviewing all access reports’ without any formal basis in fact: he/she [perhaps unintentionally] imagined or misinterpreted a compliance obligation and hence inaccurately identified non-compliance when none exists;

The auditor may have sniffed out a genuine information risk, using the minor non-compliance as a mechanism to raise it with management in the hope of getting it addressed, whether by achieving compliance or by amending the control;

The auditor may have made the whole thing up, perhaps confusing matters that he/she didn't understand, or under pressure to generate findings in order to justify his/her existence and charges;

The auditor simply had a bad day and made a mistake (yes, even auditors are human beings!);

PS had a bad day e.g. the minor non-compliance was not actually reported as stated in his question to the forum, but was [mis]interpreted as such. Perhaps someone spuriously injected the word “all” into the finding (Chinese whispers?);

PS wasn't actually posing a genuine question, but invented the scenario to fish for more information on the way forum members tackle this issue, or was hoping for answers to a homework assignment;

The auditor was trying it on: was this a competent, experienced, qualified, independent, accredited compliance auditor, in fact? Was it someone pretending/claiming to be such - someone in a suit with an assertive manner maybe? Was it just someone with “auditor” scribbled on their business card? Was it a social engineer or fraudster at play?!;

It wasn’t a minor non-compliance, after all. Maybe I have misinterpreted “NC” in the original forum question;

etc. ...

... Compiling and discussing lists like this makes an excellent exercise in awareness sessions or courses – including auditor training, by the way. In this particular case, the sheer variety of possibilities is a warning for information security and other professionals re policies, compliance, auditing etc. In practice, “policy” is a more nebulous, tricky, important and far-reaching concept than implied by the typical dictionary definition of the word. Just consider the myriad implications of "government policy" or speak to a tame lawyer for a glimpse into the complexities.

Aug 21, 2019

Sadly, the time has come to draw a lengthy chapter in our lives to a close.

The monthly NoticeBored security awareness and training subscription service will cease to be early next year. As of April 2020, NoticeBored will be no more. It will be pushing up the daisies. We'll be nailing it to the perch and sending it off to the choir invisible.

The final straw and inspiration for the title of this piece was yet another exasperating phisher:

... and the realisation that suckers will inevitably fall for scams as ridiculous as that, no matter what we do. There will always be victims in this world. Some people are simply beyond help ... and so too, it seems, are organizations that evidently don't understand how much they need security awareness and training. "It's OK, we have technology" they say, or "Our IT people run a seminar once a year!" and sure enough the results are plain for all to see. Don't say we didn't warn them.

We tried, believe me we tried to establish a viable market for top-quality professionally written creative awareness and training content. Along the way we've had the pleasure of helping our fabulous customers deliver world-class programs with minimal cost and effort. But in the end we were exhausted by the overwhelming apathy of the majority.

As we begin the research for our 200th security awareness module, it's time to move on, refocusing our resources and energies on more productive areas - consulting and auditing on information risk and security, ISO27k, security metrics and suchlike.

We're determined that the gigabytes of creative security awareness and training content we've created since 2003 will not end up on some virtual landfill, so we'll continue to offer and occasionally update the security policies and other materials through SecAware.com. The regular monthly updates will have to go, though, as there simply aren't enough hours in the day. "She cannae take it, Cap'n!"

Meanwhile these bloggings will continue. We're still just as passionate as ever about this stuff (including the value of security awareness, despite everything). We've got articles, books and courses to write and deliver, standards to contribute to, global user communities to support, proposals to prepare. Must go, things to do.

PS Oh look, here's another, an inept spoof on Apple Pay:


NBlogger is ...

Dr Gary Hinson PhD MBA CISSP has an abiding interest in human factors - the ‘people side’ as opposed to the purely technical aspects of information security. Gary's career stretches back to the mid-1980s as both practitioner and manager in the fields of IT system and network administration, information security and IT auditing. He has worked and consulted in the pharmaceuticals/life sciences, utilities, IT, engineering, defense, financial services and government sectors, for organizations of all sizes. Since 2003, he has been creating security awareness materials for clients (www.NoticeBored.com) and supporting users of the ISO27k standards (www.ISO27001security.com). In conjunction with Krag Brotby, he wrote "PRAGMATIC security metrics" (www.SecurityMetametrics.com). He is a keen radio amateur, often calling but seldom heard by distant stations on the HF bands.