Welcome to NBlog, the NoticeBored blog

Aug 30, 2018

We're close to completing the NoticeBored 'outsider threats' security awareness module for September, checking and finalizing the materials. Things are getting tense as the IsecT office clock ticks away the remaining hours.

Normally, we develop awareness briefings for each of the three audience groups from the corresponding three awareness seminar slide decks, using the graphics and notes as donor/starter content and often following a similar structure.

Having finished the staff seminar this morning, I anticipated using that as the basis for a staff briefing as usual ... but, on reflection, I realized that we have more than enough content to prepare a lengthier A-to-Z guide to outsider threats instead.

The sheer number and variety of outsider threats and incidents is itself a strong awareness message. Listing and (briefly) describing them in an alphabetical sequence makes sense.

This will be an interesting read for awareness and training purposes and, I believe, a useful reference document - essentially a 'threat catalog' to help identify and assess the information risks relating to outsiders and other external threats.

If your current list of outsider threats and risks has only a handful of entries, you should expect to be caught out by any of the dozens you have failed to consider.

Preparing it sounds great in theory but potentially it's too much work for the little time remaining ... except that I had the foresight to prepare a Word template for the A-to-Z guides from the last one we prepared. Now 'all I have to do' is paste in lists of threats and incidents already written in other awareness materials, click the magic button to sort them alphabetically, apply the Word styles to make the whole product look presentable, then check it through for consistency. OK, so there's a bit more to it than that, but it's coming along rapidly and will be done in time.

Having written about 9 pages so far, I'm taking a break after some 9 hours' intense concentration, resting and hoping not to wake up at 2 or 3 am with a head full of it! It needs about 2 or 3 more hours' work in the morning to complete the remaining 2 or 3 pages (spot the formula!). At least, that's the plan.

When it's all done, maybe we could offer it for sale as a combined awareness/training piece and outsider threat catalog through the SecAware website: what do you think? Is it something that would interest you dear reader? Would you be prepared to invest a few dollars for immediate download? NoticeBored subscribers will receive it as part of their subscription, naturally, but I think it has some potential and value as a standalone product for wider readership.

Failing that, we might just release it as a freebie for marketing purposes, or seek to get it published in one of the trade journals. Or sit on it, updating it from time to time as inspiration strikes. We'll see how it goes.

For now, though, I'm all in and off to bed to recharge my flagging grey matter for the final slog.

Aug 29, 2018

The wide variety of threatening people, organizations and situations Out There, and the even wider variety of outsider incidents, is quite overwhelming ... which means we need to simplify things for awareness purposes. If we try to cover too much at once, we'll confuse, overwhelm and maybe lose our audiences, if not ourselves.

On the other hand, that variety is itself an important lesson from September's awareness module. It's not sufficient, for instance, for the cybersecurity team to lock down the corporate firewall in order to block hackers and malware while neglecting other outsider threats such as intellectual property theft and disinformation. Organizations are in a difficult position, trying to avoid, prevent or limit all manner of outsider incidents, some of which are particularly difficult to even identify let alone control. It's soot-juggling really.

With our start-of-month delivery deadline imminent, we're currently finalizing September's NoticeBored slide decks and briefings, focusing on the key messages and making sure they have enough impact to resonate with the awareness audiences - our own version of soot-juggling. We have the advantage of being able to delve into things in more depth later, thanks to the rolling program of awareness topics. Next month, for example, we'll focus on phishing, specifically, so this month we'll take the opportunity to mention phishing as a form of outsider social engineering cyber-attack, briefly, without having to explain all of that just now.

Things always become a bit frantic in the IsecT office as the deadline looms. On the bright side, we've done a stack of prep-work during the month plus research prior to that so we have no shortage of content. And we've been here many times before - every single month for the past 15 years in fact! So, that's it for now. Must dash. Speling to dubble-chek. Shiny things to polish.

Cheaply, considering the entire lifecycle of the controls including their development, use and management;

Practically, pragmatically, feasibly, in reality;

On all appropriate platforms/systems/devices (current, legacy and future) and networks with differing levels of trustworthiness and processing capabilities;

Under all circumstances, including crises or emergencies;

For all relevant people (insiders, outsiders and inbetweenies), regardless of their mental and physical abilities/capacities, other priorities, concerns, state of health etc., while also failing to authenticate former employees, twins (evil or benign), fraudsters, haXXors, kids, competitors, crims, spooks, spies, pentesters and auditors on assignment;

Using currently viable technologies, methods, approaches and processes; and

Without relying on unproven, unverifiable or otherwise dubious technologies.

In short, authenticating people is tough, one of those situations where we're squeezing a half-inflated balloon, hoping it won't bulge alarmingly or just pop.

In practice, when designing and configuring authentication subsystems or functions, the key question is what to compromise on, how much slack can realistically and safely be cut (i.e. reducing various information risks to an acceptable level), and just how far things need to be pushed (an assurance issue).

In the ongoing hunt for solutions, quite a variety of authentication methods, tools and techniques has been invented and deployed so far:

Distinctive chemicals (smell) and other bodily or behavioural characteristics such as color, mannerisms, gait (very widely used by animals other than humans);

DNA (quite reliable but hardly instantaneous!);

User and/or device location;

Network address, hardware address;

Mode/means/route/mechanism of access;

Time of access;

Multifactor authentication using more than one 'factor';

Probably other stuff I've forgotten about;

Some combination of the above.

Depending on how you count them, there are easily more than 20 authentication methods in use today, and yet it is generally agreed that they barely suffice.
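The appeal of combining factors can be made concrete with a little arithmetic: if the factors fail independently, the chance of an impostor defeating all of them at once is the product of the individual false-accept rates. A minimal sketch, with the rates invented purely for illustration:

```python
# Rough illustration of multifactor authentication arithmetic.
# The false-accept rates below are invented for illustration only.
factors = {
    "password guessed":        0.01,   # 1 in 100
    "fingerprint spoofed":     0.001,  # 1 in 1,000
    "phone token compromised": 0.005,  # 1 in 200
}

combined = 1.0
for name, far in factors.items():
    combined *= far  # independence assumption - often optimistic in practice

print(f"Combined false-accept rate: {combined:.0e}")  # 5e-08, i.e. 1 in 20 million
```

The independence assumption is the weak point, of course: a stolen smartphone may carry both the token and a cached password, at which point the factors fail together rather than separately.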

Rather than inventing yet another method, I wonder if we need a different paradigm, a better, smarter approach to authentication? Specifically, I'm thinking about the possibility of continuous, ongoing or dynamic authentication rather than episodic authentication.

Instead of forcing us to "log in" at the start of a session, how about simply letting us start doing stuff, rating us as we go and deciding what stuff to let us do according to how authentic we appear to be, and what it is that we want to do? So, returning to my earlier point about having to make compromises, the assurance needed before allowing someone to browse the Web is rather different to that needed to let them bank online - and within online banking, viewing account balances is not equivalent to making a funds transfer between accounts, or a payment to another account, in Switzerland, of the entire balance and credit/overdraft value, at 3:30am, from a smartphone somewhere in Lagos ...

Biometric authentication methods have to allow for natural variation between measurements because living organisms vary, and measurement methods are to some extent imprecise. Taking additional measurements is an obvious way to improve accuracy and precision ... so instead of taking a single fingerprint reading, why not keep on re-reading and checking until there is sufficient data and sufficient statistical confidence?

Instead of forcing me to use a password of N characters, why not check how I type the first few characters to see if the little timing and pressure differences indicate it is probably me, perhaps coupling that with facial recognition and additional checks depending on what it is that I'm doing during the session? If I'm doing something out of character, especially something risky, prevent or slow me down. Instead of timing out and locking me out of the system if I wander away to make a cup of tea, reduce my trustworthiness rating and hence the things I can do when I return.

Let me boost my trustworthiness if I really need additional rights 'right now' by inviting me to use some of those slower and more costly authentication mechanisms, or by correlating authentication/trustworthiness indicators and scores from several systems (e.g. make it harder for me to access the file server if I have not clocked-in to the building with my staff pass card, bought a coffee without sugar from the vending machine, and polled the local cell tower from my cellphone).
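The idea of a trust rating that accrues from ongoing signals, decays with inactivity and gates what you may do can be sketched in a few lines. Everything here - the signals, the decay rates, the per-action thresholds - is invented for illustration, not a real product:

```python
# Sketch of 'continuous authentication': a session trust score nudged up
# by ongoing authenticity signals, decayed by idleness, and compared
# against per-action thresholds. All numbers are illustrative assumptions.

ACTION_THRESHOLDS = {
    "browse_web":     0.2,
    "view_balance":   0.5,
    "transfer_funds": 0.9,
}

class Session:
    def __init__(self):
        self.trust = 0.0

    def observe(self, signal_strength):
        """Blend a fresh authenticity signal (0..1) into the running score."""
        self.trust = 0.8 * self.trust + 0.2 * signal_strength

    def idle_decay(self, minutes):
        """Wandering off for a cuppa lowers trust rather than locking you out."""
        self.trust *= 0.9 ** minutes

    def may(self, action):
        return self.trust >= ACTION_THRESHOLDS[action]

s = Session()
for _ in range(20):        # keystroke dynamics, face match etc. all look right
    s.observe(1.0)
print(s.may("transfer_funds"))   # True - enough evidence has accrued
s.idle_decay(minutes=10)
print(s.may("transfer_funds"))   # False after the tea break ...
print(s.may("browse_web"))       # ... but low-risk actions remain available
```

The design point is that authentication becomes a dial rather than a switch: risky actions demand a higher current score, and the user can raise the score on demand by supplying stronger evidence.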

Maybe even turn the problem on its head. Rather than making me prove my claimed identity, disprove it by checking what I'm doing for anomalies and concerns. I'm sure there's huge potential in behavioral analysis - not just the basic biometrics such as typing speed but the specific activities I perform, the sequence, the context and so on - building up a more holistic picture of the person in the chair.

Oh and if the systems are not entirely sure it is me in the chair, why not let me think I am doing stuff while in reality caching my inputs and faking what I see while waiting for me to build up sufficient additional assurance ... or quietly summoning Security.

Aug 22, 2018

As always, it takes me more time and effort to write short, formal pieces such as a new policy than longer run-of-the-mill awareness briefings. The actual writing part is straightforward: knowing what to incorporate and what to leave out requires more thought.

A policy on pentesting presents particular challenges: I think I know what it has to say but what else should it say? How should it be said and what can be safely left unsaid?

The few published pentest policies Google has found for me so far all differ in style, naturally, but also vary in purpose and content. Most are quite narrowly focused on specific aspects or types such as the vulnerability scanning performed under PCI-DSS. They have prompted me to consider aspects I might otherwise have neglected but I can improve on them by incorporating good ideas from all sources including my own experience in this area (such as it is!) and security standards into a more coherent and comprehensive whole.

The 'background' section to the policy is important in setting the scene, explaining the rationale for the policy statements that follow. So far, the background talks about the pros and cons of pentesting - its value to the business and its limitations as an assurance technique - in about 200 carefully-chosen words. That is followed by one succinct policy axiom, supported in turn by a few more detailed policy statements expanding on and explaining the key points. Other sections such as roles and responsibilities, and cross-references to related policies complete the model.

It is tempting to concentrate purely on the technology aspects since pentesting obviously revolves around IT but the broader business and risk management aspects are equally relevant for any policy. The technical/cybersecurity controls required within pentesting only make sense in the context of administrative controls around the pentesting process, extending all the way out to the clarification of pentest objectives and parameters, selection of suitable pentesters, contracts and agreements, oversight/monitoring, scheduling, ethics, reporting and, of course, the downstream activities of systematically addressing identified vulnerabilities. Running Nessus, or whatever, is but a small cog in a larger, more complicated mechanism.

Authorization is a key control for pentesting so I'm carefully figuring out what that actually means, how it works and how to express it in terms that should work for almost any organization - which is yet another complication for me. I'm not writing a pentest policy for my own company or a specific client, but a generic or universal model - at least that's the objective.

Actually, the primary objective is educational, raising awareness of information risk and security: the model policies we offer are merely mechanisms to set people thinking and talking. It's an added bonus if they end up with effective security policies based on ours but even if they don't, I hope they have been stimulated to review, consider and discuss their options.

Aug 21, 2018

There are parallels between quality assurance and information security. For example, we all partly depend on various suppliers for their [quality|security], hence we need assurance as to the suppliers’ [quality|security] arrangements.

In ISO-land, the preferred approach to this is systematic i.e. we:

Identify and consider the [quality|information|business] requirements and risks associated with the relationships, supplies, services etc., separately and perhaps in conjunction with the suppliers;

Evaluate the risks (obtaining further information if needed), deciding what to do about them, prioritizing and resourcing things accordingly;

Treat them appropriately according to the risks themselves, the level of assurance required and the business situation;

Manage, monitor and maintain the arrangements, occasionally reviewing the risks and controls etc.

In more detail, there are several forms of treatment. We can:

Review, inspect or audit the suppliers in sufficient depth, focusing on the parts of their businesses that materially affect the [quality|security] of the services provided (note: there are many subsidiary options and factors to consider here, such as the frequency and nature of the reviews and the competence and diligence of the reviewers);

Simply ignore the issue, blindly trusting the suppliers to do the right things and do things right (crudely accepting the risks is an abdication of responsibility without additional controls but is disappointingly commonplace in practice, at least outside of ISO-land);

Rely on the suppliers’ assertions re their [quality|security] arrangements, ideally with the benefit of accredited certification;

Obtain and evaluate additional internal information from the suppliers re their [quality|security] arrangements – their [quality|security] metrics for instance, and various reports, policies, procedures etc.;

Collaborate closely with the suppliers, establishing mutual trust and respect over a considerable period;

Throw the whole issue at the lawyers to thrash out suitable terms and conditions, requirement specifications, liability clauses, guarantees etc. (again, that approach in isolation does not inspire me personally with confidence, unless supported with suitable additional assurance and compliance controls);

Manage the [quality|security] aspects dynamically according to the situations, incidents and near-misses that occur and any opportunities that arise (the contingency approach - whatever it is, we'll cope - also expressed as "She'll be right bro" in this part of the world);

‘Instrument’ the business processes and activities for [quality|security], ensuring that the [quality|security] situation is measured and communicated promptly, projected accurately etc. (this implies dealing with the measurement costs plus the sensitivity and commercial value of the information, naturally, and has further implications around its integrity and availability);

Focus on business continuity, resilience and recovery e.g. maintaining a network of alternative suppliers, using generic/commodity services as much as possible;

Keep all business-critical activities entirely in-house, consciously avoiding the risks of reliance on suppliers/outsiders (easier said than done!).
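One pragmatic way to choose among such treatments is to scale the assurance effort to the criticality of the supply and the strength of the evidence the supplier already offers. A toy triage sketch - the scoring scale, thresholds and category wording are all invented for illustration:

```python
# Toy supplier-assurance triage: pick a treatment proportionate to how
# critical the supply is and how much assurance evidence already exists.
# The 1..5 scales, thresholds and categories are illustrative assumptions.

def choose_treatment(criticality, evidence):
    """criticality and evidence are scores on a 1..5 scale."""
    if criticality >= 4 and evidence <= 2:
        return "audit the supplier in depth (or bring the work in-house)"
    if criticality >= 4:
        return "rely on certification plus metrics, reports and close collaboration"
    if criticality >= 2:
        return "rely on certified assurances and contractual terms"
    return "accept the risk consciously and monitor for incidents"

print(choose_treatment(criticality=5, evidence=1))
print(choose_treatment(criticality=2, evidence=3))
```

Real programmes blend several treatments per supplier, of course; the point of the sketch is simply that the decision can be made explicit and repeatable rather than ad hoc.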

Aug 17, 2018

A vulnerability is an inherent weakness in something (a device, system, process, situation, person etc.) that might be exploited by a threat, perhaps causing an impact of some sort.

Vulnerability exists regardless of the presence or absence of controls: the lack of control is a separate matter, a fundamentally different concept although often confused by non-experts and even by some so-called experts.

Take, for instance, the risk of being burgled at home.

The primary threat is the burglars - the criminals who might just pick a given home to burgle. There are other threats too (e.g. untrustworthy visitors and opportunists) but let's leave it at that for now.

The primary impact on the homeowner is the loss of their assets - the valuables that are stolen. Again, there are other impacts (e.g. the traumatic feelings of their personal space being violated, and the implied or actual safety threat). The impacts of burglary differ according to one's perspective. To the home owner or occupier, the financial replacement cost, disruption and emotional toll are all potentially significant impacts. To society, burglary rates can affect the popularity of particular areas, leading to societal and cultural changes. To insurance companies, the impacts of burglary include insurance claims and payouts ... plus increased custom (a positive business impact or opportunity for them).

So what are the vulnerabilities?

Some would claim that the lack of a burglar alarm is a vulnerability ... but, no, strictly speaking that would simply be a missing control, not an inherent weakness.

Inherent weaknesses include the concept of 'home' i.e. a place to live plus property that someone considers exclusively 'theirs'. If it weren't for the very notion of assets and property ownership, we would not feel so hard-done-by if burglars removed 'our' assets, since they would, in effect, own and have the same rights over them as we do. In law, this leads to the crime of conversion, larceny or theft: a criminal can only 'steal' things from me if I 'own' them. They would be depriving me of the rights over the property that lawful property owners can reasonably expect to enjoy. It's a mixture of possession and control, in the sense that, say, a ransomware infection takes possession of the data and controls access to it, without literally removing it.

There are other vulnerabilities to burglary such as:

The visibility and attractiveness of the place to burglars which, arguably, is greater relative to neighbouring properties if there is no obvious alarm, if the place appears unoccupied, if doors and windows are left open etc.;

The need to admit various people for legitimate purposes e.g. tradesmen, the emergency services and debt collectors, friends and family;

Welcome mats, house parties and various other invitations to visit or enter e.g. tenants, guests, 'open house' marketing promotions and parties;

At a societal level, factors such as widespread and harsh socio-economic hardship increase the threat of burglary in afflicted areas, hence the conditions that caused or led to that situation might be termed vulnerabilities - 'contributory factors', perhaps.

Conceptually, we've come a long way from 'lack of a burglar alarm'!

If you're still not convinced of the difference, can I persuade you to buy my magic crystal? The crystal emits a particular form of energy that burglars find intolerable. They are literally too uncomfortable to approach or enter the property. Without it, you are highly vulnerable. A snip at just $20 per gram (minimum 500 grams, delivery, installation and sales tax extra).

One of the more unusual information risks on our radar for September's outsider threats awareness module is xenophobia - the fear of strangers. It has a deep biological basis: most animals naturally congregate and live with others of their kind, forming social groups (families, flocks, tribes etc.) while excluding those who are 'different' - most obviously predators. The differences aren't always obvious to us humans. Sheep, for instance, recognize each other more by sound and smell than by color.

Compared to other risks in this domain, xenophobia is fairly widespread, putting it roughly half way along the probability scale. But what of the business impacts of xenophobia afflicting employees? Hmmm, not so easy. As is often the way, the consequences depend on the circumstances or context in which incidents may occur. In this specific case, there may even be benefits (such as spotting possible intruders - corporate predators!) as well as adverse impacts (such as racism). Personally, on balance and bearing in mind the other outsider threats we're also concerned about, I'd put the impacts towards the bottom of the scale, putting xenophobia somewhere left of center on the generic Probability Impact Graph ...

... but it doesn't end there. How does the xenophobia information risk compare to others? I've shown just one other risk here of the 8 or so we have identified already as an indication of what we mean by 'information risk', and to illustrate the range. In our estimation, the risk of a "Targeted hack or malware attack" is slightly less likely than xenophobia but has a significantly higher impact on the organization if it does occur.

OK, are you with me so far? What are you thinking at this point? My guess is that you're either cruising along, going with the flow, or puzzling over the meanings, implications and positions of those two information risks. Maybe that prior almost incidental mention of racism has lit your blue touchpaper already, and maybe you don't consider xenophobia even remotely relevant to the topic at hand. Perhaps you would put xenophobia elsewhere on the PIG, or split it into various incidents with differing implications - and likewise with the other risk. Possibly you are confused over the meaning of xenophobia, or consider it something that insiders might have and therefore out of scope of the outsider threats topic ...

Fantastic! In terms of the key objectives of security awareness and training, the PIG is working nicely: it has set you thinking about the topic area, considering those two risks, comparing and contrasting them.

Now imagine there are another 6+ information risks plonked on the same PIG, described in fairly straightforward terms and analyzed subjectively in much the same manner, with similar issues and concerns arising ... and you'll appreciate the power of this technique, especially in a group setting such as a risk workshop or online discussion forum. It is both creative/stimulating and analytical/pragmatic, leading naturally into the discussions around what ought to be done about the information risks, particularly any in the red zone (clear priorities). It harnesses the group's expertise and experience, challenges prejudices and biases, and helps people contemplate quite complex matters productively.
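The same positioning exercise can be done programmatically as well as on a whiteboard. A minimal sketch of a PIG with red/amber/green zones - the probability and impact scores, and the zone rule, are invented for illustration, not the NoticeBored figures:

```python
# Place a few information risks on a Probability Impact Graph (PIG) and
# bucket them into red/amber/green zones. The 0..1 scores and the zone
# boundaries are illustrative assumptions only.

risks = {
    "Xenophobia":                      (0.50, 0.2),   # (probability, impact)
    "Targeted hack or malware attack": (0.45, 0.8),
}

def zone(probability, impact):
    score = probability * impact
    if score >= 0.3:
        return "red"      # clear priority for treatment
    if score >= 0.1:
        return "amber"    # worth watching and discussing
    return "green"

# List the risks from most to least significant
for name, (p, i) in sorted(risks.items(), key=lambda kv: -(kv[1][0] * kv[1][1])):
    print(f"{name}: {zone(p, i)}")
```

In a workshop the value lies less in the arithmetic than in the argument it provokes: participants challenging each other's scores is exactly the discussion the PIG is meant to trigger.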

Aug 11, 2018

The ISO27k "information security risk management" standard ISO/IEC 27005:2011 has been revised and re-published ... but you'll be hard pushed to see any difference.

This is an 'interim update' reflecting the 2013 revisions of ISO/IEC 27001 and 27002. Yes, 2013, five years ago. The original 27005 update project fell off the rails, leading eventually to this minimal revision, kind of like a program patch to address shortcomings rather than a new version with improved functionality.

A full revision is now in the works, so with luck the next version of 27005 might just be released to coincide with updates to the core ISO27k standards 27001 and 27002.

Ever the optimist, I like to think there's a fighting chance the next version will be a major improvement with changes such as:

Defining ‘information risk’ formally (properly), clearly, helpfully and without the torture and ambiguity of the current gibberish around 'information security risk', explaining it in accessible and understandable terms;

Outlining the organizational/business context for information risk management - how it relates to the management of other kinds of risk, and how risk management supports management and governance of the organization;

Outlining the core risk management process, elaborating on each of the activities in more depth, offering pragmatic advice on suitable methods and approaches (e.g. the four ways to treat risk; how to measure, evaluate and compare risks; how to spot and react to changes, and how to predict changes using trends, statistical techniques and situational awareness);
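The 'four ways to treat risk' mentioned above (accept, mitigate, share/transfer, avoid) lend themselves to a simple worked example. A sketch, with the tolerance figure, thresholds and decision rule invented purely for illustration:

```python
# The classic four risk-treatment options, selected by a toy decision rule.
# The tolerance figure and thresholds are illustrative assumptions.

RISK_TOLERANCE = 0.2   # the level of residual risk management will accept

def treat(probability, impact, can_reduce, can_insure):
    risk = probability * impact
    if risk <= RISK_TOLERANCE:
        return "accept"
    if can_reduce:
        return "mitigate (apply controls to reduce probability and/or impact)"
    if can_insure:
        return "share/transfer (insurance, contractual clauses)"
    return "avoid (stop doing the risky thing)"

print(treat(0.9, 0.8, can_reduce=True,  can_insure=True))
print(treat(0.1, 0.5, can_reduce=False, can_insure=False))
```

A real standard would of course elaborate on each branch; the sketch merely shows how the treatment decision hangs off the evaluated risk level and the options actually available.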

Aug 10, 2018

September's awareness seminar for management on "outsider threats" is coming along nicely.

This week I've been researching the web (well, OK, Googling) and exploring opinions, firstly on what "outsider threats" are, and secondly what to do about them.

It has been a frustrating few days, digging up the odd insightful nugget hidden under piles of tripe gently steaming away in Google-land.

A disappointing majority of commentators seem oblivious to the distinctions between "threat", "vulnerability" and "risk", their confused language more than merely hinting at a fundamental lack of understanding of the concepts that underpin the field. One piece in particular made me laugh out loud, muddling up impacts with exposure. [To be clear, over-exposure to the sun makes you red and sore. Melanoma is the impact. Muddle them up at your peril!]

Several are stubbornly and myopically focused on cyber, a few even defining "outsider threats" as if there is nothing but IT to worry about. If only it were that easy! Knock yerself out tackling hackers and malware, mate, while I get to grips with All The Rest Of It. Yes, I know you have a tough job. Yes I know those haxx0rs and VXers are evl, cunning buggrz. And no, you don't deserve a raise for being a superhero.

Today, I've made the decision to explain the process of managing information risks, again, using outsider threats specifically to illustrate the steps. I say "again" because information risk management is one of the home bases to which we return in almost every NoticeBored module. It's one of the handbags we always dance around, so to speak. It's an old friend that's never out of line.

So, here's slide 13 from the management slide deck, a process overview that we'll build up over the 8 preceding slides using typical examples of "outsider threats" ... and vulnerabilities ... and impacts to explain each step, bringing the cascade to life. It's part awareness, part teaching, part exploring the topic, part demonstrating techniques.

The trick, though, is to find engaging and insightful situations to illustrate each step. Drawing the process diagram took minutes. Preparing the sequence of slides, a few more minutes. Thinking up relevant examples will take me all weekend ... but luckily I can think about this while Doing Other Stuff - lambs to count, trees to plant, ditches to dig, that sort of thing.

Actions taken through the use of an information system or network that result in an actual or potentially adverse effect on an information system, network, and/or the information residing therein. See incident. See also event, security-relevant event, and intrusion.

The interdependent network of information technology infrastructures, and includes the Internet, telecommunications networks, computer systems, and embedded processors and controllers in critical industries.
Source: NSPD-54/HSPD-23

cyberspace attack

Cyberspace actions that create various direct denial effects (i.e. degradation, disruption, or destruction) and manipulation that leads to denial that is hidden or that manifests in the physical domains.
Source: DoD JP 3-12

cyberspace capability

A device, computer program, or technique, including any combination of software, firmware, or hardware, designed to create an effect in or through cyberspace.
Source: DoD JP 3-12

The employment of cyberspace capabilities where the primary purpose is to achieve objectives in or through cyberspace.
Source: DoD JP 3-0

cyberspace superiority

The degree of dominance in cyberspace by one force that permits the secure, reliable conduct of operations by that force, and its related land, air, maritime, and space forces at a given time and place without prohibitive interference by an adversary.
Source: DoD JP 3-12

Deliberate, authorized defensive measures or activities taken outside of the defended network to protect and defend Department of Defense (DoD) cyberspace capabilities or other designated systems.
Source: DoD JP 3-12

malicious cyber activity

Activities, other than those authorized by or in accordance with U.S. law, that seek to compromise or impair the confidentiality, integrity, or availability of computers, information or communications systems, networks, physical or virtual infrastructure controlled by computers or information systems, or information resident thereon.
Source: PPD 20

non-person entity (NPE)

An entity with a digital identity that acts in cyberspace, but is not a human actor. This can include organizations, hardware devices, software applications, and information artifacts.
Source: DHS OIG 11-121

offensive cyberspace operations (OCO)

Cyberspace operations intended to project power by the application of force in or through cyberspace.
Source: DoD JP 3-12

persona

In military cyberspace operations, an abstraction of logical cyberspace with digital representations of individuals or entities in cyberspace, used to enable analysis and targeting. May be associated with a single or multiple entities.
Source: DoD JP 3-12

proactive cyber defense

A continuous process to manage and harden devices and networks according to known best practices.
Source: DSOC 2011

Red Team

A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment. Also known as Cyber Red Team.

regenerative cyber defense

The process for restoring capabilities after a successful, large scale cyberspace attack, ideally in a way that prevents future attacks of the same nature.
Source: DSOC 2011

The glossary also notes that "cybersecurity" supersedes both "computer security (COMPUSEC)" and "information assurance (IA)".

So, the CNSSI definition for "cybersecurity" plus a few other cyber-terms cite "NSPD-54/HSPD-23" as their source. That, in turn, appears to be a 2008 National Security Presidential Directive from the White House, originally classified top secret but then disclosed (with redactions) under the Freedom of Information Act in 2014 ... which goes some way towards explaining the ongoing confusion over cyber-terms. They could have elaborated but they'd have had to shoot us.

By the way, the CNSSI glossary also defines information security:

The protection of information and information systems from unauthorized access, use, disclosure, disruption, modification, or destruction in order to provide confidentiality, integrity, and availability.
Source: 44 U.S.C. Sec 3542

Perhaps you might like to compare and contrast that against the cybersecurity definition. I've got better things to do right now: time to check for any more lambs.

Aug 8, 2018

We're pretty busy in the IsecT office but it's not all work. While the Northern hemisphere seems to be burning up, the arrival of our first Spring lamb this morning signals our emergence from the depths of a chilly wet NZ Winter.

It's a boy, weighing about 3 kilos, I guess. Mother and son are doing fine. She's always had knock-knees, that one! He stays close - already into safety and security at about 8 hours old.

September's topic, 'outsider threats', is the obvious follow-up - a twin for August's module on "insider threats". This month's scope is reasonably straightforward except that once again we face the issue of people and organizations spanning organizational boundaries - contractors, consultants, temps, interns, ex-employees etc. - plus outsiders colluding with, socially engineering, manipulating, fooling or coercing insiders. Maybe there's enough there for a further awareness module at some future point, turning the twins into triplets!

For now we'll stick to Plan A, focusing on threatening outsiders - and there are many, quite a variety in fact. For completeness, we should probably mention benign, accidental or incidental outsider threats too, and we'll definitely pick up on vulnerabilities and impacts in the risk analysis, as well as exploring ways to avoid or mitigate outsider threats.

Leaning back from the keyboard, it occurs to me that there is no shortage of relevant issues here for awareness and training purposes - the very opposite in fact. Even at this early stage I'm already thinking about narrowing the scope.

Traditional IT/cybersecurity awareness approaches would barely have touched these topics, focusing purely on technology-related threats such as hackers. Broadening our perspective makes NoticeBored a more comprehensive service and, we trust, more interesting, engaging and thought-provoking, and more valuable. We'll bring up hacking, of course, and a whole lot more besides.

If your security awareness program consists of a few dog-eared posters and dire warning notices along the lines of "Comply with the policies ... or face the consequences", don't be surprised if bored stiff workers simply tune out. "La la la, can't hear you, don't see you ...". Worse still, the ones who pay attention find out about a narrow strip of a long, long tapestry. What are the chances of that strip covering all they ought to know, everything that matters? Not good.

Aug 4, 2018

This week I’ve been chatting with Aussie infosec blogger Endre about security policies. Although Endre elaborated very eloquently on the tradecraft of policy-writing, I don't think he had considered the variety of audiences/users of policies and their purposes. That diversity should be borne in mind when writing policies and supporting materials (guidelines, courses etc.), and when designing and documenting the associated processes/activities (awareness, training, oversight, compliance, metrics …) - an additional level of finesse to the tradecraft.

Today, a similar issue cropped up on the ISO27k Forum: Jose asked whether his organization might prepare, say, a single scoping document for multiple Management Systems sharing the same scope. Chris pointed out that there are several audiences for the MS documents, saying "You can structure the documents however you want as long as you are meeting the requirements of the standard but don’t forget that these documents need to be used by people."

Any Management System has several audiences/users and purposes. Its primary purpose (I would argue) is to allow the organization to manage stuff systematically and rationally, hence management is obviously the main audience/user, plus other stakeholders such as the organization’s owners, authorities, dependent business partners etc. with an interest in sound management. Certified compliance with, say, the ISO standards is a secondary purpose, along with assurance, demonstrable professionalism, adoption of good practices, continuous improvement and all that … with their respective audiences/users.

There's quite a bit of complexity there already, talking about individual Management Systems. It is hardly going to be simpler to implement multiple Management Systems in parallel - so why do it?

Commonality between Management Systems is not an objective but a means to an end: it should improve the net value (business benefits less costs) in various ways - simplifying and standardizing things, increasing familiarity, reducing duplication etc. ... implying the need for deeper analysis to elaborate on and optimize the overall value, perhaps even a business case. From there, it may or may not make sense to develop “Acme Corp’s Management Systems Unified Scope”, perhaps with a very bland and generic overall statement supported by more detailed appendices for each Management System ... but at the end of the day, that’s just formal high-level documentation. In the same way, the supporting lower-level documents should also be designed to optimize the overall value and utility for diverse purposes and audiences. Are those to be designed and written in the same format? At what point (if any) in the hierarchy does it make more sense to separate the documents completely?

There will be both similarities and differences in the processes and activities being managed by the Management Systems. Any ‘risk management’ process, for instance, is clearly about ‘managing risks’ but the nature of the risks varies, hence the risk identification and analysis varies, along with any subject matter experts who are involved, and the management, operational activities and controls arising.

It is feasible to document the common elements of “Acme Corp’s Risk Management Process” at an overall level, perhaps supported by appendices elaborating on the differences for each kind of risk with the respective audiences in mind. Maybe. But that may not be helpful: there will be similarities between the approaches in practice even without a common overall document, and it may be more important to allow the individual approaches to vary according to the context (e.g. information, privacy and safety risks have some overlap but substantial differences too: trying to shoehorn them all through the exact same standard risk management process could be sub-optimal for each of them).
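For the programmers among us, the 'unified scope' idea above can be sketched as a simple data model: one bland overall statement plus a detailed appendix per Management System. This is purely an illustrative sketch - the class and field names ("Acme Corp", `UnifiedScope`, `ScopeAppendix`) are hypothetical, not taken from any standard.

```python
from dataclasses import dataclass, field

@dataclass
class ScopeAppendix:
    management_system: str  # e.g. "ISMS" for an ISO/IEC 27001 system
    detail: str             # the MS-specific scope wording

@dataclass
class UnifiedScope:
    overall_statement: str  # the bland, generic statement shared by all MSs
    appendices: list[ScopeAppendix] = field(default_factory=list)

    def scope_for(self, ms: str) -> str:
        """Combine the generic statement with the MS-specific detail."""
        for appendix in self.appendices:
            if appendix.management_system == ms:
                return f"{self.overall_statement} {appendix.detail}"
        raise KeyError(f"No scope appendix for {ms}")

# A hypothetical example: two Management Systems sharing one overall scope.
scope = UnifiedScope(
    overall_statement="Acme Corp's operations at its Auckland site.",
    appendices=[
        ScopeAppendix("ISMS", "Covers information risks to business data."),
        ScopeAppendix("QMS", "Covers product quality processes."),
    ],
)
print(scope.scope_for("ISMS"))
```

The structure makes the trade-off visible: the common statement is written once, but every audience-specific nuance still has to live in its own appendix - and at some depth it may be simpler to separate the documents entirely, as discussed above.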