Category: people

This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.

Once a year or so, one of the big UK tech magazines or websites[1] does a survey where they send a group of people to one of the big London train stations and ask travellers for their password[3]. The deal is that every traveller who gives up their password gets a pencil, a chocolate bar or similar.

I’ve always been sad that I’ve never managed to be at the requisite station for one of these polls. I would love to get a free pencil – or even better a chocolate bar[4] – for lying about a password. Or even, frankly, for giving them one of my actual passwords, which would be completely useless to them without some identifying information about me. Which I obviously wouldn’t give them. Or again, would pretend to give them, but lie.

The point of this exercise[5] is supposed to be to expose the fact that people are very bad at protecting their passwords. What it actually identifies is that a good percentage of the British travelling public are either very bad at protecting their passwords, or are entirely capable of making informed (or false) statements in order to get a free pencil or chocolate bar[4]. Good on the British travelling public, say I.

Now, everybody agrees that passwords are on their way out, as they have been on their way out for a good 15-20 years, so that’s nice. People misuse them, reuse them, don’t change them often enough, etc., etc. But it turns out that it’s not the passwords that are the real problem. This week, more than one British MP admitted – seemingly without any realisation that they were doing anything wrong – that they share their passwords with their staff, or just leave their machines unlocked so that anyone on their staff can answer their email or perform other actions on their behalf.

This isn’t a password problem. It’s a misunderstanding-of-what-accounts-are-for problem.

People seem to think that, in a corporate or government setting, the point of passwords is to stop people looking at things they shouldn’t.

That’s wrong. The point of passwords is to allow different accounts for different people, so that the appropriate people can perform the appropriate actions, and be audited as having done so. It is, basically, a matching of authority to responsibility – as I discussed in last week’s post Explained: five misused security words – with a bit of auditing thrown in.
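To make that concrete, here's a minimal sketch (in Python, with entirely hypothetical account and action names) of what accounts actually buy you: an audit trail only proves anything if each entry maps to a single, identifiable person.

# A minimal sketch (all names hypothetical) of why one-account-per-person matters:
# the audit trail is only meaningful if each entry maps to one identifiable individual.
from datetime import datetime, timezone

audit_log = []

def perform_action(account, action):
    """Record which account did what, and when, before carrying it out."""
    audit_log.append({
        "account": account,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    # ... the action itself would be carried out here ...

# Individual accounts: each log entry attributes the action to a person.
perform_action("mp_jane_smith", "send_constituency_email")
perform_action("staffer_sam_jones", "send_constituency_email")

# A shared account (or an unlocked machine): the log still fills up, but it can
# no longer tell you who acted, so it can neither exonerate nor incriminate anyone.
perform_action("office_shared", "send_constituency_email")

for entry in audit_log:
    print(entry)

Share a password, or leave the machine unlocked, and every entry in that log points at nobody in particular.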

Now, stopping people looking at things they shouldn’t is certainly one of the things that accounts are for, but it’s not the main thing. But if you misuse accounts in the way that has been exposed in the UK parliament, then worse things are going to happen. If you willingly bypass accounts, you are removing the ability of those who have a responsibility to ensure correct responsibility-authority pairings to track and audit actions. You are, in fact, setting yourself up with excuses before the fact, but also making it very difficult to prove wrongdoing by other people who may misuse an account. A culture that allows such behaviour is one which doesn’t allow misuse to be tracked. This is bad enough in a company or a school – but in our seat of government? Unacceptable. You just have to hope that there are free pencils. Or chocolate bars[4].

1. I can’t remember which, and I’m not going to do them the service of referencing them, or even looking them up, for reasons that should be clear once you read the main text.[2]

2. I’m trialling a new form of footnote referencing. Please let me know whether you like it.

3. I guess their email password, but again, I can’t remember and I’m not going to look it up.

No, really: what’s the point of it? I’m currently at the OpenStack Summit in Sydney, and was just thrown out of the exhibition area as it’s closed for preparations for an event. A fairly large gentleman in a dark-coloured uniform made it clear that if I didn’t have a particular sticker on my badge, then I wasn’t allowed to stay. Now, as Red Hat* is an exhibitor, and our booth manager is a buddy** of mine, she immediately offered me the relevant sticker, but as I needed a sit down and a cup of tea***, I made my way to the exit, feeling only slightly disgruntled.

“What’s the point of this story?” you’re probably asking. Well, the point is this: I’m 100% certain that if I’d asked the security guard why he was throwing me out, he wouldn’t have had a good reason. Oh, he’d probably have given me a reason, and it might have seemed good to him, but I’m really pretty sure that it wouldn’t have satisfied me. And I don’t mean the “slightly annoyed, jet-lagged me who doesn’t want to move”, but the rational, security-aware me who can hopefully reason sensibly about such things. There may even have been such a reason, but I doubt that he was privy to it.

My point, then, really comes down to this. We need to be able to explain the reasons for security policies in ways which:

Can be expressed by those enforcing them;

Can be understood by uninformed users;

Can be understood and queried by informed users.

For the first point, I don’t even necessarily mean people: sometimes – often – it’s systems that are doing the enforcing. This makes things even more difficult in some ways: you need to think about UX***** in ways that are appropriate for two very, very different constituencies.

I’ve written before about how frustrating it can be when you can’t engage with the people who have come up with a security policy, or implemented a particular control. If there’s no explanation of why a control is in place – the policy behind it – and it’s an impediment to the user*******, then it’s highly likely that people will either work around it, or stop using the system.

Unless. Unless you can prove that the security control is relevant, proportionate and beneficial to the user. This is going to be difficult in many cases. But I have seen some good examples, which means that I’m hopeful that we can generally do it if we try hard enough. The problem is that most designers of systems don’t bother trying. They usually don’t bother to provide any description at all – and as for making that description show the relevance, proportionality and benefits to the user? Rare, very rare.

This is another argument for security folks being more embedded in systems design because, like it or not, your users are part of your system, and they need to be included in the design. Which means that you need to educate your UX people, too, because if you can’t convince them of the need to do security, you’re sure as heck not going to be able to convince your users.

And when you work with them on the personae and use case modelling, make sure that you include users who are like you: knowledgeable security experts who want to know why they should submit themselves to the controls that you’re including in the system.

Now, they’ve opened up the exhibitor hall again: I’ve got to go and grab myself a drink before they start kicking people out again.

*my employer.

**she’s American.

***I’m British****.

****and the conference is in Australia, which meant that I was actually able to get a decent cup of the same.

*****this, I believe, stands for “User eXperience”. I find it almost unbearably ironic that the people tasked with communicating clearly to us can’t fully grasp the idea of initial letters standing for something.

… security as a topic is one which is interesting, fast-moving and undeniably sexy…

DISCLAIMER/STATEMENT OF IGNORANCE: a number of regular readers have asked why I insist on using asterisks for footnotes, and whether I could move to actual links, instead. The official reason I give for sticking with asterisks is that I think it’s a bit quirky and I like that, but the real reason is that I don’t know how to add internal links in WordPress, and can’t be bothered to find out. Apologies.

I don’t know if you’ve noticed, but pretty much everything out there is “next generation”. Or, if you’re really lucky, “Next gen”. What I’d like to talk about this week, however, is the actual next generation – that’s people. IT people. IT security people. I was enormously chuffed* to be referred to on an IRC channel a couple of months ago as a “greybeard”***, suggesting, I suppose, that I’m an established expert in the field. Or maybe just that I’m an old fuddy-duddy***** who ought to be put out to pasture. Either way, it was nice to come across young(er) folks with an interest in IT security******.

So, you, dear reader, and I, your beloved protagonist, both know that security as a topic is one which is interesting, fast-moving and undeniably******** sexy – as are all its proponents. However, it seems that this news has not yet spread as widely as we would like – there is a worldwide shortage of IT security professionals, as a quick check on your search engine of choice for “shortage of it security professionals” will tell you.

Last week, I attended the Open Source Summit and Linux Security Summit in LA, and one of the keynotes, as it always seems to be, was Jim Zemlin (head of the Linux Foundation) chatting to Linus Torvalds (inventor of, oh, I don’t know). Linus doesn’t have an entirely positive track record in talking about security, so it was interesting that Jim specifically asked him about it. Part of Linus’ reply was “We need to try to get as many of those smart people before they go to the dark side [sic: I took this from an article by the Register, and they didn’t bother to capitalise. I mean: really?] and improve security that way by having a lot of developers.” Apart from the fact that anyone who references Star Wars in front of a bunch of geeks is onto a winner, Linus had a pretty much captive audience just by nature of who he is, but even given that, this got a positive reaction. And he’s right: we do need to make sure that we catch these smart people early, and get them working on our side.

Later that week, at the Linux Security Summit, one of the speakers asked for a show of hands to find out the number of first-time attendees. I was astonished to note that maybe half of the people there had not come before. And heartened. I was also pleased to note that a good number of them appeared fairly young*********. On the other hand, the number of women and other under-represented demographics seemed worse than in the main Open Source Summit, which was a pity – as I’ve argued in previous posts, I think that diversity is vital for our industry.

This post is wobbling to an end without any great insights, so let me try to come up with a couple which are, if not great, then at least slightly insightful:

we’ve got a job to do. The industry needs more young (and diverse) talent: if you’re in the biz, then go out, be enthusiastic, show what fun it can be.

if showing people how much fun security can be doesn’t do the trick, encourage them to do a search for “IT security median salaries comparison”. It’s amazing how a pay cheque********** can motivate.

*note to non-British readers: this means “flattered”**.

**but with an extra helping of smugness.

***they may have written “graybeard”, but I translate****.

****or even “gr4yb34rd”: it was one of those sorts of IRC channels.

*****if I translate each of these, we’ll be here for ever. Look it up.

******I managed to convince myself******* that their interest was entirely benign though, as I mentioned above, it was one of those sorts of IRC channels.

*******the glass of whisky may have helped.

********well, maybe a bit deniably.

*********to me, at least. Which, if you listen to my kids, isn’t that hard.

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment.

Fair warning: this is not really a blog post about IT security, but about issues which pertain to our industry. You’ll find social sciences and humanities – “soft sciences” – referenced. I make no excuses (and I should declare previous form*).

Warning two: many of the examples I’m going to be citing are to do with gender discrimination and imbalances. These are areas that I know the most about, but I’m very aware of other areas of privilege and discrimination, and I’d specifically call out LGBTQ, ethnic minority, age, disability and non-neurotypical discrimination. I’m very happy to hear (privately or in comments) from people with expertise in other areas.

You’ve probably read the leaked internal document (a “manifesto”) from a Google staffer challenging affirmative action intended to address diversity, and complaining about a liberal/left-leaning monoculture at the company. If you haven’t, you should: take the time now. It’s well-written, with some interesting points, but I have some major problems with it that I think are worth addressing. (There’s a very good rebuttal of certain aspects available from an ex-Google staffer.) If you’re interested in where I’m coming from on this issue, please feel free to read my earlier post: Diversity in IT security: not just a canine issue**.

There are two issues that concern me specifically:

no obvious attempt to acknowledge the existence of privilege and power imbalances;

the attempt to advance the gender essentialism argument by alleging an overly leftist bias in the social sciences.

I’m not sure whether these approaches are intentional or unconscious, but they’re both insidious and, if ignored, allow more weight to be given to the broader arguments put forward than I believe they merit. I’m not planning to address those broader issues: there are other people doing a good job of that (see the rebuttal I referenced above, for instance).

Before I go any further, I’d like to record that I know very little about Google, its employment practices or its corporate culture: pretty much everything I know has been gleaned from what I’ve read online***. I’m not, therefore, going to try to condone or condemn any particular practices. It may well be that some of the criticisms levelled in the article/letter are entirely fair: I just don’t know. What I’m interested in doing here is addressing those areas which seem to me not to be entirely open or fair.

Privilege and power imbalances

One of the recurring arguments against affirmative action from majority-represented groups is that it’s unfair that the under-represented group has comparatively special treatment. “Why is there no march for heterosexual pride?” “Why are there no men-only colleges in the UK?” The generally accepted argument is that, until there is equality in the particular sphere in which a group is campaigning, the power imbalance and privilege afforded to the majority-represented group mean that there may be a need for action to help members of the under-represented group to achieve parity. That doesn’t mean that members of that group are necessarily unable to reach positions of power and influence within that sphere, just that, on average, the effort required will be greater than that for those in the majority-privileged group.

What does all of the above mean for women in tech, for example? That it’s generally harder for women to succeed than it is for men. Not always. But on average. So if we want to make it easier for women (in this example) to succeed in tech, we need to find ways to help.

The author of the Google piece doesn’t really address this issue. He (and I’m just assuming it’s a man who wrote it) suggests that women (who seem to be the key demographic with whom he’s concerned) don’t need to be better represented in all parts of Google, and that affirmative action is therefore inappropriate. I’d say that even if the first part of that thesis is true (and I’m not sure it is: see below), affirmative action may still be required for those who do.

The impact of “leftist bias”

Many of the arguments presented in the manifesto are predicated on the following thesis:

the corporate culture at Google**** is generally leftist-leaning

many social sciences are heavily populated by leftist-leaning theorists

these social scientists don’t accept the theory of gender essentialism (that women and men are suited to different roles)

HENCE if a truly diverse culture is to be encouraged within corporate culture, the leftist rejection of theories such as gender essentialism should itself be rejected.

There are several flaws here, one of which is the assumption that diversity means accepting views which are anti-diverse. It’s a reflection of a similar right-leaning fallacy: that in order to show true tolerance, the views of intolerant people should be afforded the same privilege as those of people who are aiming for greater tolerance.*****

Another flaw is the argument that, just because a set of theories is espoused by a political movement to which one doesn’t subscribe, it is therefore suspect.

Conclusion

As I’ve noted above, I’m far from happy with much of the so-called manifesto from what I’m assuming is a male Google staffer. This post hasn’t been an attempt to address all of the arguments, but to attack a couple of the underlying arguments, without which I believe the general thread of the document is extremely weak. As always, I welcome responses either in comments or privately.

*my degree is in English Literature and Theology. Yeah, I know.

**it’s the only post on which I’ve had some pretty negative comments, which appeared on the reddit board from which I linked it.

***and is probably therefore just as far off the mark as anything else that you or I read online.

****and many other tech firms, I’d suggest.

*****an appeal is sometimes made to the left’s perceived poster child of postmodernism: “but you say that all views are equally valid”. That’s not what postmodern (deconstructionist, post-structuralist) theory actually says. I’d characterise it more as:

all views are worthy of consideration;

BUT we should treat with suspicion those views held by those with privilege, or which privilege those with power.

As you may have noticed*, there was something of a commotion over the past week when the WannaCrypt ransomware infection spread across the world, infecting all manner of systems**, most notably, from my point of view, many NHS systems. This is relevant to me because I’m UK-based, and also because I volunteer for the local ambulance service as a CFR (Community First Responder). And because I’m a security professional.

I’m not going to go into the whys and wherefores of the attack, the importance of keeping systems up to date, the morality of those who spread ransomware***, how to fund IT security, or the politics of patch release. All of these issues have been dealt with very well elsewhere. Instead, I’m going to discuss talking to people.

I’m slightly hopeful that this most recent attack is going to have some positive side effects. Now, in computing, we’re generally against side effects, as they usually have negative unintended consequences, but on Monday, I got a call from my Dad. I’m aware that this is the second post in a row to mention my family, but it turns out that my Dad trusts me to help him with his computing needs. This is somewhat laughable, since he uses a Mac, which employs an OS of which I have almost no knowledge****, but I was pleased that he even called to ask a question about it. The question was “am I safe from this ransomware thing?” The answer, as he’d already pretty much worked out, was “yes”, and he was also able to explain that he was unsurprised, because he knew that Macs weren’t affected, and because he keeps it up to date, and because he keeps backups.

Somebody, somewhere (and it wasn’t me on this occasion) had done something right: they had explained, in terms that my father could understand, not only the impact of an attack, but also what to do to keep yourself safe (patching), what systems were most likely to be affected (not my Dad’s Mac), and what to do in mitigation (store backups). The message had come through the media, but the media, for a change, seemed to have got it right.

I’ve talked before about the importance of informing our users, and allowing them to make choices. I think we need to be honest, as well, about when things aren’t going well, when we (singularly, or communally) have made a mistake. We need to help them to take steps to protect themselves, and when that fails, to help them clear things up.

And who was it that made the mistake? The NSA, for researching vulnerabilities, or for letting them leak? Whoever it was who leaked them? Microsoft, for not providing patches? The sysadmins, for not patching? The suits, for not providing money for upgrades? The security group, for not putting sufficient controls in place to catch and contain the problem? The training organisation, for not training the users enough? The users, for ignoring training and performing actions which allowed the attack to happen?

Probably all of the above. But, in most of those cases, talking about the problem, explaining what to do, and admitting when we make a mistake, is going to help improve things, not bring the whole world crashing down around us. Talking, in other words, to “real” people (not just ourselves and each other*****): getting out there and having discussions.

Sometimes a lubricant can help: tea, beer, biscuits******. Sometimes you’ll even find that “real” people are quite friendly. Talk to them. In words they understand. But remember that even the best of them will nod off after 45 minutes or so of our explaining our passion to them. They’re only human, after all.

*unless you live under a rock.

**well, Windows systems, anyway.

***(scum).

****this is entirely intentional: the less I know about their computing usage, the easier it is for me to avoid providing lengthy and painful (not to mention unpaid) support services to my close family.

*****and our machines. Let’s not pretend we don’t do that.

******probably not coffee: as a community, we almost certainly drink enough of that as it is.