Part 1 of this blog post gave a summary of some of the issues we face when trying to detect, prevent and respond to phishing attacks. The upshot is that it isn't easy. Technical controls are theoretically possible, but in any organisation there are technical, financial, political and social constraints. One thing we can all do is try to educate our own users, but is there any point, and how can we do so in a way that is both effective and measurable?

The whole issue of security awareness and education is often a contentious one. Awareness programs tend to be compliance-driven, focussing on areas like data protection, but it is hard to know whether this type of activity has any meaningful effect in terms of actual security. When it comes to phishing there are also some pretty good arguments against training and awareness. Despite having a (hopefully) intelligent population here at the University of Oxford, issues of recent weeks and months show that there are still plenty of gullible users, and this comes at a significant cost to the University. But even if we are dealing with up to 10 compromised accounts following a phishing attack involving 1,000 phishing emails, that still means that 99% of users didn't respond. And a significant number of people here now actively report phishing to us, which is one of the main "sensors" we use to detect attacks. So awareness is pretty good, right? How do you improve on 99%? Even if you do want to send out warnings, how can you effectively warn people and explain the difference between legitimate emails and phishing emails? Phishing emails, after all, are designed to look like real emails, and every discussion we've had here about warning emails has been lengthy. How do you know whether anyone reads them or cares? What is the right message to get across? How do you get the message across accurately and succinctly while still providing all the relevant information? And, importantly, how do you make it not appear to be a phish itself?
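To make the arithmetic behind that rough awareness figure concrete, here is a minimal sketch (the numbers are the illustrative ones from above, not real incident data):

```python
# Illustrative numbers only: 1,000 phishing emails sent, up to 10 accounts compromised.
emails_sent = 1000
compromised = 10

# Fraction of recipients who did not hand over their credentials.
did_not_respond = 1 - compromised / emails_sent
print(f"{did_not_respond:.0%} of users didn't respond")
```

Of course, counting everyone who didn't reply as "aware" is generous: it lumps together people who spotted the phish, people who never read the email, and people who simply didn't get round to it.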

Perhaps then it isn't worth bothering, and we should instead focus on technology-based solutions? Well, better technology certainly needs to be explored, but I wonder whether looking at it like this is actually skewing the argument. If we were talking about other security controls we'd probably be thinking in a more targeted way. For example, if you wanted to do some penetration testing for SQL injection vulnerabilities you wouldn't necessarily just run your tests against every machine you run. First you'd probably audit your network to find out where SQL databases are available, run some initial testing to determine which of those might be vulnerable, and then target the more aggressive testing at those particular instances where the risk is especially high or where vulnerabilities are suspected. So can awareness training also be more focused, and can we target the users who are most vulnerable in a way that is measurable? I think, with phishing at least, the answer is probably yes, and here is why and how…

Why Phish?

Not many users fall for phishing twice, so why not target your own users? What better way of finding out who in your organisation is vulnerable to phishing than actually phishing them before the bad guys do? That way you can actually target the 1%. Recently, however, the very suggestion of this idea amongst the IT community here led to some major objections, so I'd like to understand in more detail what some of those objections are. First, though, I thought it would be worth presenting the arguments as I see them. Again, if this were some other security vulnerability/control and we were looking to do penetration testing then it probably wouldn't even be an issue. So what is the difference with users? By phishing its own users an organisation can obtain genuine metrics based on those who report the phish, those who do nothing and those who actually fall for the scam. Not only can you provide some meaningful information on the effectiveness of your program, you can target the training at the users who need it the most. And, just as for the bad guys, it is cheap, very simple to do and effective, so it is something you can repeat on a regular basis. You could even throw in an incentive, like entering those who report the emails as phish into some sort of prize draw. This isn't something new we've just dreamt up either: it is promoted by Lance Spitzner of SANS as part of their "Securing the Human" program. If you want a more detailed overview of how to do it, see Lance's webinar. There are also many other examples of similar campaigns, and numerous tools available such as Wombat Security's PhishGuru, PhishMe and PhishLine.
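As a rough illustration of the kind of metrics such a campaign could produce, here is a sketch that tallies per-user outcomes across simulated runs; the names, outcome labels and data are invented for the example:

```python
from collections import Counter

# Hypothetical results of two simulated phishing runs; each user's outcome
# is one of "reported", "ignored" or "clicked" (i.e. fell for the scam).
campaigns = [
    {"alice": "reported", "bob": "ignored", "carol": "clicked", "dave": "ignored"},
    {"alice": "reported", "bob": "clicked", "carol": "clicked", "dave": "reported"},
]

def summarise(run):
    """Aggregate outcome counts for one campaign."""
    return Counter(run.values())

def repeat_clickers(runs, threshold=2):
    """Users who fell for the scam in at least `threshold` campaigns --
    the ones to target with follow-up training."""
    clicks = Counter(user for run in runs
                     for user, outcome in run.items() if outcome == "clicked")
    return {user for user, n in clicks.items() if n >= threshold}

for i, run in enumerate(campaigns, 1):
    print(f"campaign {i}:", dict(summarise(run)))
print("repeat clickers:", repeat_clickers(campaigns))
```

Comparing the summaries between runs gives the trend over time, while the repeat-clicker set identifies who to approach individually, without anything having to go via their managers.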

Concerns

A few people, though, have commented that they would have major objections to such an approach, stating that it would be a "step too far". Not all of these concerns have been qualified, but those that I am aware of surround privacy and the erosion of the trust that users have in IT staff if we were seen to be trying to trick them. Let's deal with privacy first, the main concern here being that the names of users replying to phishing emails would be revealed to management or others. Well, that might be a legitimate concern. After all, we don't want users thinking we are just trying to catch them out and get them into trouble. But I think that is fairly easy to overcome by only reporting repeat offenders across phishing campaigns. There are plenty of other ways to get the message across to people without the issue having to go via their own managers. Some have expressed concerns that users will actually give us their passwords. Whether this is a problem or not is debatable: if users do reply to us with their password we would reset it and get them to create a new one, as we currently do whenever a user reveals their password. But, in any case, the technique could easily be set up so that we don't receive that particular information.

In terms of trust then, again, I can see the argument here but I don't believe it is an insurmountable problem. For example, any awareness campaign could be announced to all users at the very beginning, and as long as we communicate clearly and effectively with users who respond then I don't see a major problem. When OxCERT replied to all the users who complained that we had temporarily blocked Google Docs and explained the reasons, almost every one of them fully understood why we had acted. Besides, the point here is that we don't want users to trust emails asking for credentials regardless of where they come from, and surely it is better for users that we try to phish them before the bad guys do? Of course there would undoubtedly be some people who would just be unhappy, but we get this anyway. One senior academic here recently verbally abused our help centre staff and called the whole of IT Services b******s for making him/her change his/her password. This was after he/she had responded to a phishing scam. I'm not sure we should be going out of our way to avoid upsetting people like this.

An effective solution or major mistake?

So we have discussed a means of carrying out some form of penetration testing that is very cheap, easy and effective. At worst it will provide genuinely meaningful metrics that could be used to assess the state of our human defences and also (over a period of time) to demonstrate any improvement (or not) in those defences based on measures that we take. At best it will allow us to target our training at our most vulnerable users allowing them to protect themselves and protect University assets whilst we are at it. Yes, there may be pitfalls but nothing that I can think of that can’t be overcome with a little thought, advice and learning lessons from those who have carried out this type of activity already. So, we could carry on arguing over how to word warnings about phishing emails that no-one really reads anyway or whether or not to put links in emails. We could continue to play whack-a-mole with compromised accounts whilst everyone else tells us what we should be doing. We could, and should, explore technological solutions but that will inevitably take time and may not improve things anyway. Or we could do something different, cheap and effective. I would welcome your thoughts on which it should be.

11 Responses to “To Phish or Not to Phish? Part 2”

I think it’s fast becoming time to wonder why a large organisation with better things to do provides an external email service at all.

Not that people don’t need email, but there are providers of such on the wider internet.

An internal messaging system could be secured with more modern techniques and be closed to external users. Systems like Eduroam show that, with HE/FE co-operation, almost all legitimate messaging could be carried out in such a secured environment.

Who, of the technically savvy, lacks a personal email account and how many people without them couldn’t be shown how to acquire one?

Is it really worth the bother? An awful lot of useful things could be done with the money saved by places like Universities discontinuing their wild-west unauthenticated contractless SMTP-based email services, probably easily enough to create a closed, secured alternative, with legitimate chains of trust based on contract. A hundred institutions, each probably employing one or two people just to handle email: four million a year should produce something good.

I realise this is a controversial position, but I think it is worth proposing.

Just to be clear, are you proposing not running an email service at all, or outsourcing to cloud providers? If the former, are you aware of anyone who has done this, and how would people know who they are emailing?

The proposal would be that no email service at all is run, at least for the vast majority of users. To avoid various degrees of future-shock an optional forwarding service could be run for incoming mail, or just for major institutional role addresses, or a rump email service for major roles, and a self-serve directory service including preferred contact details to allow administrative functions to be carried out by the institution.

Emails apparently coming from institutional addresses have very little true trust value. First, they could be faked; second, they likely comprise an obscure user identifier in the user part, which would require further lookup; third, the sender could turn out not to be /that/ John Smith. Since verifying an address means looking it up anyway (for example via the web), the domain part has little value. Similarly for sending mail: the chances are that I can't guess someone's address in its entirety, and the scope of the institution is weakly defined enough that knowing the domain part gives you very few guarantees.

As with most people at other places, if I want to contact them and don’t know who they are I look them up (probably on the web) and use the address they provided based not only on the URL but also on the page content. Half the time people have a preferred address other than their institutional mail anyway. (For the last six years of my time at The Other Place, I just had an autoresponder on my institutional mail). Most academics move from place to place, so there’s a good chance an old institutional email in a paper, or whatever, is bust anyway.

External role addresses are a little different (hence my caveat above). But again, other than a few meta-addresses (like “abuse@”), someone outside the institution needs to look them up anyway, and shouldn’t be trusting unsigned emails out of the blue purporting to be from provost@uni.ac.uk. If something coming from an odd domain makes them suspicious, all the better.

For efficient and secure transmission of confidential and authenticated internal messages another (vendor-neutral) infrastructure would be needed (and is desperately needed anyway), and that’s where the effort currently wasted dealing with lunatics selling dodgy watches could be focused: establishing chains of trust and contracts, Shib-style. (Banks now simply say “we will never email you; you will get messages through our (multi-factor) messaging service”, and other places could do the same.)

In the early days, there was great value to Universities being involved in developing these internet services by applying their guile and cunning, but this isn’t a frontier land any more and so it’s mainly size, power and money which keep your email service on top of these problems.

I’d be very much against out-sourcing or vendor deals as an alternative or watered-down version of this. These deals are very often hugs of the Boa Constrictor variety, and I think there would rightly be objections regarding freedom.

I realise that if you were to try this idea early on at Oxbridge, then you’d get nowhere: just a load of folk in gowns putting out position papers, columns in the big papers, an inquiry from an ex-judge and some folk out on their ear. But at more progressive places, I think people might consider it. And then maybe Oxbridge would catch up a few decades down the line.

Sorry Jonathan, I just think it’s a terrible idea to start setting up such phishing traps.

As someone who works closely with, and supports, staff and students, a vital part of a good working relationship is building and maintaining trust. Despite what you say, actively targeting unsuspecting members of the university breaks that trust. Nobody likes to look a fool and by catching people out (privately or publicly) you’re doing just that.

You suggest the matter of trust can be bolstered by clear and effective communication. This is predicated on people i) reading the bulk emails (I know many don’t) and ii) giving a fig. If a person no longer trusts *any* emails from OUITS, then no level of ‘clear and effective’ communication will help build or bolster that trust.

You downplay the level of objection to the proposal but this is the first time I’ve heard that OUITS are actively contemplating it. No sane person would consider a discussion on an email list as a fair and balanced view of the subscribers of that list.

If this plan did go ahead, I’d feel obliged to inform the members of my department to ignore any and all emails concerning their accounts, whether from OUCS, OUITS, the Vice-Chancellor or God himself. Of course they’d also start to ignore emails from me, but they do that already.

Many thanks for your thoughts. I don’t mean to downplay the level of objection – in fact this blog post is specifically intended to gauge that and to get a better overall picture of the pros and cons. IT Services aren’t contemplating this at the moment – it is just an idea – but if I did propose it I’d like to present a balanced argument and make sure that ITSS have had their chance to comment first.

In terms of the trust, then, I’m slightly intrigued. Putting myself in the position of a gullible user, I can either respond to a blatant phish, have my account compromised, give the bad guys access to my emails, contacts and other information, have my account disabled and potentially disrupt the entire email service. OR I could reply to a blatant phish, receive some sort of warning and advice (this would need some thought, obviously) and carry on with my business, hopefully wiser for the experience. As a non-gullible user I would either ignore the emails (no problem) or report them as I always do (great), with little difference from a “genuine” phish.

Does this really run the risk of eroding trust too severely or could it actually build trust? Perhaps that is optimistic but is why I’m keen to hear from people who have already done this.

One final thought: If users didn’t *trust* any mails concerning their account would that be success or failure?

As long as victims are handled sensitively, which I’m sure they would be, then I think it’s a good addition to the other measures that are in place. If IT Services want to carry out a limited trial with a department, I volunteer to collect data about the number of reports and the time spent dealing with the incidents.

Gathering metrics does seem a very useful endeavour, allowing us to locate the vulnerable groups so that we can target them with more education.

However, don’t we already have these metrics? i.e. from the existing attacks?

Assuming we do, then the only argument ‘for’ us phishing our own users is to catch the vulnerable users before the bad guys do. In this case it’s definitely a balance of risk against reward. The risk of irritating the user or losing their trust against the numbers of potential accounts that could be compromised by the next attack.

My feeling is that we should already have statistics which we can use to identify vulnerable groups. Using this information we can now work on strategies (e.g. anti-phishing posters in college common rooms) to target these groups.

Sorry – to expand on my point of risk against reward, we are risking the ire and loss of trust of anything up to 100% of email users with our own phishing attack. Balance that against the relatively small numbers of users actually caught in a genuine attack.

Well, yes, we do have some statistics (which is what the rough awareness figures are based on), but not all phishing runs go out to everyone, and the stats are just one of the advantages. The other is that you can get an immediate message across to vulnerable users and hopefully prevent their accounts from actually being compromised.

Of course there is an argument (as you say) that the phishers are already doing this job for us.

Jim,
For our institution, the metrics we could get from a penetration test would be much more helpful than looking back at the logged incidents of compromised accounts. Lately we have discovered that the phishers have the credentials of some accounts but do not use them immediately, so there is no way for us to track down the compromised accounts. I for one like this idea and may explore it for our institution as one part of a several-pronged defense. We may combine it with a campus-wide forced password reset to get our accounts back to baseline.