I’ve got a question regarding how much “mistake tolerance” is expected in the workplace.

Just to give you some background, I’m a (tech) team lead, which, in my case, means my daily job is not very different from that of other team members, except for the part that I get to make technical decisions concerning the projects we are doing. That includes deadlines, technologies, methodologies, features to be included, etc., and, most importantly, I decide whether a piece of work by any team member is acceptable. However, I don’t “manage” people; that is, I don’t give time off, I don’t give them feedback, I don’t decide their raise, etc. There’s a manager to do that.

Now to the main question. I have very low, almost zero, tolerance for mistakes. Whenever I see a mistake in anyone’s work, especially trivial ones, I will get very angry. The rationale in my head is always “We have ONE job and one job only, and that’s to get this done! No excuses.” As such, I will remove the person from the project, in addition to having a detailed (sometimes heated) conversation with both the person and our manager on why such mistakes are not allowed in my team.

So how bad is this? I know my intolerance could probably be attributed to some sort of OCD, and I sort of know it is not good. But I just cannot forgive mistakes easily. Do you have any advice?

Yeah, what you’re doing sounds pretty bad.

I see two issues here: First, your expectations about normal amounts of errors are off. And second, you’re taking it really personally when mistakes happen and you’re having an emotional reaction where one isn’t warranted, rather than handling it professionally. (Which, as people are pointing out in the comment section, is a mistake in itself! So there’s some irony there.)

On the first issue, people are going to make mistakes because you work with humans, not robots, and humans make mistakes. If someone makes a mistake occasionally, that is normal, and you should treat it as normal rather than as an outrage. Perhaps you’re the very rare person who truly never makes mistakes in your work; if so, you’re something of a unicorn, and good for you. But if you want to work with other people, you have to recognize that this isn’t typical. If you expect everyone else to be a unicorn too, no one will want to work with you, because your expectations will be out of touch with reality.

Now, obviously there’s a point where someone is making too many mistakes. And that brings us to the second issue, which is how to handle it when that happens.

Right now, you’re reacting very emotionally: you’re getting angry and having heated conversations. There should rarely be any need for that at work, and by doing it, you’re almost certainly alienating people and making no one want to work with you. That’s a big deal: not only are you making working with you a bad experience for other people, but you’re also damaging your own professional reputation. That will matter when you’re looking for a promotion, a raise, or a new job, or even just when you want to be included on a project and your colleagues would rather not have you on it.

Here’s the thing that you’re losing sight of: At work, you have the tools you need to solve problems calmly and rationally. Getting angry and emotional says to other people that you don’t know how to do that. It makes you look out of control, and it can make you look inept. You don’t want that.

Your goal needs to be to solve the problem, not to punish people or let them know how wrong they are or how much they frustrated you. Instead of having a heated reaction, you just need to deliver information calmly and clearly.

That means that if someone makes a single mistake, all you need to do is say something like this: “I found mistake X. Can you take a look at it and fix it for me today?” If relevant, you can add, “Let me know if you’re not clear on what I’m talking about and I can walk you through it” and/or “Can you figure out how that happened so we can make sure to avoid it in future rounds?”

And if someone makes mistakes regularly, that’s a pattern you need to talk to their manager about, since their manager is responsible for addressing it. And that should be a calm, matter-of-fact conversation — as in “Fergus is regularly making mistakes like X and Y. I’ve pointed it out to him, but it’s continuing to happen and I’m concerned about the pattern. It’s causing me to have to redo his work and making me reluctant to keep him on the project.”

But there’s almost no reason to ever have a heated conversation over a mistake. This stuff shouldn’t be so emotional.

If you find that you can’t control your emotions about mistakes, it’s probably worth exploring with a competent therapist — because a pattern of strong negative reactions to something that doesn’t warrant that intensity is usually connected to something more deeply rooted, and likely isn’t about work at all.

Dear Captain Awkward, I’m dating someone wonderful who really loves me, he (IT’S ALWAYS HE, DON’T @ME) but he has terrible political views, like, he thinks immigrants and black people and women and gay people and trans people aren’t really people something something about biological inferiority and it’s okay to violence them but only when they deserve it? I know it’s just how he grew up, he has a good heart and doesn’t really mean it, Confederate flags/”traditional” views are just part of his heritage. I’ve tried discussing this with him but he always talks over me. Can you help me explain my views better? I’m sure I can convince him if I just try hard enough? Can this relationship work?

(Nazi flags and Confederate flags are best buds they like to go drinking together and talk about wars they got their asses kicked in and remember the good old days of being giant fucking racist losers.)

The heart wants what it wants but I gotta ask what would it take for you to break up with a dude who talks about “many sides” and “yeah but free speech is important” and “we can’t waste time with identity politics” right about now? I guarantee some of those tiki torch Connors and Trents and Wyatts are going home to cuddles and pie tonight. Maybe with you.

I know how you got here even if you don’t. They know how to hide this stuff in “polite” company and save the nastiness for anonymous forums. They use dog whistles. They make jokes that aren’t jokes. They play the Devil’s advocate. They say ridiculous things on purpose so that you can think to yourself “He can’t really believe that, can he?” They trick you with occasional actual orgasms and doing their fair share of the dishes and decent hygiene and god, you were alone for so long, and you finally found someone who is not repulsive in the shallow dating pool where you live, do you really have to dump this living, breathing human being who likes the same geeky stuff you like and who holds doors open for your mom and who probably is just doing his best, all to prove some abstract point? How can these people know better if no one will teach them how to be better? Can’t that be you, and in return you get to keep this nice boyfriend who smells good and who has a decent job and who checks all of your other “don’t be a giant racist turd” boxes? There’s good in him, you’ve felt it, surely this can be fixed?

They wait until they’ve charmed you, until they’ve met your parents, until things are all comfortable between you, to show their true colors, betting on the fact that you’d be too far in to leave.

I know you’re embarrassed and it’s embarrassing as fuck but it’s not too late to get out of there. I know it’s not fair. Cut. Your. Losses.

I’m not making fun. I am deadly serious. It is only getting worse. At least one person died today behind this. We can’t lose you, too. Make a safety plan. Go quietly, but go.

This email arrived in my inbox more than a week after I was supposed to be notified of this organization’s decision on a higher-level volunteer position I’d applied for. The first sentence is the only one that seemed personally written for me.

I feel like I should respond politely but I’m angry that in all of this poetry they never state outright that they went with someone else nor that they are rejecting me. It’s so much language but completely indirect. Like, we decided? We decided what?!? I mean, I’m not obtuse so I know what they decided (also because they actually sent out an announcement about what they decided to all the org’s members).

Am I wrong to feel insulted by this form letter? Can I respond in a way that makes it clear that I don’t appreciate the mass message and the lack of directness, or is that just a no-go? Here’s the message:

Hi!

Thank you so much for your application and time on our call!

On the bright side, you can wake to birds chirping. Not beeping texts. On the bright side, you can stay out longer. Instead of going home early to get on a call. On the bright side, you can have a bit more time to do other stuff — for your Org family!

In our search to revamp Org’s XYZ Program, we had a plethora of applicants and love. That made it competitive. After much talking, texting, emailing and thought, we decided.

The bottom line: on the bright side, you have more time. And we thank you so much for your desire to serve.

But wait don’t go! There are many other ways to be an awesome, active Org’er.

Reach out to your chapter president and board leaders. You could organize a city event. That could be as easy as a drinks social on a Friday night. That could be a professional speaker spotlight with someone you’ve been wanting to meet – for a selfie or possible job. That could be a big fundraiser, like New York’s trivia bowl.

Three basic sentences is all it takes — some variation of this: “Thanks so much for your interest in the X role and the time you spent talking with us. The hiring process has been very competitive and after much consideration, we’ve decided to offer the position to another candidate. But we’re really grateful for your interest and wish you all the best in whatever comes next for you.”

Saying “on the bright side, you have more time” in place of a direct rejection is kind of awful. And yet it sounds like they genuinely thought it would be a nice message, so someone there is very, very tone-deaf.

Anyway, since this was a volunteer position and they’re trying to encourage you to stay involved with the organization, I do think you have room to say something to them about it. I wouldn’t complain about it being a mass mailing — form letters are really normal with rejections — but you could say something like this: “I appreciate you notifying me of your decision. For what it’s worth, I’d strongly prefer a straightforward rejection. This message felt pretty indirect, and even a bit patronizing. I wanted to mention it since I support your work and thought it might be useful feedback to have for future rejection letters. Thanks again for talking with me, and I look forward to staying involved with the organization in other ways.”

Edit: The following day, I loaded another set of passwords which has brought this up to 320M. More on why later on.

Last week I wrote about Passwords Evolved: Authentication Guidance for the Modern Era with the aim of helping those building services which require authentication to move into the modern era of how we think about protecting accounts. In that post, I talked about NIST's Digital Identity Guidelines which were recently released. Of particular interest to me was the section advising organisations to block subscribers from using passwords that have previously appeared in a data breach. Here's the full excerpt from the authentication & lifecycle management doc (CSP is "Credential Service Provider"):

NIST isn't mincing words here; in fact, they're quite clearly saying that you shouldn't be allowing people to use a password that's been breached before, among other types of passwords they shouldn't be using. The reasons for this should be obvious, but just in case you're not fully aware of the risks, have a read of my recent post on password reuse, credential stuffing and another billion records in Have I been pwned (HIBP). As I read NIST's guidance, I realised I was in a unique position to help do something about the problem they're trying to address due to the volume of data I've obtained in running HIBP. Others picked up on this too:

It would be exceptionally helpful if @troyhunt could share anonymized passwords for this purpose.

This blog post introduces a new service I call "Pwned Passwords", gives you guidance on how to use it and ultimately, provides you with 306 million passwords you can download for free and use to protect your own systems. If you're impatient you can go and play with it right now, otherwise let me explain what I've created.

Where Are the Passwords From?

Before I go any further, I've always been pretty clear about not redistributing data from breaches and this doesn't change that one little bit. I'll get into the nuances of that shortly but I wanted to make it crystal clear up front: I'm providing this data in a way that will not disadvantage those who used the passwords I'm providing. As such, they're not in clear text and whilst I appreciate that will mean some use cases aren't feasible, protecting the individuals still using these passwords is the first priority.

I've aggregated these passwords from a variety of different sources, starting with the massive combo lists I wrote about in May. These contain all the sorts of terrible passwords you'd expect from real world examples and you can read an analysis in BinaryEdge's post on how users are choosing their passwords on the internet. I began with the Exploit.in list which has 805,499,391 rows of email address and plain text password pairs. That actually "only" had 593,427,119 unique email addresses in it so what we're seeing here is a heap of email accounts with more than one password. This is the reality of these combo lists: they're often providing multiple different alternate passwords which could be used to break into the one account.

I grabbed the passwords from the Exploit.in list which gave me 197,602,390 unique values. Think about this for a moment: 75% of the passwords in that one data set had been used more than once. This is really important as it starts to put shape around the scale of the problem we're facing.

I moved on to the Anti Public list which contained 562,077,488 rows with 457,962,538 unique email addresses. This gave me a further 96,684,629 unique passwords not already in the Exploit.in data. Looking at it the other way, 83% of the passwords in that set had already been seen before. This is entirely expected: as more data is added, a smaller proportion of the passwords are previously unseen.

From there, I moved through a variety of other data sources adding more and more passwords albeit with a steadily decreasing rate of new ones appearing. I was adding sources with tens of millions of passwords and finding "only" a 6-figure number of new ones. Whilst you could say that the data I'm providing is largely comprised of those two combo lists, you could also say that once you have hundreds of millions of passwords, new data breaches are simply not turning up too much stuff we haven't already seen. (Keep that last point in mind for when I later talk about updates.)

When I was finished, there were 306,259,512 unique Pwned Passwords in the set. Let's talk about how you can now use them.

Edit: And then I added another 13,675,934 the following day to bring the total to 319,935,446 (let's just call it 320 million). Whilst this increase is only 4%, it's important because the initial processing I performed caused only one version of multiple passwords with different cases to be loaded. For example, "p@55w0rd" was loaded but not "P@55w0rd" with a capital "p". I'll explain these concepts in full shortly, but the online system is now properly case sensitive and the downloadable passwords have their first incremental update so you'll see both the initial 306 million plus "Update 1".

Checking Passwords Online

For quite some time now, I've had suggestions along the lines of that earlier tweet saying "you should build a service for websites to check passwords against when customers sign up". I want to explain why this is a bad idea, why I've done it anyway and why that's not how you should use the service.

To the first point, there is now a link on the nav of HIBP titled Passwords. On that page, there's a search box where you can enter a password and it will tell you if it exists on the service. For example, if you test the password "p@55w0rd":

It goes without saying (although I say it anyway on that page), but don't enter a password you currently use into any third-party service like this! I don't explicitly log them and I'm a trustworthy guy but yeah, don't. The point of the web-based service is so that people who have been guilty of using sloppy passwords have a means of independent verification that it's not one they should be using any more. Mind you, someone could actually have an exceptionally good password but if the website stored it in plain text then leaked it, that password has still been "burned".

If a password is not found in the Pwned Passwords set, it'll result in a response like this:

My hope is that an easily accessible online service like this also partially addresses the age-old request I've had to provide email address and password pairs; if the password alone comes back with a hit on this service, that's a very good reason to no longer use it regardless of whose account it originally appeared against.

As well as people checking passwords they themselves may have used, I'm envisaging more tech-savvy people using this service to demonstrate a point to friends, relatives and co-workers: "you see, this password has been breached before, don't use it!" If there's one thing I've learned over the years of running this service, it's that nothing hits home like seeing your own data pwned.

To give people more options, they can also search for a SHA1 hash of the password. Taking the password "p@55w0rd" example from earlier on, a search for "ce0b2b771f7d468c0141918daea704e0e5ad45db" (the hash itself is not case sensitive so "CE0B..." is fine too) yields the same result:
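As a quick sanity check, that digest is easy to reproduce yourself; a couple of lines of Python (hashlib is in the standard library) produces the same hash shown on the Passwords page:

```python
import hashlib

def pwned_hash(password):
    """SHA-1 hex digest of a password, as used by the Pwned Passwords search."""
    return hashlib.sha1(password.encode("utf-8")).hexdigest()

print(pwned_hash("p@55w0rd"))  # ce0b2b771f7d468c0141918daea704e0e5ad45db
```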

The service auto-detects SHA1 hashes in the web UI so if your actual password was a SHA1 hash, that's not going to work for you. This is where you need the API which, like the existing APIs on the service, is fully documented. Using this you can perform a search as follows:

And as for that "but the actual password I want to search for is a SHA1 hash" scenario, you can always call the API as follows:

GET https://haveibeenpwned.com/api/v2/pwnedpassword/ce0b2b771f7d468c0141918daea704e0e5ad45db?originalPasswordIsAHash=true

That will actually return a 404 as nobody used the hash of "p@55w0rd" as their actual password (at least if they did, it hasn't appeared in plain text or was readily crackable). There's no response body when hitting the API, just 404 when the password isn't found and 200 when it is, for example when just searching for "p@55w0rd" via its hash:

GET https://haveibeenpwned.com/api/v2/pwnedpassword/ce0b2b771f7d468c0141918daea704e0e5ad45db

Just like the other APIs on HIBP, the Pwned Passwords service fully supports CORS so if you really did want to integrate it into a web front end somewhere, you can (I suggest sending only a SHA1 hash if you want to do that, at least it's some additional protection). Also like the other APIs, it's rate limited to one request every 1,500ms per IP address. This is heaps for legitimate web-based use cases.
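If you did want to script against the API, a minimal sketch using only Python's standard library might look like this. The endpoint and the 200/404 semantics are exactly as described above, but treat this as illustrative rather than a polished client:

```python
import hashlib
import urllib.error
import urllib.request

API = "https://haveibeenpwned.com/api/v2/pwnedpassword/"

def pwned_url(password):
    # Send the SHA-1 hash rather than the raw password for a little extra protection.
    return API + hashlib.sha1(password.encode("utf-8")).hexdigest()

def is_pwned(password):
    """True on HTTP 200 (password found), False on 404 (not found)."""
    try:
        urllib.request.urlopen(pwned_url(password))
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise
```

If you call this in a loop, remember the one-request-per-1,500ms rate limit mentioned above.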

One quick caveat on the search feature: absence of evidence is not evidence of absence or in other words, just because a password doesn't return a hit doesn't mean it hasn't been previously exposed. For example, the password I used on Dropbox is out there as a bcrypt hash and given it's a randomly generated string out of 1Password, it's simply not getting cracked. I say this because some people will inevitably say "I was in the XX breach and used YY password but your service doesn't say it was pwned". Now you know why!

So that's the online option but again, don't use this for anything important in terms of actual passwords, there's a much better way.

Checking Passwords Offline

The entire collection of 306 million hashed passwords can be directly downloaded from the Pwned Passwords page. It's a single 7-Zip file that's 5.3GB which you can then download and extract into whatever data structure you want to work with (it's 11.9GB once expanded). This allows you to use the passwords in whatever fashion you see fit and I'll give you a few sample scenarios in a moment.

Providing data in this fashion wasn't easy, primarily due to the size of the zip file. Actually, let me rephrase that: it wouldn't be easy if I wanted to do it without spending a heap for other people to download the data! I asked for some advice on this whilst preparing the service:

What's a cheap way of hosting a 6GB file for a heap of people to download? Don't want to torrent and don't mind paying a *little*

There were lots of well-intentioned suggestions which wouldn't fly. For example, Dropbox and OneDrive aren't intended for sharing files with a large audience and they'll pull your ability to do so if you try (believe me). Hosting models which require me to administer a server are also out as that's a bunch of other responsibility I'm unwilling to take on. Lots of people pointed to file hosting models where the storage was cheap but then the bandwidth stung so those were out too. Backblaze's B2 was the most cost effective but at 2c a GB for downloads, I could easily see myself paying north of a thousand dollars over time. Amazon has got a neat Requester Pays feature but as soon as there's a cost - any cost - there's a barrier to entry. In fact, both this model and torrenting it were out because they make access to data harder; many organisations block torrents (for obvious reasons) and I know, for example, that either of these options would have posed insurmountable hurdles at my previous employment. (Actually, I probably would have ended up just paying for it myself due to the procurement challenges of even a single-digit dollar amount, but let's not get me started on that!)

After that tweet, I got several offers of support which was awesome given it wasn't even clear what I was doing! One of those offers came from Cloudflare who I've written about many times before. I'm a big supporter of what they do for all the sorts of reasons mentioned in those posts, plus their offer of support would mean the data would be aggressively cached in their 115 edge nodes around the world. What this means over and above simple hosting of the files itself is that downloads should be super fast for everyone because it's always being served from somewhere very close to them. The source file actually sits in Azure blob storage but regardless of how many times you guys download it, I'll only see a few requests a month at most. So big thanks to Cloudflare for not just making this possible in the first place, but for making it a better experience for everyone.

So that's the data and where to get it, let's now talk about the hashes.

Why Hashes?

Sometimes passwords are personally identifiable. Either they contain personal info (such as kids' names and birthdays) or they can even be email addresses. One of the most common password hints in the Adobe data breach (remember, they leaked hints in clear text), was "email" so you see the challenge here.

Further to that, if I did provide all the passwords in clear text fashion then it opens up the risk of them being used as a source to potentially brute force accounts. Yes, some people will be able to sniff out the sources of a large number of them in plain text if they really want to, but as with my views on protecting data breaches themselves, I don't want to be the channel by which this data is spread further in a way that can do harm. I'm hashing them out of "an abundance of caution" and besides, for the use cases I'm going to talk about shortly, they don't need to be in plain text format anyway.

Each of the 306 million passwords is being provided as a SHA1 hash. What this means is that anyone using this data can take a plain text password from their end (for example during registration, password change or at login), hash it with SHA1 and see if it's previously been leaked. It doesn't matter that SHA1 is a fast algorithm unsuitable for storing your customers' passwords with because that's not what we're doing here, it's simply about ensuring the source passwords are not immediately visible.

Also, just a quick note on the hashes: I processed all the passwords in a SQL Server DB then dumped out the hashes using the HASHBYTES function which represents them in uppercase. If you're comparing these to hashes on your end, make sure you either generate your hashes in uppercase or do a case insensitive comparison.
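For offline use, the simplest approach is to load the extracted file into a set of uppercase hashes and do membership checks against it. A sketch of that, with a tiny in-memory stand-in for the real 11.9GB file and a hypothetical file name:

```python
import hashlib

def load_pwned_hashes(path):
    """Load the extracted dump (one uppercase SHA-1 hash per line) into a set."""
    with open(path) as f:
        return {line.strip() for line in f}

def is_pwned(password, hashes):
    # The dump came out of HASHBYTES in uppercase, so uppercase the digest before comparing.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest in hashes

# Tiny stand-in for load_pwned_hashes("pwned-passwords.txt"):
demo = {hashlib.sha1(b"p@55w0rd").hexdigest().upper()}
```

A plain Python set works fine for experimenting; at production scale you'd want something more memory-friendly backing the same lookup.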

Let's go through a few different use cases of how I'm hoping this data can be employed to do good things.

Use Case 1: Registration

At the point of registration, the user-provided password can be checked against the Pwned Passwords list. If a match is found, there are 2 likely explanations for what's happened:

This is a password the user has previously used and it has been pwned in a data breach. It may even be a very good password strength wise, but it should now be considered "burned".

This is a password someone else has used and it has been pwned in a data breach. This is almost certainly a poor password choice as someone else has independently chosen the same string of characters.

Both scenarios ultimately mean the same thing - the password has previously been used, exposed and is circulating amongst nefarious parties with criminal intent. Let's go back to NIST's advice for a moment in terms of how to handle this:

If the chosen secret is found in the list, the CSP or verifier SHALL advise the subscriber that they need to select a different secret, SHALL provide the reason for rejection, and SHALL require the subscriber to choose a different value.

This is one possible path to take in that you simply reject the registration and ask the user to create another password. Per NIST's guidance though, do explain why the password has been rejected:

This has a usability impact. From a purely "secure all the things" standpoint, you should absolutely take the above approach but there will inevitably be organisations that are reluctant to potentially lose the registration as a result of pushing back. I also suggest having an easily accessible link to explain why the password has been rejected. You and I know what a data breach is but it's a foreign world to many other people so some language the masses can understand (including why it's in their own best interests) is highly recommended.

A middle ground would be to recommend the user create a new password without necessarily enforcing this action. The obvious risk is that the user clicks through the warning and proceeds with using a compromised password, but at least you've given them the opportunity to improve their security profile.

There should not be a "one size fits all" approach here. Consider the risk in the context of what it is you're protecting and whilst that means that yes, there are cases where you certainly shouldn't allow the passwords, there are also cases where the damage would be much less and some more leeway might be granted.
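Tying that together, a registration check along these lines covers both options: hard rejection with an explanation (NIST's SHALLs) or a softer warn-and-allow, chosen according to the risk of what you're protecting. The `strict` switch and the message wording here are my own illustration, not a prescription:

```python
import hashlib

REASON = ("This password has previously appeared in a data breach on another "
          "site, which puts any account using it at risk. Please choose a "
          "different password.")

def check_new_password(password, pwned_hashes, strict=True):
    """Return (accepted, message). strict=True rejects outright; strict=False just warns."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    if digest in pwned_hashes:
        return (not strict, REASON)
    return (True, "")

# Tiny stand-in for the real 306M-hash set:
demo = {hashlib.sha1(b"p@55w0rd").hexdigest().upper()}
```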

Use Case 2: Password Change

Think back to that earlier NIST guidance:

When processing requests to establish and change memorized secrets

Password change is important as it obviously presents another opportunity for users to make good (or bad) decisions. But it's a little different to registration for a couple of reasons. One reason is that it presents an opportunity to do the following:

Here you can do some social good; we know how much passwords are reused and the reality of it is that if they've been using that password on one service, they've probably been using it on others too. Giving people a heads up that even an outgoing password was a poor choice may well help save them from grief on a totally unrelated website.

Clearly, the new password should also be checked against the list and as per the previous use case at registration, you could either block a Pwned Password entirely or ask the user if they're sure they want to proceed. However, in this use case I'd be more inclined to err towards blocking it simply because by now, the user is already a customer. The argument of "let's not do anything to jeopardise signups" is no longer valid and whilst I'd be hesitant to say "always block Pwned Passwords at change", I'd be more inclined to do it here than anywhere else.
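The change-of-password flow can then fold in that heads-up about the outgoing password: warn if the old one is pwned, block if the new one is. A sketch, again with illustrative wording and an in-memory stand-in for the hash set:

```python
import hashlib

def _digest(password):
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

def change_password(old_password, new_password, pwned_hashes):
    """Return (accepted, notices). Warn on a pwned outgoing password; block a pwned new one."""
    notices = []
    if _digest(old_password) in pwned_hashes:
        notices.append("Your previous password has appeared in a data breach; "
                       "if you use it on any other site, change it there too.")
    if _digest(new_password) in pwned_hashes:
        notices.append("That new password has appeared in a data breach and can't be used.")
        return (False, notices)
    return (True, notices)

# Stand-in for the real hash set:
demo = {_digest("p@55w0rd"), _digest("letmein")}
```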

Use Case 3: Login

Many systems will already have large databases of users. Many of them have made poor password choices stretching all the way back to registration, an event that potentially occurred many years ago. Whilst that password remains in use, anyone using it faces a heightened risk of account takeover which means doing something like this makes a lot of sense:

I suggest being very clear that there has not been a security incident on the site they're logging into and that the password was exposed via a totally unrelated site. You wouldn't need to do this every single time someone logs in, just the first time since implementing the feature after which you could flag the account as checked and not do so again. You'd definitely want to make sure this is an expeditious process too; 306 million records in a poorly indexed database with many people simultaneously logging on wouldn't make for a happy user experience! An approach as I've taken with Azure Table Storage would be ideal in that it's very fast (single digit ms), very scalable and very cost effective.
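That one-time check plus a per-account flag might look like the sketch below. The `pwned_checked` flag and the message text are my own illustration of the approach; in production the set lookup would be backed by something fast, such as the Azure Table Storage approach mentioned above:

```python
import hashlib

BREACH_NOTICE = ("There has been no security incident on this site, but the "
                 "password you use here has appeared in a data breach of an "
                 "unrelated service. We strongly recommend changing it.")

def post_login_check(password, account, pwned_hashes):
    """Run once per account after the feature ships; returns a notice or None."""
    if account.get("pwned_checked"):
        return None
    account["pwned_checked"] = True  # flag the account so we don't re-check every login
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return BREACH_NOTICE if digest in pwned_hashes else None

# Stand-in for the real hash set:
demo = {hashlib.sha1(b"p@55w0rd").hexdigest().upper()}
```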

Other Use Cases

I'm sure clever people will come up with other ways of using this data. Perhaps, for example, a Pwned Password is only allowed if multi-step verification is enabled. Maybe there are certain features of the service that are not available if the password has a hit on the pwned list. Or consider whether you could even provide an incentive if the user proactively opts to change a Pwned Password after being prompted, for example the way MailChimp provide an incentive to enable 2FA:

The thing about protecting people in this fashion is that it doesn't just reduce the risk of bad things happening to them, it also reduces the burden on the organisation holding credentials that have already been compromised. Increasingly, services are becoming more and more aware of this value and I'm seeing instances of this every day. This one just last week from Spirit Airlines, for example:

I particularly like the way they mention HIBP :) In fact, this approach was quite well-received and they got themselves a writeup on Gizmodo for their efforts. So you can see the point I'm making: increasingly, organisations are using breached data to do good things whether that be from mining data breaches directly themselves, monitoring for email address exposure (a number of organisations actually use HIBP commercially to do this), or as I hope, downloading these 306 million Pwned Passwords and stopping them from doing any more harm.

If you have other ideas on how to use this data and particularly if you use it in the way I'm hoping organisations do, please leave a comment below. My genuine hope is that this initiative helps drive positive change but given the way it'll be downloaded and used, I'll have no direct visibility into its uses so I'm relying on people to let me know.

Augment Pwned Passwords with Other Approaches

The 306 million passwords in this list obviously represent a really comprehensive set of strings that shouldn't be used as passwords, but the list is not exhaustive, nor can it ever be. For example, the earlier screen cap from NIST also says that you shouldn't allow the following:

Context-specific words, such as the name of the service, the username, and derivatives thereof

If your service is called "Jim's Drone Hire", you shouldn't allow a password of JimsDroneHire. Or J1m5Dr0n3H1r3. Or any other combination people may try. They won't be in the list of Pwned Passwords but you still shouldn't allow them.
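One way to catch those derivatives is to normalise obvious character substitutions before comparing against a blocklist of context-specific terms. A rough sketch follows; the service name and the substitution table are illustrative assumptions, and a real implementation would also handle ambiguous substitutions (a "1" could be an "l" as well as an "i"):

```python
# Context-specific terms for a hypothetical service called "Jim's Drone
# Hire". Build this list from your own service name, the username, and
# derivatives thereof.
SERVICE_TERMS = ["jimsdronehire"]

# Common "leet" substitutions, deliberately simplistic: each character
# maps to one letter, so a thorough check would try multiple expansions.
LEET_MAP = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a",
                          "5": "s", "7": "t", "@": "a", "$": "s"})

def is_context_specific(password: str) -> bool:
    """True if the normalised password contains a forbidden context term."""
    normalized = password.lower().translate(LEET_MAP)
    return any(term in normalized for term in SERVICE_TERMS)

print(is_context_specific("JimsDroneHire"))    # True
print(is_context_specific("J1m5Dr0n3H1r3"))    # True
```

This check is cheap enough to run alongside the Pwned Passwords lookup on every registration and password change.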

You should also still use implementations such as Dropbox's zxcvbn. This includes 47k common passwords and runs client side, so it can give immediate feedback as people are entering a password. More than 99% of those passwords had already appeared in data breaches loaded into the Pwned Passwords list, so if you're using the list I've provided here, the server side validation is largely covered.

Updates, Attribution and Donations

As for updates, when a "significant" volume of new passwords becomes available I'll update the data. I'm not putting a number on what "significant" constitutes (I'll cross that bridge when I get to it), and it will likely be provided as a delta that can be easily added to the existing data set. But the reality is that 306 million passwords already represents a huge portion of the passwords people regularly use, a fact that was made abundantly clear as I built out the data set and found a decreasing number of new passwords not already in the master list.

In terms of attribution, you're free to use the Pwned Passwords without identifying HIBP as the source, simply because I want to remove every possible barrier to use. As I mentioned earlier, I know how corporate environments in particular can put up barriers around the most inane things and I don't want the legal department to stop something that's in everybody's best interests. Of course, I'm happy if you do want to attribute HIBP as the source of the data, but you're under no obligation to do so.

As I mentioned earlier, I've been able to host and provide this data for free courtesy of Cloudflare. There's (almost) no cost to me to host it, none to distribute it and indeed none to acquire it in the first place (I have a policy of never paying for data - the last thing we need is people being financially incentivised to hack websites). The only cost to me has been time and I've already got a great donation page on HIBP if you'd like to contribute towards that by buying me a coffee or some beer. I'm enormously grateful to those who do :)

Summary

There will be those within organisations who won't be too keen on the approaches above due to the friction they present to some users. I've written before about the attitude of people with titles like "Marketing Manager", where there can be a myopic focus on usability whilst serious security incidents remain "a hypothetical risk". If you're wearing the same shoes as I have so many times before, trying to make yourself heard and do what you ultimately believe is in the organisation's best interests, let me give you a couple of suggestions:

Offer to "downsample" the users you apply this to over a trial period. For example, take just 10% of logins, check them against Pwned Passwords, show the prompt I suggest above, then measure the behaviour (i.e. how many people then change their passwords).

Just passively collect data in a "phase one" approach. See how many of the registrations, password changes and logins match the Pwned Passwords list and collect aggregated stats (no, don't log the password itself!). Then use this data to have an evidence-based discussion about the risk to the organisation.

Use this data to do good things. Take it as an opportunity to not just reduce the risk to the service you're involved in running, but also to help make people aware of the broader risks they face due to their password management practices. When someone gets a "hit" on a Pwned Password, help them understand the broader risk profile and what this means to their personal security. One thing that's really hit home while running HIBP is that few things resonate with people like demonstrating that they've been pwned. I can do that with those who come to the site and enter their email address but by providing these 306 million Pwned Passwords, my hope is that with your help, I can distribute that "lightbulb moment" out to a far greater breadth of people.
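That "phase one" suggestion can be sketched in a few lines. The key property is that only aggregate counters are ever stored, never the password itself (the class and method names here are illustrative, not from any particular library):

```python
import hashlib

class PwnedPasswordStats:
    """Passively count how often logins use a breached password.
    Only aggregate counters are kept -- never the password itself."""

    def __init__(self, pwned_hashes: set):
        self.pwned_hashes = pwned_hashes
        self.total = 0
        self.hits = 0

    def record_login(self, password: str) -> None:
        self.total += 1
        digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
        if digest in self.pwned_hashes:
            self.hits += 1

    def summary(self) -> str:
        pct = 100.0 * self.hits / max(self.total, 1)
        return f"{self.hits}/{self.total} logins ({pct:.0f}%) used a pwned password"

# Illustrative run with two known-breached hashes:
stats = PwnedPasswordStats({
    "5BAA61E4C9B93F3F0682250B6CF8331B7EE68FD8",  # sha1("password")
    "7C4A8D09CA3762AF61E59520943DC26494F8941B",  # sha1("123456")
})
for pw in ["password", "123456", "correct horse battery staple", "Tr0ub4dor&3"]:
    stats.record_login(pw)
print(stats.summary())  # 2/4 logins (50%) used a pwned password
```

Numbers like these are exactly the evidence that turns "a hypothetical risk" into a concrete one when you take the discussion to management.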

This API is a big mistake. Sending unsalted password hashes across the internet? And that's if you don't take the lazy way and just send plaintext passwords. And sure, it's supposed to be HTTPS encrypted, but it'll transparently work if you start with an http: URL, if you really, REALLY want to send your users' hashes across the internet for all to see.

In the comments on a post from last week, several commenters shared stories about coworkers misusing words in hilarious ways. Here are some examples (and, uh, I just realized all of these are adult in nature, so consider yourself warned):

“A coworker of a friend used the word ‘vajazzled’ about an upcoming meeting, thinking she was saying jazzed or something. People told her what it meant immediately, and she was immediately mortified but 100% glad people said something quickly or she would have continued to use it.”

“My dad worked on Obama’s campaign. He legit thought that ‘teabagger’ was the correct term used for someone in the teaparty. He didn’t know it was offensive. He didn’t know what teabagging was. I had to explain teabagging to my elderly father.” (Okay, this isn’t a coworker, but it must be included.)

“When working for a video game company I had to explain to a very kind older sounding woman why she couldn’t use an abbreviation of her name as a character name. It was something similar to Camila Townsend, and she had shortened it to ‘Cameltoe.’ The sheer level of awkward of having to explain that one.”

I know there are more stories like this out there, and I know the world will benefit from hearing them. Please share in the comments, and entertain us all!

I’m having a really hectic week this week, with preparing for a move (actually, most of it is done now, but I write posts for the week on Monday). So some posts this week will be reprints from years ago. This one is from October 2014.

* * *

I get a lot of letters that ask, essentially, can my employer really do this?

I work in an industry where I sometimes work in the evenings after my standard 8 hours. I don’t mind at all, because it’s good money. Now, to avoid paying overtime, my employer is telling me that I have to shift my hours. In other words, I have to come in late to work, then work into the evening to equal 8 hours with no overtime. Can they do this? This is not what I signed up for.

Here’s another:

I am an hourly worker for a company with 7 branches. My position is being terminated because customer service in the 7 branches is being centralized to the home office. I was copied on an email to my branch manager saying that I am going to be required to travel to St. Pete to train the CSRs there (my replacements). The company is paying my hotel, meals and gas. Can I be fired if I refuse to go? Especially if I have a doctor appointment scheduled during that time period?

The answer to both of these letters — and so many others — is: Yes, your employer can do that, but they might end up handling it differently if you have a calm conversation with them explaining your concerns. Maybe not, of course, but many, many employers in many, many situations do respond to that.

So the relevant question in situations like these isn’t just “Is this legally allowed?” but also “Is there a way to address this that could produce a change?”

To be clear, laws matter. It’s important to know if your employer is doing something prohibited by law. But the majority of the time I hear this question, (a) what the employer is doing is perfectly legal, and (b) that’s not the starting place that’s going to get you the best results anyway. When you’re upset about something your employer is doing, it often makes sense to start by having a calm conversation with your manager where you explain what you’re concerned about and why.

It sounds like this:

“I wanted to talk to you about your request that I do X. I understand why you’re asking — it’s because Y. But to be honest, Z was one of the reasons I took the job — it’s important to me, and X would be a real drawback for me. Is there any chance of revisiting the plan?”

Or:

“Doing X would cause me real hardship because of Y. Are there any other options?”

Or:

“I understand why you want me to do X, but I’m concerned about Y. Could we take a look at other ways to approach it?”

In many cases, that’s all it will take to get a different answer. Of course, other times it won’t work — but that conversation is where you should start, unless your employer has already given you compelling reasons to skip that step.

(And for cases where what your employer is doing or proposing doing is actually illegal, there’s information here and here.)