We'll reply soon with more details, but this isn't a new security issue; it basically confirms that once the system is compromised, the game is over. You still need actual access to the system's memory to read this data. So using a password manager isn't enough: you still have to practice good security habits, locking down your system, keeping it up to date, and more.

I am Jeffrey Goldberg, the Chief Defender Against the Dark Arts at 1Password. I thank Geoffrey Fowler for reaching out to us and for discussing the need for using password managers. And it is definitely correct to explore and probe the security of password managers themselves.

As I attempted to explain in my "lengthy emails", this is a frequently discussed issue. (It seems that my counterparts at KeePass (correctly) called it "old news." Because this is something that has been publicly discussed many times before, we did not seek to enforce the bug bounty non-disclosure rules in this case.) In that exchange, I attempted to explain why any plausible cure may be worse than the disease. Fixing the particular problem introduces new security risks, and on balance we have chosen to stick with the security afforded by high-level memory management, even if it means that we cannot clear memory instantly.

Keep in mind that the realistic threat from this issue is limited. An attacker who is in a position to exploit this information in memory is already in a very powerful position. No password manager (or anything else) can promise to run securely on a compromised computer. But still, other things being equal, it would be nice to clear secrets from memory as soon as they are no longer needed. The difficulty is that "other things" aren't equal. As I mentioned, there are security gains in programming in a way that has the side effect of limiting our ability to clear memory instantly.

Long term, we may not need to make such a tradeoff. But given the tools at our disposal, we have had to make a decision, and it is one that I stand by. We are not going to return to the bad old days of corrupted program memory.

Security questions rarely have simple responses (thus leading to my "lengthy email"). And security designs often involve security/security tradeoffs that require reasoned consideration of risks. I hope that readers will see that we do engage in that process.

Is the cure worse than the disease?

In my comment on the Washington Post, which @mikeT quoted above, I said a "cure may be worse than the disease."

First (though I will expand on this later), our inability to instruct the computer running 1Password to "go to the memory address where this string is stored and zero it out" is a side effect of using memory safe development practices. Our choice to use tools that dramatically improve memory safety means that we forego the freedom to tell the system to zero out memory holding a string.

On memory safety

As implied above, we find the inability to clear things from memory at exactly the time that we wish is a small price to pay for improved memory safety. So what is memory safety? The simple (and not very informative) answer is that memory safety means being protected from doing memory-unsafe things. Heartbleed (remember that?) was a bug due to an unsafe memory action. And it is far from the only one. Any time you saw a "General Protection Fault" on older Windows systems, it was because a program running on the system had a memory bug. These sorts of bugs are extremely easy to make.
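These bugs are easy to illustrate. The sketch below (hypothetical Rust, with invented names and data; Rust is used only because it comes up later in this thread) shows the length-confusion pattern behind Heartbleed, and how a memory safe language turns a silent overread into a recoverable error:

```rust
// A sketch of the bug class behind Heartbleed: a reply length taken
// from the request rather than from the actual data. In C, a
// `memcpy(reply, data, claimed_len)` would silently read past the
// buffer and leak whatever memory sits next to it.
fn echo(data: &[u8], claimed_len: usize) -> Option<&[u8]> {
    // `get` refuses to hand back bytes beyond the buffer's bounds,
    // so a lying length yields None instead of adjacent memory.
    data.get(..claimed_len)
}

fn main() {
    let payload = b"bird";
    // An honest request echoes the payload back.
    assert_eq!(echo(payload, 4), Some(&payload[..]));
    // A malicious "claimed" length of 64 KiB is simply rejected.
    assert_eq!(echo(payload, 65536), None);
}
```

In an unsafe language the same mistake compiles, runs, and leaks; here it is caught at the point of access.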

These types of bugs (and there are subtypes) are both easy to make and lead to security vulnerabilities. From the same talk, here is a breakdown of the types of memory error that resulted in vulnerabilities.

Memory safe languages

The response has been to introduce programming languages that make it harder for programmers to do unsafe things. Instead of having fairly low-level access to allocating, releasing, and manipulating memory, the safe programming languages handle that for you. But as a consequence, you can't just tell the program to "overwrite the contents of the memory that this string is stored at, and then release it." That kind of thing is easy to do in C, but everything that I loved about C in my youth is stuff that I hate about C from a security perspective today.

Now there are different ways that programming languages (and their runtimes) can do the memory management for you. There is garbage collection (C# and Go use garbage collectors), there is automatic reference counting (Objective-C and Swift use ARC), and there are subtle differences in how each works. We develop 1Password in modern languages that offer significant amounts of memory safety. (Languages still differ in how strict they are in enforcing memory safety and other safety issues, but for the moment the differences among them don't matter.)

I'd love to say that 1Password never crashes. It still can. But using a memory safe language means that those sorts of bugs are far less common and that those bugs can't be turned into security exploits the way that they could if programming in an unsafe language.

Immutable strings

These languages also introduce memory safety features beyond garbage collection and reference counting. They make certain sorts of data "immutable". Immutable data can't be accidentally (or deliberately) overwritten while the program is running. This offers a great deal of protection from (security) bugs. So unless you need to change something that you put into the computer's memory during the running of the program, your data will be put in a chunk of memory that the program will consider immutable. This is not only a safety feature; it also allows the program to build and run more quickly. Only if you need to change the data in place (instead of just making a modified copy of it) do you need to say that the memory for it should be mutable.
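To make the immutable/mutable distinction concrete, here is a small Rust sketch (illustrative only, not 1Password code): data is immutable by default, and only a copy you have explicitly opted in to mutating can be overwritten in place.

```rust
fn main() {
    // Immutable by default: the compiler rejects any attempt to
    // overwrite this data in place.
    let master_password = String::from("correct horse battery staple");
    // master_password.clear(); // compile error: not declared `mut`

    // Only after explicitly opting in to mutability can the bytes be
    // overwritten, e.g. scrubbed once they are no longer needed.
    let mut scratch = master_password.clone().into_bytes();
    for b in scratch.iter_mut() {
        *b = 0;
    }
    assert!(scratch.iter().all(|&b| b == 0));
}
```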

So the obvious question (well, obvious to some) is: why don't we put the Master Password into mutable memory? That way we could replace it with zeros or random data once we no longer need it.

The operating system's Software Development Kit (SDK) offers you tools to get input from what someone might type or paste into a field. And the SDKs even offer features to say that "the data entered into this place should be treated like a password". This buys a great deal of security. For one thing, it makes it harder for other processes to read what is typed in. Suppose that you use something like TextExpander to automatically expand abbreviations. For example, I have it set up so that I just need to type wpurl to expand to the URL for our security white paper: https://1password.com/teams/white-paper/ That saves me typing. But for a tool like TextExpander to do what it does, it needs to read everything I type anywhere on my computer. The one place that TextExpander can't read what I type is when I type into a password field. And this is because the SDK knows to handle those fields differently and takes care of that (and other security things).

But the SDK also just gives us an immutable string from those fields. It provides us (and you) with substantial safety in other respects, but it does mean that we cannot zero out the memory that the Master Password is stored in once we don't need it any longer. We can tell the system that we no longer need that data (well, actually we can do things that will let the garbage collector or reference counter figure out that we no longer need it), but we can't force the data to be removed from memory.
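For contrast, when a program does control a mutable buffer, reliably zeroing it looks something like the following Rust sketch of the general technique (an illustration, not 1Password's code). The volatile writes stop the optimizer from deleting stores to memory that is never read again; this is exactly the control that an SDK-provided immutable string denies us.

```rust
use std::ptr;
use std::sync::atomic::{compiler_fence, Ordering};

/// Overwrite a mutable buffer with zeros in a way the optimizer
/// cannot elide. A plain loop of ordinary stores to memory that is
/// never read afterwards may be removed as a "dead store" by the
/// compiler; volatile writes plus a compiler fence keep them in place.
fn scrub(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        // write_volatile marks this store as having an observable
        // effect, so it must actually happen.
        unsafe { ptr::write_volatile(b, 0) };
    }
    // Prevent the compiler from reordering later code before the wipe.
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut secret = *b"hunter2";
    scrub(&mut secret);
    assert_eq!(secret, [0u8; 7]);
}
```

None of this helps with memory we never owned mutably in the first place, which is the situation described above.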

Cures?

So one "cure" for the reported problem would be to not use the tools provided by the SDK for getting user input. We would have to write our own and try to build in all of the safety features and protections the SDK offers us. Doing so would add enormous complexity to the lowest levels of 1Password's code, be extremely prone to error, and would not benefit from improvements to the SDK over time. It is really untenable. It would be like writing a solid chunk of the operating system ourselves.

The other "cure" would be to use a language that doesn't offer the memory safety that we would like. Again, that would just open up the prospect of remarkably easy-to-make security vulnerabilities. The security gain from removing the Master Password from memory immediately after it is no longer needed is small, but it would come at a large cost for security.

Good response, and nice to see you responding to this, as I'm sure it's caused quite a bit of concern. What do you think about the seeming regression from 1Password 4 to 1Password 7, where all passwords are accessible rather than just the one password that was used?

Some of the changes between 1Password 4 and 1Password 7 will have to do with changes in the development language. C# uses the .NET garbage collector, while Delphi used automatic reference counting for more types of objects.

I should say that my answer is fairly speculative. I (or someone) would need to dive into the 1Password 4 code to take a look at precisely what memory management techniques were used; and to be honest, I would rather look forward than back at code that was written so long ago.

One thing that I said in my comment on the Washington Post site was

Long term, we may not need to make such a tradeoff.

A language like Rust may provide us with the best of both worlds. It offers more memory safety than the other languages we've been using, while it may also allow us the fine control to zero out memory when we say so. Rust uses something that is like ARC, but it restricts which "owner" of some data can mutate it. This makes it safer to allow the data at some address to be changed. So you have some of the kinds of control over the contents of memory that you have with C, but without different things competing for the same memory locations.
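The ownership rules being described can be sketched in a few lines (illustrative Rust, not from any 1Password codebase): any number of readers, or exactly one writer, never both at once.

```rust
// A reader/writer sketch of Rust's borrow rules: many shared
// (read-only) borrows, or exactly one mutable borrow, never both.
fn scrub(buf: &mut [u8]) {
    // An exclusive (&mut) borrow is required to overwrite in place.
    for b in buf.iter_mut() {
        *b = 0;
    }
}

fn main() {
    let mut secret = b"s3cret".to_vec();

    let readers = (&secret, &secret); // many shared borrows: fine
    // scrub(&mut secret); // compile error: cannot borrow `secret`
    //                     // as mutable while `readers` is still used
    assert_eq!(readers.0.len(), 6);

    // Once the shared borrows are no longer used, one exclusive
    // borrow may mutate (here, zero) the data.
    scrub(&mut secret);
    assert_eq!(secret, vec![0u8; 6]);
}
```

Because exactly one owner is ever responsible for a given allocation, overwriting it in place cannot pull the rug out from under some other part of the program.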

I'm not making any promises, but our Windows team has been experimenting a little bit with Rust. One doesn't just up and rewrite everything any time a better language comes along, though. Still, it is nice to see that some of our developers are playing with Rust to get a better sense of what it might be able to do.

On the nature of the threat

As I've been arguing, the actual threat from this long-known issue is limited, and "fixing it" would be worse than the original problem. But I haven't said much about the actual threat from the reported problem. What I said was

Keep in mind that the realistic threat from this issue is limited. An attacker who is in a position to exploit this information in memory is already in a very powerful position. No password manager (or anything else) can promise to run securely on a compromised computer. But still, other things being equal, it would be nice to clear secrets from memory as soon as they are no longer needed.

Everyone recognizes that a password manager will need to know secrets when it is unlocked. The legitimate concern is what is in the computer's memory after you lock 1Password. And so, other things being equal, it would be nice to clear secrets from memory as soon as they are no longer needed.

Clearing that memory as soon as 1Password locks would defend you against an attacker who

Is able to read 1Password process memory when 1Password is locked.

Is not able to read 1Password memory when 1Password is unlocked.

That is an odd combination of capabilities. An attacker who can read 1Password's memory when 1Password is unlocked doesn't need things to be in memory when 1Password is locked. So clearing things from memory when locking is defending against a very narrow range of attackers.

Still, other things being equal it would be better for what the user sees as "locked" to be locked in a stronger sense. There are good reasons to want to clear secrets from memory as soon as possible, just out of good practice and hygiene, even if the threat it defends against is fairly narrow.

DMA attacks

There is, however, one class of attack that does fit that kind of an attacker. When you close or lock, say, a laptop, you expect it to be relatively safe, but there was a kind of attack, based on Direct Memory Access, a few years ago in which an attacker could just plug in a device to a closed computer and read all of its memory. Here is an excerpt from one of my "lengthy emails" to Geoffrey Fowler when he first asked about this.

In many cases, an attacker who has the ability to read process memory when 1Password is locked will also be in a position to read process memory when 1Password is unlocked. There is one exception: Attacks based on Direct Memory Access. DMA attacks, at one time, would allow an attacker to simply plug in a device to a computer and extract all of its memory. Thus someone could, say, close their laptop thinking everything is locked, while an attacker could just plug something into the laptop and get all of its memory. The good news is that modern operating systems have made such attacks much harder. They deactivate DMA when the computer is locked.

So back in the days when systems were more vulnerable to DMA attacks, we were more concerned about clearing secrets from memory when 1Password locks. DMA attacks were a greater concern for Macs than for machines running Windows, and so that is where we focused our effort. 1Password for Mac is better about releasing secrets when locked than 1Password for Windows. But we would like to continue to make progress with both.

So when we look at this issue, I do worry about things like DMA-based attacks. It's the exception that illustrates that even a narrow range of attackers isn't necessarily implausible. But the good news is that operating systems have mitigations for DMA attacks. Quite simply, when your desktop is "locked" and you have a password for your account on your computer, DMA is turned off by the operating system. The details differ between operating systems, but on the whole, the threat of DMA-based attacks has been greatly reduced over the past several years. (Of course you all are keeping your operating systems up to date, right?) And so the need to clear secrets from memory sooner rather than later is far less pressing, and we are back to a very unlikely sort of attacker.

So again, an attacker who could exploit this already has the capability of reading the process memory on your machine. That is a very powerful attacker who has gained substantial control over your machine. There is a saying that once your computer is compromised, it is no longer your computer.

Here is something that I wrote five years ago, "Watch what you type", about 1Password defending (or not) against key loggers. All of the details have changed, but the general point remains the same: when there are reasonable steps we can take to keep you safe if you run 1Password on a compromised machine, we will take them. But as we can't really protect you in that situation, we aren't going to go to extraordinary lengths to defend against what nobody can defend against. It begins with

I have said it before, and I’ll say it again: 1Password and Knox cannot provide complete protection against a compromised operating system. There is a saying (for which I cannot find a source), “Once an attacker has broken into your computer [and obtained root privileges], it is no longer your computer.” So in principle, there is nothing that 1Password can do to protect you if your computer is compromised. In practice, however, there are steps we can and do take which dramatically reduce the chances that some malware running on your computer, particularly keystroke loggers, could capture your Master Password

Again, that is five years old. The details all differ, but the general approach to what we can and can't do to protect you if your own machine is compromised remains the same.

@jpgoldberg Reading the Washington Post (WaPo) article left me with a sense of foreboding. But your answers on memory management and programming languages just left me ashamed that I read so much gloom-and-doom into the WaPo article -- I'm not an expert software developer by any stretch of the imagination but I should have been able to anticipate your answers / examples above and not fallen for what is more or less a sensationalist article.

I do have one rather frivolous comment to make: could you stop saying "other things being equal" so often? You use that phrase at least four times in your response. It's so distracting! I find that your statements are equally valid / meaningful if you leave it out (e.g. "But still, [snip], it would be nice to clear secrets from memory as soon as they are no longer needed.")

Thanks to @jpgoldberg for his comments on possibly using the Rust language. I've been looking at this language for some time now and I decided to use it for one project as it's the best language presently available for creating memory safe software. The problem I've run into is that software engineers who are used to the ways of C and C++ always start by fighting Rust's borrow checker. There is a non-trivial learning curve to using Rust well, but I hope that this will not dissuade AgileBits from using it - the permanent advantages of improved memory security far outweigh the temporary difficulties learning it.

John, in principle we have the same issue on Mac, but in practice these things play out differently. It appears that these secrets do get cleared from memory faster on the Mac. Some of this can be attributed to specific design choices, some to automatic reference counting versus garbage collection, and some to the operating system environment.

We were more concerned about this on Mac when DMA attacks were a thing. But fortunately that environment made it easier to make progress on this. Still, some of the fundamental issues remain. We need to use Apple's SecureInput for Master Password input, which gives us an immutable string, but for some reason, when we say we no longer need that data, it gets cleared from memory relatively quickly. That is not something we can guarantee in any way, however.

I have since been informed that SecureInput on Apple devices does everything we want. Sure, it gives us an immutable string, but that string is actually zeroed in memory as soon as it is freed, and it is written to a page in memory that is never written to swap.

SGX, had it panned out for us, would not have addressed this issue. If we need to get a string from user input, it can't reside solely within the Secure Enclave, as it comes from user input as an immutable string and must be handled in system memory before being passed to the enclave. Likewise, if we need to display a password or fill it into a browser, it needs to have some existence outside of the enclave.

So the value of SGX would have been for cryptographic keys. That is, we would put keys in there (yes, they would have some short-term existence outside of the enclave, but they would not have to be immutable, and they can do their job from within the enclave).

The reasons we didn’t continue with SGX is that at the time we just weren’t getting enough value for the effort and the build complexity it added. I suspect that we will revisit it or similar technology at some point.

@ftwilson asked (elsewhere) about iOS. To some degree or other, the general problem exists on all platforms. But as I said above when evaluating the actual threat, we need to consider that for this to enable an attack the attacker must

Be in a position to read 1Password process memory when 1Password is locked

Not be in a position to read 1Password process memory when 1Password is unlocked.

Number 1 requires that the attacker has already seriously compromised the device. Number 2 means that the attacker (who has seriously compromised your device) only has that control at some oddly limited times.

Number 2 is a subtle point, but if we don't include that condition, then the fact that some secrets remain in memory after 1Password locks is irrelevant. We only need to care about secrets in memory after 1Password is locked if both 1 and 2 hold.

Because mobile devices have a better security architecture than desktops in general, it is going to be harder for an attacker to read memory at all on a mobile device than on a desktop. And an attacker who can do that can almost certainly break the system in other ways. So this issue poses only a small threat on desktops and a far smaller one on mobile devices.

@DMeans: Confusingly (or misleadingly, depending on your perspective), those seem to be cribbing from this recent Washington Post article, which isn't news, and doesn't apply specifically to 1Password, or to password managers in general, but rather most modern software using high-level languages. The OS handles memory management in these cases. It's worth pointing out that malware which is already in your system and has the ability to read memory could just as easily collect data as you access it, without having to access memory at all, so it's a bit of a red herring.

Just as you've missed the point in multifactor authentication, you're missing the point here.

And besides that, it is absolutely feasible to scrub memory.

I've been practicing information security for over 25 years, in implementation, software development, and as a Security Architect. These days, I'm doing AppSec in the Financial Technology sector.

In the FinTech world, it is a regulatory requirement to scrub and clean memory after secrets are accessed and used (read passwords and keys). Not only do FinTech applications routinely comply and conform to that requirement (at least, mine do), we are tested and certified by qualified 3rd parties. Not only do we scrub memory in Windows, Linux and Unix, we even have Delphi and Mobile apps that do it.

Applications that do not comply with that standard are routinely hacked (which is why your credit/debit card numbers are for sale on the Darkweb).

Step up your game. Attend some BSides conferences, or perhaps a BlackHat/DEFCON conference or two, and learn what is actually feasible in the world of Dark Arts.