On the surface, the inadvisability of security through obscurity is directly at odds with the concept of shared secrets (i.e. "passwords"). Which is to say: if secrecy around passwords is valuable, then by extension surely it must be of some value to keep the algorithm that uses the password secret as well. Many (arguably misguided) organizations may even go as far as to say that the system is the password.

But to what degree can the secrecy of an algorithm be relied upon? Or perhaps more appropriately, in what way is secrecy surrounding an algorithm doomed to fail while secrecy surrounding passwords is not?

If, on the other hand, secrecy of an algorithm is desirable, then how important is it? To what lengths should a developer reasonably go to keep his crypto secret?

EDIT
To be clear, this isn't about creating a new, untested algorithm, but rather keeping secret the details of which algorithm you choose. For example, the technique Windows uses to hash passwords is to apply known hashing algorithms in a sequence which is not published and which appears to change between different versions of Windows.
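To make the idea concrete, here is a minimal sketch of that technique in Python: chaining standard hash functions in some fixed but unpublished order. The particular sequence below is invented for illustration; the actual functions and ordering Windows uses are not public.

```python
import hashlib

def chained_hash(password: bytes,
                 sequence=("sha1", "sha256", "sha512")) -> bytes:
    """Feed the password through a fixed sequence of well-known
    hash functions, each applied to the previous digest.
    The sequence itself is the 'obscure' part of the scheme."""
    digest = password
    for name in sequence:
        digest = hashlib.new(name, digest).digest()
    return digest
```

Each individual hash is a published, vetted algorithm; only the composition order is kept quiet, which is exactly the kind of secrecy the question is asking about.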

A password can be anything: random, with no rules, hence unpredictable. An algorithm, on the other hand, ultimately has to be understood by a CPU whose architecture and instruction set are probably public. And whatever such a CPU can understand can surely be translated back into an algorithm.
– user117, Nov 24 '12 at 15:34


Watch the new James Bond. Security by obscurity is a good thing!11 (And apparently it entails that your data changes when somebody tries to look at it.)
– Konrad Rudolph, Nov 25 '12 at 20:15

@KonradRudolph I don't always use street names to encrypt my polymorphic obscurity-encrypted data, but when I do, I choose "Grandborough".
– Polynomial, Nov 26 '12 at 11:17

Changing the internals of the Windows password hashing scheme (if that is done; I don't know) also prevents software vendors from relying on a particular scheme being used. That might actually be quite worthwhile in itself, even if the exact algorithm were public. Note that anything officially documented is generally considered a contract from the vendor who documents it, and thus cannot easily be changed in the future if there is ever a problem with that design.
– Michael Kjörling, Sep 2 '14 at 9:00

5 Answers

Much of the work on passwords and keys is related to controlling where they are stored and copied.

A password is stored in the mind of a human user. It is entered on a keyboard (or equivalent) and goes through the registers of a CPU and the RAM of the computer while it is processed. Unless some awful blunder is made, the password never reaches a permanent storage area like a hard disk.

An algorithm exists as source code somewhere: on a developer's machine, in a source versioning system, in backups. There are design documents, which have been shown to various people (e.g. those who decide whether to fund the development of the system or not) and are often neglectfully left on an anonymous shelf or in some layer of crust on a typical desktop. More importantly, the algorithm also exists as an executable file on the deployed system itself; binary is not as readable as source code, but reverse engineering works nevertheless.

Therefore we cannot reasonably consider that the algorithm is secret, or at least as secret as the password (or the key).

Really, cryptographic methods were split one century ago into the algorithm and the key precisely because of that: in a functioning system, part of the method necessarily leaks traces everywhere. Having a key means concentrating the secrecy in the other half, the part which we can keep secret.

"Security through obscurity" is an expression which uses the term obscurity, not secrecy. Cryptography is about achieving security through secrecy. That's the whole difference: a password can be secret; an algorithm is, at best, obscure. Obscurity is dispelled as soon as some smart guy thinks about bringing a metaphorical lantern. Secrecy is more like a steel safe: to break through it, you need more powerful tools.

Smart guy Auguste Kerckhoffs already wrote it more than a century ago. Despite the invention of the computer and all of today's technology, his findings still apply. It took a while for practitioners of cryptography to learn that lesson; 60 years later, the Germans were still putting a great deal of trust in the secrecy of the Enigma machine's design. Note that when the Germans put the 4-rotor Navy Enigma into use, Allied cryptographers were inconvenienced (routine cracking stopped for a few months) but were not totally baffled, because documents captured the preceding year alluded to the development of the new version with a fourth "reflector" rotor. There you have it: algorithm secrecy could not be achieved in practice.

An additional twist is that algorithm obscurity can harm security. What I explain above is that obscurity cannot be trusted for security: it might increase security, but not by much (and you cannot really know "how much"). It turns out that it can also decrease security. The problem is the following: it is very hard to make a secure cryptographic algorithm. The only known method is to publish the algorithm and wait for the collective wisdom of cryptographers around the world to gnaw at it and reach a conclusion which can be expressed as either "can be broken that way" or "apparently robust". An algorithm is declared "good" only if it resisted the onslaught of dozens or hundreds of competent cryptographers for at least three or four years.

The Internet, academic procrastination and human hubris are such that, with the right communications campaign, you can get these few hundred cryptographers to do that hard assessment job for free -- provided that you make the algorithm public (and "attractive" in some way). If you want to keep the algorithm obscure, then you cannot benefit from such free consulting. Instead, you have to pay. Twenty good cryptographers for, say, two years of effort: we are talking about millions of dollars here. Nobody does that; it is far too expensive. Correspondingly, obscure algorithms are invariably much less stress-tested than public algorithms, and therefore less secure.

(Note the fine print: security is not only about not being broken, but also about having some a priori knowledge that breaches won't happen. I want to be able to sleep at night.)

Summary:

You should not keep your algorithm secret.

You do not know how secret your algorithm is.

You cannot keep your algorithm secret.

But you can and must keep your password secret, and you can know "how much" secret it is (that's all the "entropy" business).

Security through Obscurity being in opposition to Security through Secrecy is probably the single most insightfully succinct thing I've heard all month.
– tylerl, Nov 24 '12 at 18:11

+1 great answer. A thought: in your Enigma example, the Allies stopped cracking messages for a few months after the new machine was introduced and until its design was uncovered, so what would have happened if the design had not been obscured? That would possibly have been an advantage for the Allies, and time mattered. I would argue that since the Nazis were seemingly unable to build a machine with stronger keys, they used the best they had, which was Enigma, and that the short time bonus was still valuable to them. So not publishing the new Enigma design was probably a good idea.
– Felix Dombek, Nov 26 '12 at 1:08


@FelixDombek If the Enigma had been designed from the ground up to be a published design, and for its security to lie in the key, the efforts to crack it would have had to focus on the key rather than the machine. In that case, even having access to Enigmas would have removed the need to build any, but it would not fundamentally have changed much: you'd still have needed to break the key, and with a good algorithm, that basically means trying every possible key. Make the key long enough and that simply is not feasible. Keys are also in some ways easier to safeguard.
– Michael Kjörling, Sep 2 '14 at 9:07


Now, designing something such that it can be made public without severely impacting security, and actually making it public, are two different things. Look at today: I have little doubt that the vast majority of the security in military encryption rests in the key (and key distribution), but that doesn't mean that every military encryption algorithm is made public as a matter of course, only that if that were to happen, security would not be greatly adversely affected. Particularly in such situations, there is no need to make things easier for your adversary than you have to.
– Michael Kjörling, Sep 2 '14 at 9:09

There are two important differences between algorithms vs. passwords/keys:

You can (and should) change cryptographic keys routinely -- or when a compromise is suspected. This mitigates loss of secrecy. Similarly, you should change your password whenever you have reason to suspect that the password may be compromised. In contrast, it is rarely feasible to change the encryption algorithm that is used in a timely fashion, so you have to be able to survive (without loss of security) situations where the algorithm becomes known.

A copy of the algorithm is included in every piece of software that can encrypt or decrypt. This means that all it takes is one user of the software to reverse-engineer the algorithm and publish it on the Internet, and the algorithm isn't secret any longer. In other words, you can't reasonably expect to keep the algorithm a secret. In contrast, your password or your encryption key is stored only on your machine -- not on the machine of millions of other users.

These reasons were articulated by Auguste Kerckhoffs back in 1883 -- yes, over a century ago. (It seems those folks were no dummies!) See Wikipedia on Kerckhoffs's principle, especially rule 2 ("[The algorithm] must not be required to be secret, and it must be able to fall into the hands of the enemy without inconvenience").
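The first difference -- routine key rotation -- can be sketched concretely. The class below is a hypothetical illustration (all names invented here): each MAC tag is stored alongside the version of the key that produced it, so old tags remain verifiable after a rotation while new data automatically uses the latest key.

```python
import hmac
import hashlib
import secrets

class KeyRing:
    """Illustrative key-rotation sketch: keys are versioned so that
    rotating to a new key does not invalidate data protected by
    older keys."""

    def __init__(self):
        self.keys = {1: secrets.token_bytes(32)}
        self.current = 1

    def rotate(self):
        # Generate a fresh key; keep old ones only for verification.
        self.current += 1
        self.keys[self.current] = secrets.token_bytes(32)

    def sign(self, msg: bytes):
        # Tag new data with the current key, recording its version.
        tag = hmac.new(self.keys[self.current], msg, hashlib.sha256).digest()
        return self.current, tag

    def verify(self, version: int, msg: bytes, tag: bytes) -> bool:
        key = self.keys.get(version)
        return key is not None and hmac.compare_digest(
            hmac.new(key, msg, hashlib.sha256).digest(), tag)
```

Swapping a key is a one-line operation; nothing comparable exists for swapping out the algorithm baked into every deployed copy of the software.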

Changing passwords routinely is at odds with an important feature, i.e. the ability to remember passwords. It can harm security. For instance, at my work place, they force me to change my passwords every 42 days, and this forces me to write them down on pieces of paper (or, alternatively, to use "witty" passwords which can be remembered easily, but are also guessable).
– Thomas Pornin, Nov 24 '12 at 13:12

One thing no one here has mentioned yet is that presupposing the algorithm is public knowledge makes it much easier to reason about whether a security scheme is truly secure. By assuming that your attacker knows the scheme you used, it becomes easier to prove that a scheme is insecure -- and therefore to know to avoid using it.

Securing an algorithm is much more difficult than securing a key, for the various reasons mentioned in the other posts; therefore, if someone can get the key, it is very likely they would also be able to obtain the algorithm, but the reverse is much less likely.

Additionally, it's a general security principle that the less the scheme relies on to maintain its security, the more secure the scheme is. Ultimately, this boils down to: on the most secure schemes, only the key is required to be kept secret.

This does not necessarily mean you should publish the algorithm you use. Obscurity can be used as a layer of defence (although, as mentioned, publication would subject the scheme to more public scrutiny), but obscurity should never be the only layer of defence.

Keeping your crypto secret is not feasible. You need to have it out there, if only so people can test whether it actually works. I don't see any reason to keep crypto secret. You can put a lock on your door: everyone can see the lock, everyone knows how the lock works, and everyone knows that without the key it is useless. The password is your key.

Besides all that, there are a few golden rules, like "don't reinvent the wheel" and "don't try to create your own crypto". We have crypto standards because they can be tested at massive scale. Furthermore, refer to Kerckhoffs's principle.

There are only a handful of good algorithms, while there are perhaps 10^40 good passwords of 20 characters or fewer. So it is quite easy for an attacker to guess your algorithm or enumerate all likely algorithms, while it is impossible for them to do the same with a well-chosen password. Imagine the situation if users had to pick their password from a list of ten possibilities, and it was the same list for everyone -- that's basically the position for algorithms.
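The arithmetic behind that estimate is easy to check -- counting every string of printable ASCII (95 symbols) of length 1 through 20:

```python
# Number of possible passwords of up to 20 printable-ASCII characters.
# The sum is dominated by the longest length, 95**20.
n_passwords = sum(95**k for k in range(1, 21))
print(f"{n_passwords:.1e}")  # about 3.6e+39, i.e. roughly 10^40
```

Even granting an attacker a generous catalogue of every algorithm ever published, the search space on the password side is dozens of orders of magnitude larger.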