The other day I wrote a long post describing in detail how we used to and how we now store customer passwords. Some people were surprised that we were open about this, and others wanted to understand exactly what part of a security system needs to be kept secret.
The simple answer is: a security system is only truly secure if its details can be safely shared with the world. This is known as Kerckhoffs's Principle, after the 19th-century cryptographer Auguste Kerckhoffs.
The principle is sometimes stated as "a cryptosystem should be secure even if everything about the system, except the key, is public knowledge" or using Claude Shannon's simpler version: "the enemy knows the system".
The idea is that if any part of a cryptosystem (except the individual secret key) has to be kept secret then the cryptosystem is not secure. That's because if the simple act of disclosing some detail of the system were to make it suddenly insecure then you've got a problem on your hands.
You've somehow got to keep that detail secret and for that you'll need a cryptosystem! Given that the whole point of the cryptosystem was to keep secrets, it's useless if it needs some other system to keep itself secret.
So, the gold standard for any secret-keeping system is that all its details can be made public without compromising the security of the system. The security relies on the system itself, not on the secrecy of the system. (And as a corollary: if anyone tells you they've got some super-secret encryption system they can't tell you about, it's likely rubbish.)
A great example of this is the breaking of the Nazi German Enigma cipher during the Second World War. By stealing machines, receiving information from other secret services, and reading the manuals, the Allies knew everything there was to know about how the Enigma machine worked.
Enigma's security relied not on its secrecy, but on its complexity (and on keeping the daily key secret). Enigma was broken by attacking the mathematics behind its encryption and building special machines to exploit mathematical flaws in the encryption.
That's just as true today. The security of HTTPS, TLS (formerly SSL) and algorithms like AES or RSA relies on the strength of the algorithms, not on keeping them secret. In fact, they are all published, detailed standards. The only secret is the key that's chosen when you connect to a secure website (that's done automatically and randomly by your browser and the server) or when you encrypt a document using a program like GPG.
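To make the "only the key is secret" idea concrete, here's a deliberately toy sketch: a one-time pad, not AES, and never something to use for real data. The XOR algorithm is completely public; all of the security rests on the key being random, as long as the message, and never reused. (The function names here are just illustrative.)

```python
import secrets

def otp_xor(data: bytes, key: bytes) -> bytes:
    # The "algorithm" is public and trivial: XOR each byte with the key.
    # The security comes entirely from the key being random and single-use.
    assert len(key) == len(data)
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))  # fresh random key for this message

ciphertext = otp_xor(message, key)
# Decryption is the same XOR with the same key.
assert otp_xor(ciphertext, key) == message
```

Anyone can read this code, yet without the key the ciphertext reveals nothing — exactly the property Kerckhoffs's Principle demands of real systems like AES.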
Another example is home security. Imagine if you bought a lock that stated that it must be hidden from view so that no one knew what type of lock you had installed. That wouldn't provide much reassurance that the lock was any good. The security of the lock should depend on its mechanism and on you keeping the key safe, not on you keeping the lock a secret!
When storing passwords securely we rely on the complexity of the bcrypt algorithm. Everything about our storage mechanism is assumed to be something that can be made public. So it's safe to say that we choose a random salt, and that we use bcrypt. The salt is not a key, and it does not need to be kept secret.
More than that, we assume that even in the horrible case that our password database were accessed, the passwords would still be secure, even though the hashed passwords and salts would be available to the attacker. The security of the system relies on the strength of bcrypt and nothing else.
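The scheme described above — a random per-password salt plus a slow hash — can be sketched as follows. bcrypt itself needs a third-party library, so this minimal sketch substitutes the standard library's PBKDF2 (a different but similarly-purposed slow key-derivation function); the helper names are illustrative, not our actual code.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None):
    # A random salt per password means two users with the same password
    # get different hashes, defeating precomputed rainbow tables.
    if salt is None:
        salt = os.urandom(16)
    # PBKDF2 stands in for bcrypt here: many iterations make each guess slow.
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def verify_password(password, salt, stored_digest):
    _, digest = hash_password(password, salt)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(digest, stored_digest)

salt, stored = hash_password("correct horse battery staple")
assert verify_password("correct horse battery staple", salt, stored)
assert not verify_password("wrong guess", salt, stored)
```

Note that the salt is stored right next to the digest: it is not a secret, and the scheme's security survives an attacker seeing both.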
Of course, as a practical matter we don't leave the database lying around for anyone to access. It's kept behind firewalls and securely stored. But when thinking about security it's important to consider the worst-case situation — a full disclosure of the secured information — and rely on the algorithm's strength and nothing else.