This question came from our site for professional programmers interested in conceptual questions about software development.


So, if you get a wrong answer, then access is granted?
–
Joachim SauerNov 19 '12 at 10:41


Yes — the actor gives the specific wrong answer that the other side is expecting, e.g. 3. If someone tried a different wrong answer, say 5, it would still not match and access would be denied.
–
loosebruceNov 19 '12 at 10:48

6 Answers

Edit 2: Since this has been migrated to Security.SE, I should probably preface this with: I'm not a professional cryptographer, and there are many, many reasons why you should never roll your own security. Having said that:

It's a form of challenge-response authentication (with different challenges being sent each time). The algorithm to find the correct answer is essentially the "secret" or "password". You are open to a number of attacks, but essentially, anyone who can figure out your algorithm can break this security, and it is relatively trivial to reverse engineer executables.
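For contrast, here is a minimal sketch of what keyed challenge-response looks like when the secret is a key rather than an algorithm (the key and function names are hypothetical, not from the question):

```python
import hashlib
import hmac
import os

SHARED_KEY = b"example-pre-shared-key"  # hypothetical secret known to both ends

def make_challenge() -> bytes:
    # a fresh random nonce per attempt, so a captured response can't be replayed
    return os.urandom(16)

def respond(challenge: bytes, key: bytes) -> str:
    # the response proves knowledge of the key without revealing it
    return hmac.new(key, challenge, hashlib.sha256).hexdigest()

def verify(challenge: bytes, response: str, key: bytes) -> bool:
    expected = respond(challenge, key)
    # constant-time comparison avoids leaking how many characters matched
    return hmac.compare_digest(expected, response)

challenge = make_challenge()
answer = respond(challenge, SHARED_KEY)
assert verify(challenge, answer, SHARED_KEY)
```

Note that here an attacker who knows the entire scheme (HMAC over a nonce) still learns nothing useful, because the security lives in the key, not in the algorithm.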

Edit: To put this into perspective, consider Wikipedia's summary of Kerckhoffs's principle and Shannon's maxim:

In designing security systems, it is wise to assume that the details of the cryptographic algorithm are already available to the attacker. This principle is known as Kerckhoffs' principle — "only secrecy of the key provides security", or, reformulated as Shannon's maxim, "the enemy knows the system".

In your case, you've come up with a secret algorithm, but have no real "key" (there is no shared secret between the two endpoints, other than the algorithm). Once someone figures out the algorithm, the security is broken; so at best you've raised the bar slightly for people trying to figure it out. Rest assured though that anyone who really wants to break it, will do so. In that situation, you've also got a difficult problem of having to replace the compromised algorithm (as opposed to simply switching out the compromised key).

There's a reason why modern cryptography prefers large keys over complex algorithms - no matter how complex the algorithm is, it can (and will be) reverse engineered.

I think you're aiming for some form of Challenge-Response protocol, but this is so trivially weak it's ridiculous. Don't even call it security...

The "Obscurity" aspect here is that the total security of the entire mechanism rests on the fact that the attacker has no idea what this mechanism is. The moment he has any information at all, it all breaks down.

This is why Security by Obscurity is most often shunned by anyone with any security experience (notwithstanding the fact that in certain scenarios, Obscurity can add limited value. But not here.).

TL;DR: Don't do this.

It might have a more specific name, but it's a trivial form of handshaking:

In information technology, telecommunications, and related fields, handshaking is an automated process of negotiation that dynamically sets parameters of a communications channel established between two entities before normal communication over the channel begins. It follows the physical establishment of the channel and precedes normal information transfer.

Although I'm not an expert in security, what you're describing is far from secure. It's a simple exchange of tokens, and it's vulnerable to all kinds of attacks. "3" is essentially your password, and it's a password that makes some sense semantically, which is a bad thing. All I need to do to break your network is throw random numbers at it. You could at least change it to "blue", for example; I'd still get in, but it would probably take me a bit longer.
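To make the "throw random numbers at it" point concrete, here is a sketch of how quickly an attacker guessing small integers defeats the scheme described in the question (the `server_check` function is a stand-in for the real endpoint):

```python
import itertools

def server_check(answer: int) -> bool:
    # the scheme under discussion: challenge "1 + 1", expected answer 3
    return answer == 3

# an attacker who simply tries integers in order gets in almost immediately
for guess in itertools.count():
    if server_check(guess):
        break

print(f"access granted with guess {guess}")  # prints: access granted with guess 3
```

A one-character numeric "password" falls on the fourth try; even a semantically meaningless token like "blue" only raises the cost slightly, which is the answer's point.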

You should spend some time researching multi-factor authentication; you'll soon come up with a better way of identifying your applications.

As currently stated it would seem to be trivially broken by a replay attack. Regardless of moving to multi-factor authentication, I think you would need to improve this part on its own, as it is essentially transmitting the password in plain text.
–
jk.Nov 19 '12 at 11:10

Thanks — at the moment it's just two web applications communicating over HTTPS using POST variables.
–
loosebruceNov 19 '12 at 11:24

Aside from the matter of its name, I'd consider it quite weak security. The reasons why are outlined in the other responses, so what follows is more general commentary.

Security is hard. That's why security experts receive huge pay packets, and why modern operating systems focus so heavily on security yet still sometimes manage to get it wrong.

By implementing your own security you're showing that you may have a bad case of the "not invented here"s. I'd encourage you to drop that and just use the security built in to your platform - no matter how much you think you may do better, you can't. Plus your system administrators will thank you for using integrated security rather than giving them yet another unnecessary layer of additional security to manage, and your end users will thank you for not forcing yet another username/password combo on them.

It's called a "password". It's a fairly insecure one (one character), but it's a form of shared secret nonetheless.

On the other hand, suppose the question were "What's the surface area of Ohio?" and the answer were the last line of the Gettysburg Address, base64-encoded. Then you'd have a better password, and better security.

In fact, you've seen this same system in place almost everywhere. Typically the question is "what is your password", to which you respond with some hopefully-unpredictable string. But extending the system such that you respond with different passwords for different prompts is just an extension of the same concept.
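The "different passwords for different prompts" extension described above amounts to nothing more than a lookup table of shared secrets — a sketch, with hypothetical prompt/answer pairs:

```python
# a per-prompt password table is still just a shared secret, only a larger one
RESPONSES = {                         # hypothetical prompt/answer pairs
    "what is 1 + 1": "3",
    "what colour is the sky": "green",
}

def authenticate(prompt: str, answer: str) -> bool:
    # grant access only when the answer matches the table exactly
    return RESPONSES.get(prompt) == answer
```

Anyone who obtains the table (for instance, by decompiling the client) holds every "password" at once, which is why this is no stronger in kind than a single shared password.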

Security by obscurity. A question to which the expected answer is wrong when given only the information presented in the challenge requires some additional "secret" knowledge that only "in" software has. The trouble is that the software is in the hands of your attacker, who can decompile it to discover the secret. This is therefore very weak, because it relies on a static secret of low entropy (the "secret" is to add 1 to the answer to the stated question).

If two programs must trust each other, each knowing that the other cannot be 100% guaranteed to be who they say they are and not an impostor, the usual method is some sort of "independent verification": if a trusted third party says that this program is who it says it is, that's "evidence" you can use to increase your confidence.

Certificates are one form of this verification. A server wishing to prove itself obtains a certificate from an independent third party, signed with a private key that is not given to the server; anyone who requests the server's certificate can verify that signature using a public key distributed independently by the third party. The server (or an attacker wishing to mimic it) therefore can't change the information in the certificate, and so as long as that information matches the actual location and public identifiers of the server, clients can be confident the server is who it says it is.

Without certificates, most systems rely on a "zero-knowledge" evidence model. Zero-knowledge proofs usually still require some sort of third party, which distributes the pieces of evidence that programs use to answer challenges. In the real world, this is usually an authentication server. The difference is that nobody has to know everything about the authentication scheme, and the information used can be obtained in real time and thus can change every time it is performed.

Here's an example: Alice is greeted by Bob, whom Alice does not trust. Bob says he knows Cindy, and therefore, he says, he's trustworthy. To prove this fact, Alice calls Cindy, who knows Alice, and asks for half of an asymmetric key pair. She then challenges Bob to encrypt a secret message that can be decrypted by Alice's key. Bob calls Cindy, who also knows Bob, and gives him the other half of the key pair. Bob encrypts the message, which Cindy never knows, and gives it to Alice, who decrypts it with her key and gets the original message. Bob couldn't have encrypted the message correctly without knowing the other half of the key, and the only way he could have gotten that is from Cindy. Cindy, for her part, never knows the secret message, so she can't give Bob the message to send back unless Bob tells her (and if Cindy had asked, Bob would be suspicious that maybe Cindy isn't who she says she is).
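The Alice/Bob/Cindy exchange above can be sketched with a toy RSA key pair standing in for the two halves Cindy hands out. This uses deliberately tiny textbook parameters (the classic 61/53 example) purely for illustration — real systems use large random primes and padding:

```python
# Cindy generates a key pair and distributes the halves separately
p, q = 61, 53
n = p * q                          # public modulus, known to everyone
phi = (p - 1) * (q - 1)
e = 17                             # the half Cindy gives to Bob
d = pow(e, -1, phi)                # the half Cindy gives to Alice (Python 3.8+)

secret = 42                        # Alice's challenge message; Cindy never sees it
ciphertext = pow(secret, e, n)     # Bob encrypts with his half
recovered = pow(ciphertext, d, n)  # Alice decrypts with her half

assert recovered == secret         # Bob must have gotten his half from Cindy
```

Bob can only produce a ciphertext that decrypts correctly under Alice's half if he obtained the matching half from Cindy, and Cindy never learns the secret message — which is the property the story is illustrating.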

In the real world, Alice and Bob would be programs used by end users (maybe the same end user), and Cindy would be a central authentication system. The end users of the two programs would have offline secrets (username/password) they'd use to authenticate with the central system, and once that's done the programs can prove to each other that their end users are valid users on the system, without either app knowing the credentials of the other user, or the central service knowing the secret passed between the two programs as part of their handshake.

In order for this kind of scheme to be broken, an attacker David must either convince Cindy that he's actually Bob, or he must also have an accomplice Emily, who must convince Alice that she's Cindy, before freely giving Bob the other half of the key she generated for Alice. How Alice knows that Cindy is really Cindy and not Emily, and how Cindy knows Alice and Bob are who they say, require their own schemes with their own secrets. Those schemes can involve third parties as well, but eventually you run out of third parties; at some point you must rely on the transfer of an offline secret, such as a set of user credentials, to verify that someone is who they say without having to consult a third party.