I have a mobile application and a server. The mobile application communicates with the server through a web services framework via SSL.

The mobile application allows customers to pay for transactions. The server authenticates the mobile application on every web service call via a username, password, device id and digital signature.

The device id ensures that the mobile account is being accessed from the device that was used to create the account. Therefore, a hacker requires physical access to the smartphone, even if he knows the username and password of a mobile account.

The digital signature is computed using a private key which is hard-coded inside the mobile application. Every message sent from the mobile application is accompanied with a digital signature computed using this private key. This ensures that the message is coming from the mobile application and that the contents of the message were not altered while in transit.
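For context, the signing step looks roughly like the sketch below (class and variable names are illustrative, not our actual code; assume RSA with SHA-256):

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.Signature;
import java.util.Base64;

public class RequestSigner {
    // The same private key is baked into every copy of the app.
    private final PrivateKey embeddedKey;

    public RequestSigner(PrivateKey embeddedKey) {
        this.embeddedKey = embeddedKey;
    }

    // Sign the request body so the server can check integrity and origin.
    public String sign(String requestBody) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(embeddedKey);
        sig.update(requestBody.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig.sign());
    }
}
```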

What would be the consequences if this private key were discovered? Would a hacker be able to modify information that is being transmitted via SSL and still recompute a valid digital signature using this private key? Would it be better if the mobile application generated a public-private key pair and sent the public key to the server, or would this be unnecessary? Thank you so much.

2 Answers

The device id ensures that the mobile account is being accessed from the device that was used to create the account.

Let me stop you right there. This does not, and cannot, work, at least with stock hardware. On the server, you may receive a message claiming to originate from a device with a specific ID, but this in no way proves that said device was really involved in the operation.

The fundamental reason for this is that the potential attacker can know, by definition, everything that is not secret, and basic hardware contains no secret -- especially hardware that the attacker has access to.

The username and password do not authenticate the device or the application; they authenticate the user. If the user is the potential attacker, then this name and password won't stop him in any way: he knows them, there again by definition. The same goes for a private key embedded in the application code: the attacker can extract it through rather simple reverse engineering, and therefore such a signature proves nothing at all.

To have some real device authentication, you need heavy artillery, including some tamper-resistant hardware elements that you will not find, or be able to use, in existing smartphones.

All of the above is for the security model where the user is the attacker. In that model, the user wants to access your server, but not from "your" app; instead, he wants to use his own special client code, which allows him to do some operations that are formally forbidden. This is the security model of most online games: the "attacker" is a game player who wishes to obtain some advantage through a modified client application, for instance by displaying the positions of the other players, which the client application knows in order to maintain the game dynamics but normally does not display. See the Wikipedia page for some other examples.

The bottom line is that this security model cannot be maintained in the long run, although some mitigation measures can be applied to keep the nuisance at a low level, at least as long as what is at stake does not have great value.

You might want to use another security model where the user is not the attacker but a potential victim, and you want to protect the user's data, his requests to your server, and the responses, against malicious alteration from the outside.

In that model, SSL is sufficient. That's what SSL was designed for, and it works. A signature from a private key hardcoded in the application code brings no extra safety: since every instance of the app contains the private key, it must be assumed that the attacker already has it. If the attacker could break through the SSL, it would be easy for him to recompute the signature on the altered data. Fortunately, breaking through SSL is far from trivial.
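To make this concrete, here is a hypothetical sketch of the attacker's side (names are illustrative): once the shared key has been extracted from the application package, producing a "valid" signature over altered data is exactly the same operation the genuine application performs.

```java
import java.nio.charset.StandardCharsets;
import java.security.PrivateKey;
import java.security.Signature;

public class Forgery {
    // extractedKey is the shared key recovered from the app binary
    // through reverse engineering.
    static byte[] forge(PrivateKey extractedKey, String alteredRequest) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(extractedKey);
        sig.update(alteredRequest.getBytes(StandardCharsets.UTF_8));
        // This signature is indistinguishable from one the genuine app would produce.
        return sig.sign();
    }
}
```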

One way to state it is that a private key which is copied into thousands of application instances, on thousands of mobile phones, cannot be really private. But once it is public, it no longer has any value; a private key is worth only as much as it is private.

Then there is a third security model in which the attacker is again the user, but with a distinct goal. Instead of trying to run a modified application (e.g. an application which follows the protocol but leaks some extra information), he tries to send fake, altered requests to the server. In that model, a signature can be useful, if you force the attacker to sign every request. With the user name and password, your server already knows which user it is talking to. A signature computed directly on the request (rather than on session data, as happens in SSL with certificate-based client authentication) can potentially be turned into a convincing proof that could be shown to a judge, if things go legal.

For this to work, you must not use one shared private key, but one key per user. It also requires that you can demonstrate that your server could never have obtained a copy of the private key of a user; otherwise, there is no proof. This is a complex issue, and note that I used the word "potentially": legal matters depend on the jurisdiction and cannot be reduced to simple technical tools.
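As a rough sketch of that per-user variant (on an actual phone the key would be generated inside a hardware-backed keystore; plain java.security is used here only for brevity, and all names are illustrative):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class PerUserSigner {
    private final KeyPair keyPair;

    public PerUserSigner() throws Exception {
        // Generated on the device at enrollment; the private key never
        // leaves the device, so the server provably never held a copy.
        KeyPairGenerator gen = KeyPairGenerator.getInstance("RSA");
        gen.initialize(2048);
        this.keyPair = gen.generateKeyPair();
    }

    // Sent to the server once, at account creation.
    public byte[] publicKeyForEnrollment() {
        return keyPair.getPublic().getEncoded();
    }

    // Every request is signed with this user's own key.
    public byte[] signRequest(String requestBody) throws Exception {
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initSign(keyPair.getPrivate());
        sig.update(requestBody.getBytes(StandardCharsets.UTF_8));
        return sig.sign();
    }
}
```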

Summary: SSL, when used correctly, protects data in transit against alterations and eavesdropping by outsiders. An extra signature with a shared private key does not bring any additional benefit.

To protect against a modified client, and/or to authenticate the client device (as opposed to the human user), a signature with a shared private key does not help either. For that, you would need some extra client-side hardware. It may be possible to change the context by trying to make the user responsible for what he sends, but for this, again, a signature with a shared private key won't work.

Thank you so much for your detailed answer. I appreciate the time and effort you have taken to answer my question. Sorry for taking so long to respond, but I have been very busy lately. I am very grateful.
– Matthew, Aug 27 '13 at 18:05

Any private key distributed with the application is merely obfuscated, not secured. It WILL be discovered, and it CAN be abused to falsify a request as if it came from an application with any device ID desired. You should generate a new, unique key pair for each device and manage the public keys associated with the devices on your end. You should not trust that a key can never be extracted from a particular device, but per-device keys do complicate the attack: impersonating a device now requires compromising that specific device.
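A hypothetical sketch of the server side of such a scheme, assuming public keys are registered at enrollment (an in-memory map stands in for what would be a database in practice):

```java
import java.nio.charset.StandardCharsets;
import java.security.KeyFactory;
import java.security.PublicKey;
import java.security.Signature;
import java.security.spec.X509EncodedKeySpec;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class DeviceKeyRegistry {
    // deviceId -> X.509-encoded public key, stored when the device enrolls.
    private final Map<String, byte[]> keysByDevice = new ConcurrentHashMap<>();

    public void enroll(String deviceId, byte[] encodedPublicKey) {
        keysByDevice.put(deviceId, encodedPublicKey);
    }

    // Verify a request signature against the key registered for this device;
    // a signature made with any other device's key will fail.
    public boolean verify(String deviceId, String requestBody, byte[] signature)
            throws Exception {
        byte[] encoded = keysByDevice.get(deviceId);
        if (encoded == null) return false;
        PublicKey key = KeyFactory.getInstance("RSA")
                .generatePublic(new X509EncodedKeySpec(encoded));
        Signature sig = Signature.getInstance("SHA256withRSA");
        sig.initVerify(key);
        sig.update(requestBody.getBytes(StandardCharsets.UTF_8));
        return sig.verify(signature);
    }
}
```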

Let us assume that a hacker gets the private key of the mobile application. He still requires the username, password and device id to impersonate the user. The private key here is being used to verify that the data being transmitted via SSL is not altered while in transit (integrity). In your opinion, how does generating a public-private key pair for each user improve the security of the system?
– Matthew, Aug 16 '13 at 16:35

Thank you for your response :) I am just discussing the various possibilities and outcomes.
– Matthew, Aug 16 '13 at 16:36

@Matthew - it prevents device one from being able to read device two's traffic. In certain modes of SSL, having the private key will allow the connection to be monitored, which would allow capture of the device ID, username and password if such modes were used. Even if you use a mode that isn't compromised by that, it still removes any authentication value the private key itself gives. At that point you are better off using the private key on the server (so that someone can't fake your service) and having the clients simply connect over SSL. A shared private key gives no gain.
– AJ Henderson, Aug 16 '13 at 16:58

The gain of each device having its own private key is the same as the gain with any client certificate scheme: each client is the only one holding that key and can thus prove that it is the client it claims to be.
– AJ Henderson, Aug 16 '13 at 16:59

Thank you for your response. I would like to clarify that the private key hard-coded inside the mobile application is NOT the same private key that is used for SSL. A separate set of keys is set up for SSL.
– Matthew, Aug 16 '13 at 17:01