I'm designing a web application where users will exchange short messages with the server very frequently (for instance, a few characters every second). I want the whole communication to be confidential, but the (perceived) performance must be acceptable. I had this idea of putting HTTP and HTTPS to work together using the following scheme:

The browser will request two (fairly large) keys from the server via HTTPS, then store them in memory (in principle, I'm thinking of one-time-pad keys, i.e. random data). Then, the contents of each message will be XOR'ed with the first key (without any reuse, of course) and sent via HTTP. The server will send its responses in the same fashion, using the other key. When one or both keys are about to be used up, new ones will be requested, and so on.

The rationale behind this is that both the XOR and the HTTP request are cheap for small messages, so the perceived performance will be good. The HTTPS calls, more expensive, will not be time-critical and will benefit from the larger scale of the exchanged keys. Another stream cipher might be used instead of one-time-pads, and in the future (when properly supported everywhere) WebSockets could be used instead of HTTP requests.

Has this kind of thing been used before? Would this properly secure the communication, or is this a stupid idea for some reason? I couldn't see any shortcoming, but I'd like to hear the opinion of more experienced people before spending too much time on this.

2 Answers

You are talking about encryption only; you are forgetting about integrity, an often fatal mistake. Encryption protects you only against passive attackers, who can spy on everything everybody says, but can alter communications in no way. That's not a very realistic model. You need verified (i.e. keyed) integrity, and for that you want a Message Authentication Code. Or one of those nifty Authenticated Encryption schemes.
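To make the danger concrete: XOR encryption is malleable, so an active attacker who can guess part of the plaintext can change it to anything, and the recipient cannot tell. A MAC over the ciphertext catches the forgery. A small sketch (the message format and key sizes are illustrative only):

```python
import hashlib
import hmac
import secrets

pad = secrets.token_bytes(16)       # the "one-time pad" for this message
mac_key = secrets.token_bytes(32)   # separate key for integrity

msg = b"PAY $0000000100"
ct = bytes(m ^ p for m, p in zip(msg, pad))

# Attacker knows the format but not the pad: XORing the ciphertext with
# (old_plaintext XOR new_plaintext) rewrites the amount undetected.
forged = bytes(c ^ o ^ n for c, o, n in zip(ct, msg, b"PAY $9999999900"))
tampered = bytes(f ^ p for f, p in zip(forged, pad))
assert tampered == b"PAY $9999999900"  # decrypts "correctly" -- encryption alone didn't help

# With a MAC computed over the ciphertext, the forgery is rejected:
tag = hmac.new(mac_key, ct, hashlib.sha256).digest()
forged_tag = hmac.new(mac_key, forged, hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged_tag)
```

Note the use of a constant-time comparison (`hmac.compare_digest`) when checking tags; a naive `==` check can leak timing information.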

Now, a protocol which begins with a potentially expensive initialization step, resulting in a shared secret, which is then used to encrypt data and protect it with verified integrity... that's not easy to build without making any mistake. Fortunately, one such protocol already exists, with all the hard bits done the right way, and implementations already available. It is called SSL.

As @Rook says, SSL is very lightweight once the initial handshake has been done. A typical HTTPS client (say, a Web browser) will first open an SSL connection, and then keep it open for sending requests. An open SSL connection implies a very slight overhead compared with raw unprotected data: in practice, about 30 extra bytes per record (it depends on the cipher suite), each record able to hold up to 16384 bytes of data, so we are talking about a size increase of less than 0.2%, and you would get that with any other protocol which ensures confidentiality and integrity anyway. On the other hand, your handmade scheme doubles the network consumption (the one-time-pad is not exactly free to send).
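The back-of-the-envelope comparison, using the figures above (the exact per-record overhead varies with the cipher suite):

```python
record_overhead = 30     # approximate SSL per-record overhead, in bytes
record_payload = 16384   # maximum plaintext bytes per SSL record

ssl_overhead_ratio = record_overhead / record_payload
assert ssl_overhead_ratio < 0.002  # under 0.2% extra on full records

# The handmade scheme: every ciphertext byte requires one pad byte,
# and that pad byte was itself downloaded (over HTTPS) beforehand.
otp_overhead_ratio = 1.0  # 100% extra traffic, before any HTTP framing
```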

Moreover, even if the SSL connection gets somehow closed (e.g. because the server does not wish to keep any connection open for more than 15 seconds of inactivity), a new one can be reopened with an abbreviated handshake which reuses bits of the previous handshake; in particular, it requires fewer messages (so reduced latency), it is symmetric-crypto only (so light on CPU), and it involves no certificate whatsoever, so it is simple and sane.

thanks for the detailed info, will surely help me avoid many future mistakes. My only remaining question is whether a short message would still transmit a lot of data (i.e. you said a record can hold up to 16KiB, but will it always send/receive that much? Is it the size of a block?)
–
mgibsonbr Jan 18 '12 at 23:32

@mgibsonbr: normally, SSL implementations will aim at using the largest possible records (i.e. 16 kB). So they send shorter records only when they have no choice, i.e. they need the data to go as is and promptly (for instance, if a client is sending a request, and the request is complete, the client will do nothing before having received the answer, so the request has to go, even if it is shorter than 16 kB). In practice, the overhead of SSL (after the handshake) is really negligible, and as small as can be hoped for.
–
Tom Leek Jan 19 '12 at 0:07

The short answer is that this isn't necessary and, if anything, it's going to be less efficient and less secure.

The most expensive part of HTTPS is the initial handshake. The client has to validate the certificate, perform an OCSP lookup, and incur some other overhead. At the end of this handshake a "master secret" is created, which combines random numbers generated by both the client and the server. After this master secret is created, it's a really lightweight system: it's just encrypting data with a symmetric key, which is basically what you are suggesting.

Just use HTTPS for everything; it's a very lightweight protocol. The client will reuse this master secret without needing to renegotiate. Everything will be secure and efficient. No need to reinvent the wheel.

thanks for the feedback. I suspected such a "clever" idea couldn't be novel, but being as inexperienced as I am I wasn't sure. Glad I asked before wasting too much time on this...
–
mgibsonbr Jan 18 '12 at 23:18