Add the implementation of TLS 1.1 to NSS's libSSL.
The interesting/tricky part of this work is likely to be augmenting libSSL's
API for enabling and disabling specific versions of SSL/TLS. IMO, rather
than adding yet another boolean socket option similar to SSL_ENABLE_SSL2,
SSL_ENABLE_SSL3, and SSL_ENABLE_TLS, we should have an API function (or set
of functions or socket options) that allows the caller to set the minimum and
maximum version number that the caller wishes to enable, and perhaps also to
retrieve the minimum and maximum versions supported by the library.

Kuat emailed a patch to the NSS team. It was based on NSS 3.12.4 sources.
Of course, we need patches that apply to the tip of the trunk, so I attempted
to bring that patch forward to the trunk. I had to make a few changes to get
it to apply, which I did quickly. Kuat, please check this patch and make
sure it's faithful to your original.
I have not yet done an in-depth review of this patch; however, I have a few
preliminary observations.
1. Numerous lines of patched source code are longer than 80 columns. They must
be wrapped. I wrapped the ones that I had to change to get them to apply to
the trunk, but none of the others.
2. The patch to the ssl.h header file claims that the new TLS 1.1 functionality is enabled by default, but as I read the patch, the flag opt.enableTLSv11 is
not set to 1 in ssl_defaults (and should not be, for reasons of backward compatibility). So, the only way to set this flag is to call SSL_OptionSet
for option SSL_ENABLE_TLSv11, and no code in this patch does that.
3. The patch does not include any changes to the test programs (e.g. tstclnt,
strsclnt, selfserv) with which to test the new functionality. This makes me wonder how well this patch has been tested.
4. Option SSL_ENABLE_TLSv11 has been added to SSL_OptionSet, but not to
SSL_OptionGet, SSL_OptionGetDefault, or SSL_OptionSetDefault.
I'll try to do an in-depth review on this patch soon (this weekend, perhaps).

There has been a bunch of email correspondence about this enhancement.
It really should be taking place as comments in this bug, IMO. I will
paste the text from two of the numerous emails here for posterity.
I don't think I'm breaking any confidence in so doing.
In reply to some other emails, I wrote on 2010-05-11:
> We really do not want to extend the present approach of having a
> separate boolean flag ("socket option") for each version of SSL/TLS that
> is supported. There are numerous problems with that approach. Let me
> name just a few.
>
> 1) The flags give the application the false impression that it can enable
> and disable any combination of versions that it wants, such as (say)
> enabling SSL 3.0 and TLS 1.1 but not TLS 1.0, but this is not really a
> reasonable thing to do. The only reasonable approach is to define a
> range (minimum, maximum) and enable/support all versions in that range.
>
> 2) The flags are unpredictable. We don't know, in advance, what option
> number will be assigned to future versions of TLS. This makes it difficult
> to create a general purpose API for asking what versions of TLS are
> supported by a particular installed shared lib. We need an API that lets us
> easily get the range of version numbers supported by a shared lib, even if
> that range is wider than what the caller understands, and also lets us set
> the range.
>
> INTERNALLY, the flags are OK, but as an external API, they are not.
... and in truth, they're not so good for internal use, either.
I envision an API that defines a new struct something like this:
typedef struct SSLVersionRangeStr {
    SSL3ProtocolVersion min;
    SSL3ProtocolVersion max;
} SSLVersionRange;
and 5 get/set functions that let the caller
1) ask what range is supported by the library
2) ask what range is enabled by default
3) set the range that is enabled by default
4) ask what range is presently enabled on a specific SSL socket
5) set the range that is enabled on a specific SSL socket
(If these could be combined into the existing socket option functions, that
would be best, IMO).
On 2010-05-12, Kuat Eshengazin wrote:
> Thank you for your comments and code review.
> Actually, the patch was provided only to show my progress, not for a deep review.
> I think I've got enough input for another couple of days of work.
Regarding my proposal to use a range of supported values, he asked:
> I wonder how practical it is to ask for a list of supported
> protocols from the library?
> Can you please provide a practical use case ?
I was proposing to offer a range, not a list. It would be inclusive of all
versions between (and including) min and max. The value is simply that an
application can choose to ask for (set) all supported versions in range,
and be certain that it is not asking for features that are not supported,
which would presumably cause an error response.
> I will show up with polished version of a patch soon.
OK.

Nelson, thanks for such thorough and voluminous response.
I agree with most of the comments. Just a couple of qualifying questions:
> You should have a symbolic constant (#define) for the size of the longest
> IV (longest block size), and use that here rather than the magic number
> 16. NSS may already have a suitable constant defined somewhere. If not
> please define one. I'm not sure where it should go. In some header file.
> Probably not in ssl headers though, but rather in pk11wrap.
What if I just introduce a second argument like 'int iv_size'?
>+ ssl3_SetExplicitIV(ss);
> That function can fail, and this code should detect that failure and not merely
> ignore it, IMO. This is the basis of my r- review.
Which particular error code should be set here in case of failure ?
Is it ok to say PORT_SetError(SSL_ERROR_IV_PARAM_FAILURE) ?
Regarding the encryption step after PK11_GenerateRandom: I tried to comply
with the algorithm provided by RFC 4346 and keep the IV-setting logic in one place.
Furthermore, if I'm not mistaken, prepending the plaintext with the generated random
data like you propose would cost an extra memcpy per TLS record; IMO it's better to avoid that.

> > You should have a symbolic constant (#define) for the size of the longest
> > IV (longest block size),
>
> What if I just introduce a second argument like 'int iv_size'?
This is C99 specific. I do not know if all our client compilers are C99 compliant. Mozilla has a long tail of lower-tier platforms. (There are some that even compile NSS on OS/2!!) We should determine this before we check in.
> >+ rv = PK11_GenerateRandom(R, R_len);
> >+ if (rv != SECSuccess)
> >+ goto fail;
>
> Now, let me ask a question while thinking aloud here:
> Is it necessary to do the following encryption step?
> Can't you just skip it, and treat the initial block of random data as
> the IV, rather than encrypting it first?
Short Answer: No.
Long Answer: Encrypting it is what causes the "new" IV to get picked up by the encryption engine. The effective IV is really the encrypted value of your random number XORed with the previous block.
Protocol
IV in clear
E(k,PT1 xor IV) = E1
E(k,PT2 xor E1) = E2... etc.
This code
E(k, R xor E-1) = E0
E(k, PT1 xor E0) = E1
E(k, PT2 xor E1) = E2 ... etc.
As you can see, IV and E0 are synonymous. Someone following the protocol will
take E0 as the in-the-clear IV and properly recover PT1.
If you just send R in the clear you get:
R
E(k, PT1 xor E-1) = E1
E(k, PT2 xor E1) = E2 ... etc.
In this case someone following the protocol will end up decoding PT1 xor E-1
xor R. You can save the encryption by sending E-1 as:
E-1
E(k, PT1 xor E-1) = E1
E(k, PT2 xor E1) = E2 ... etc.
But in this case you have reintroduced the potential attack that sending an IV with each record was meant to fix. (If the attacker can control PT1, he knows E-1 because he saw the last block, so he can get some information about E(k, AT) for any AT he cares to provide.)
We can either encrypt R or we can:
GenerateIV.
Send IV in clear.
Finalize(encryption_context).
Free(encryption_context).
CreateContextBySymkey(key, IV).
Start encrypting again.
I think a single block encryption is probably cheaper (clearly cheaper if we are talking about a hardware token).
bob

> You should have a symbolic constant (#define) for the size of the longest
> IV (longest block size), and use that here rather than the magic number
> 16. NSS may already have a suitable constant defined somewhere. If not
> please define one. I'm not sure where it should go. In some header file.
> Probably not in ssl headers though, but rather in pk11wrap.
Looking through the pk11wrap code I didn't find any suitable #define or variable for this.
Below are the most relevant definitions I found in freebl:
lib/freebl/blapit.h:#define AES_BLOCK_SIZE 16 /* bytes */
lib/freebl/rijndael.h:#define RIJNDAEL_MIN_BLOCKSIZE 16 /* bytes */
lib/freebl/rijndael.h:#define RIJNDAEL_MAX_BLOCKSIZE 32 /* bytes */
Semantically they are related merely to AES, but IMO it's OK to use the AES_BLOCK_SIZE definition.
Otherwise I'm going to define something like
#define SSL3_MAX_BLOCK_SIZE 16
in ssl3prot.h or sslimpl.h (right place?).
The longest IV defined by TLS 1.1 is 8 bytes; in TLS 1.2 it is 16.

Kuat,
Please, without delay, attach a patch that makes the changes to strsclnt, tstclnt, and selfserv that enable them to test this new code in your
existing patch, as it now is, so that I can play with it too.
I'm OK with adding a #define to ssl3impl.h. Let's call it TLS_11_MAX_IV_SIZE.
It will be OK to use a strange prefix, because ssl3impl is a private header
file.
Your new function ssl3_SetExplicitIV appears to me to correctly and properly
set an error code for every path that returns SECFailure. Therefore, it is
not necessary (or desirable) for its caller(s) to set an error code. They
should just allow the error code that has already been set to remain. The
crucial element is that they must at least DETECT when it returns SECFailure
and take appropriate error action.

This patch adds use of the SSLVersionRange API to tstclnt, strsclnt and selfserv.
But it breaks the existing SSL tests, because the '-T', '-2', '-3', '-S' options behave slightly differently with this patch. Provided for testing only.

In the existing code there is an effort made to avoid multiple calls to cwSpec->encode when the plaintext is small, and multiple calls to cwSpec->encode can also be avoided when compression is enabled, because the IV, compressed plaintext, MAC, and padding are all contiguous. The patch adds an unconditional call to cwSpec->encode so that cwSpec->encode is always called two or three times. I had already written a patch to add TLS support before I saw Kuat Eshengazin's effort, which will cause cwSpec->encode to be called either one time (when compression is enabled or when the plaintext is small) or three times (compression is disabled and the plaintext is large). I would like to contribute my patch but of course it conflicts with Kuat's work.
In particular, my patch writes the IV directly into the destination buffer (as suggested in comment 3), so the SSL3_MAX_BLOCK_SIZE stuff from comment 6 would not be necessary.
The explicit per-record IV was added in TLS 1.1 to prevent a known-plaintext attack on CBC mode. However, to protect against that attack, a cryptographically random IV is only needed when the application changes its output in response to some external event. Or, in other words, when the application sends multiple records at once, only the first of those records really needs to have a cryptographically random IV; the rest of them can use a less expensive (fixed) IV.

Re range patch...
It seems like the logic is inverted. Instead of mapping the ranges to individual 'set' options (SSL3, TLS1.0, TLS1.1, TLS1.2, etc.), we should keep the ranges internally and have the individual set options 'expand' the ranges appropriately (example: enabling SSL3 would expand either the min or the max of the range to include SSL3).
What I would expect is that in all those places where you have to add && TLS1.1, you will likely need to add && TLS1.2 as well later. Instead, the check should be against the minimum version, e.g. min > TLS 1.0 (or SSL3, depending on the code).

Brian, if you have a working patch for some of the direct stuff, why don't you send it to kuat.
I'm OK with trying to limit cwSpec->encode calls. They are particularly expensive on hardware tokens. I'm not so sure about trying to optimize when to pick a new random IV. I think I would err on the side of caution and just always pick a new random here. Since we are actually encrypting the bits, we aren't leaking entropy (except to the server).
bob

Re: comment 11
Brian, thanks for your comment. I totally agree with the idea of eliminating cwSpec->encode calls.
> the IV, compressed plaintext, MAC, and padding are all contiguous
This is partially true, because the IV is not in the buffer initially.
You have to generate random R[iv_size] and prepend it to the plaintext. But you cannot touch the pIn buffer. IMO it's tricky, and there is a risk of reintroducing the bug 305147 issue.
So I really wonder: which technique are you using to set the random IV?
Regarding explicit IV: we have to comply here and TLS 1.2 says that (RFC 5246 6.2.3.2) "Initialization Vector (IV) SHOULD be chosen at random, and MUST be unpredictable".

I modified your patches a little, fixing ssl.def and changing tstclnt a little,
and then tested with this command:
tstclnt -h www.openssl.org -d DB -23 -T 0 -f -vv -c uvxy < stdin.txt
This enables only the AES 128 and 256 block ciphers, with and without DHE,
and should enable just TLS 1.1 (SSL 3.2).
www.openssl.org replied that it supports only SSL 3.1, not 3.2. :-(

Comment on attachment 447111
TLS 1.1 with new SSLVersionRange API
Even though the review request for this patch was removed today,
I have been reviewing this patch. I'm not done, but I think I must
give it an r- due to at least one apparent correctness error.
First, a minor issue. In ssl.def, the new functions must be
added in a new section for release 3.12.7. They cannot be added
to a section for an existing release. And they should be added in
ASCII collating sequence, which is roughly alphabetical order.
Now the correctness error that I believe I see is this.
In function ssl3_CompressMACEncryptRecord, if compression has been
enabled, the compressed data is placed into the output buffer at
wrBuf->space - SSL3_RECORD_HEADER_LENGTH, which is also exactly
where ssl3_SetExplicitIV will place the IV if it's TLS 1.1 with a
block cipher. So the IV will overwrite the first block of the
compressed plain text. I'm guessing that there has been no test
that uses both TLS 1.1 block ciphers AND compression. Please
correct me if my analysis is wrong.
Finally, note that ssl3_CompressMACEncryptRecord always does the
encryption with either one or two calls to the "encode" function.
It's trying very hard to avoid a large memcpy/memmove, but on the
other hand, it will do a memmove of up to 256 bytes if that will
allow it to make a single call to the "encode" function instead of
two distinct calls. With a separate IV, I think we have the
potential for one, two or three calls to the encode function.
A little care may be able to reduce the use of 3 calls significantly
just as the existing (unpatched) code reduces the use of 2 calls
significantly. I think that's a good enough reason to not make any
calls to encode in ssl3_SetExplicitIV, and instead put all the logic
for deciding when to call encode in ssl3_CompressMACEncryptRecord.

Re: comment 18
Thanks for review.
Nelson, I removed review request exactly because I've spotted this issue.
Of course, functions that deal with the IV should come first, and subsequent operations have to use the ivLen offset. My fault, I should have written a message
when removing the review? flag.
Currently I'm re-factoring my patch: I turned ssl3_SetExplicitIV() into ssl3_PrependWithRandomR(), which writes the random buffer directly to wrBuf->buf + SSL3_RECORD_HEADER_LENGTH. But I ran into some problems here; please see the email from me (you are CC'd).

(In reply to comment #14)
> > the IV, compressed plaintext, MAC, and padding are all contiguous
>
> This is partially true, because IV is not in the buffer initially.
> You have to generate random R[iv_size] and prepend it to plain-text. But you
> cannot touch the pIn buffer. IMO it's tricky, and there is a risk to
> reintroduce the 305147 issue.
> So really I wonder which technique you are using to set random IV ?
Technique 2(b) from section 6.2.3.2, just like OpenSSL. At the beginning of the output, add the cryptographically-random IV. If the plaintext is to be compressed, then append the compressed plaintext, the MAC, and the padding; everything will be contiguous, so just encrypt it with one call. Otherwise, you have to encrypt the IV, then encrypt the plaintext, then encrypt the MAC and padding.
> Regarding explicit IV: we have to comply here and TLS 1.2 says that (RFC 5246
> 6.2.3.2) "Initialization Vector (IV) SHOULD be chosen at random, and MUST be
> unpredictable".
Technically, that is overkill, but I am going to remove this optimization from my patch for now.

This patch reduces the number of calls to cwSpec->encode when compression is enabled and/or the plaintext is small.
It also makes the old versioning interface more sane, by having SSL2 controlled separately from the SSL3/TLS versioning. Without SSL2 being separate, when the range is SSL2-TLS1.1 and we get SSL_OptionSet(fd, SSL_ENABLE_SSL3, PR_FALSE), it isn't clear whether to disable SSL2 and SSL3, or to disable SSL3 and TLS. With SSL2 separate, SSL_OptionSet(fd, SSL_ENABLE_SSL3, PR_FALSE) always just disables SSL3, SSL_OptionSet(fd, SSL_ENABLE_TLS, PR_FALSE) always just disables all TLS versions, and SSL_OptionSet(fd, SSL_ENABLE_TLS, PR_TRUE) enables all TLS versions. If you want to control the TLS version range in a more fine-grained fashion, you need to use the new version range API. If you want to enable SSL2, you must do so separately from the new version range API (using the old SSL_OptionSet interface).

Comment on attachment 449687
Reduce calls to encryptor; make handling of old versioning API more sane.
Here are several comments that apply generally to all NSS contributions.
1. All patches MUST be submitted as CVS diffs against the CVS trunk,
not as Hg diffs against Firefox's dated downstream Hg repository.
2. For reasons of backward version compatibility, we cannot modify
the set of functions publicly released in prior release versions.
Therefore, when adding function names to any of NSS's .def files,
do NOT add them to any section for any existing release.
If there is a new section for a release still in development, you
may add symbols to that section. If not, add a new section for
the new release. Symbols in each new section should be sorted in
ASCII collating sequence. (Just use sort)
Thanks.

Kuat,
I filed bug 571722 that factors out your version API into a separate bug with a patch that is independent of the TLS 1.1 implementation. That way it can be reviewed, tested, and committed separately from the TLS 1.1 implementation. Please check out that bug along with the patch I submitted with it.
I also filed bug 571699, bug 571732, bug 571795, and bug 571797 which all affect ssl3_HandleRecord and/or ssl3_CompressMACEncryptRecord. Likely there are going to be conflicts and I'm working on rebasing your patch on top of the patches for those bugs and on merging my optimizations for reducing crSpec->decode and cwSpec->encode calls into your patch.

In reply to comment 23:
In order to minimize the merge conflict, or at least hasten the end of the
merge conflict, I believe I should give priority, as a reviewer, to the
patch associated with bug 571722. I will do that.

Comment on attachment 465703
TLSv11-v1.patch
I think it would be simpler and fewer lines of code to inline ssl3_PrependWithRandomR into the function that calls it.
>+ /*
>+ * As the RFC 4346 states
>+ * "The receiver decrypts the entire GenericBlockCipher structure and
>+ * then discards the first cipher block, corresponding to the IV
>+ * component."
>+ */
>+
>+ if (cipher_def->type == type_block &&
>+ crSpec->version >= SSL_LIBRARY_VERSION_3_2_TLS_1_1)
>+ ivLen = cipher_def->iv_size;
>+
>+ plaintext->len -= ivLen;
plaintext->len might be less than ivLen when the ciphertext length is less than ivLen. Because it is unsigned, this will cause plaintext->len to become very large, resulting in all kinds of badness. Similarly, bad things will happen when plaintext->len == ivLen, because this code will make plaintext->len == 0, which will cause plaintext->buf[plaintext->len - 1], i.e. plaintext->buf[-1], to be read later. You need to reject these too-short inputs (with a decode_error alert) before either of these things happens.
If the record was compressed, you need to set ivLen to zero after decompressing it into databuf->buf. Otherwise, the following line skips over the beginning of the decompressed data, and accesses near the end of databuf->buf would run past the end of the actual decompressed data:
>+ databuf->buf += ivLen;
I see that later you adjust databuf->buf to point to the start of the buffer again before returning, so that databuf->buf can be correctly freed later. But it is ugly to adjust the pointer to the start of a malloc()'d pointer like this. I suggest at least adding a comment about the importance of the later corrective subtraction of ivLen from databuf->buf.
I didn't look at the rest of the code. It looks like you modeled some of the ssl3_CompressMacEncrypt code on the patch I posted earlier. Did you find any bugs in it? IIRC, there was an error in it, but I don't remember what it was right now.
Note also this section of the TLS 1.1 spec: "TLS 1.1 implementations MUST NOT negotiate these [export] cipher suites in TLS 1.1 mode. However, for backward compatibility they may be offered in the ClientHello for use with TLS 1.0 or SSLv3-only servers. TLS 1.1 clients MUST check that the server did not choose one of these cipher suites during the handshake."

> It looks like you modeled some of the
> ssl3_CompressMacEncrypt code on the patch I posted earlier. Did you find any
> bugs in it? IIRC, there was an error in it, but I don't remember what it was
> right now.
>
Correct, I used your code that deals with encryption. There were some bugs with wrong offsets, but I believe I fixed them.
> Note also this section of the TLS 1.1 spec: "TLS 1.1 implementations MUST NOT
> negotiate these [export] cipher suites in TLS 1.1 mode. However, for backward
> compatibility they may be offered in the ClientHello for use with TLS 1.0 or
> SSLv3-only servers. TLS 1.1 clients MUST check that the server did not choose
> one of these cipher suites during the handshake."
I'm looking into it, and will update the patch when I'm done.

The TLS 1.2 specification [1] changed the requirements for generating initialization vectors in CBC mode substantially, compared to TLS 1.1 [2]. It is best to ignore the TLS 1.1 rules in favor of the TLS 1.2 rules, because the methods suggested in the TLS 1.1 specification are unnecessarily inefficient.
Appendix C of NIST SP 800-38A [3] (which defines CBC mode) gives two methods for generating suitable IVs. The first method is to use a secure PRNG to generate a new block. The second method is to encrypt (with the same encryption key used for the data) a block containing a counter. The current patch follows the TLS 1.1 RFC's recommendation, which effectively does both and is overkill. Consequently, you can safely improve the performance of the code by simply replacing the call to PK11_GenerateRandom with code that copies the record number into the IV block and zeros out the rest of the block.
[1] http://tools.ietf.org/html/rfc5246#section-6.2.3.2
[2] http://tools.ietf.org/html/rfc4346#section-6.2.3.2
[3] http://csrc.nist.gov/publications/nistpubs/800-38a/sp800-38a.pdf

Comment on attachment 466356
TLSv11-v4.patch
This API is about right, but the implementation of many of the range checks
is wrong. I will use this patch as a starting place for the next patch, and
will list Kuat as a contributor to it when I submit it.

(In reply to comment #37)
> Brian, I am mindful of your comment 32. At this point, I just want
> to see something working that is correct. Optimization can be done later.
It isn't just an optimization. It also relieves entropy pressure on the PRNG.
Regarding version range checks, see the patch and discussion in bug 571722, especially bug 571722 comment 4.

(In reply to comment #40)
> Wait, so Kuat's patch (which I have revised) actually was mostly YOUR patch,
> Brian?
Kuat and I had the idea to add TLS 1.1 support at the same time. Some parts of my patches were based on Kuat's code, and I think some of Kuat's code was based on mine. At one point I factored out the versioning parts of Kuat's patch into that bug.
I think it is better to figure out the versioning API and its implementation separately from implementing TLS 1.1, because that version-management code needs to be able to support TLS 1.2 well, without API breakage, when TLS 1.2 support is added. Also, we need to decide what to do about SSL 2, as it complicates the API.

This patch implements my proposal for the API. It's still mixed together
with the TLS 1.1 work. I'd like you, Brian, to look at the API thus
implemented and give feedback. Essentially I took what was there and
expanded it so that the same VersionRange interface also controls SSL2.

It is actually really easy to pull out the TLS-1.1-specific bits of this patch and apply them on top of the patch in bug 571722. I would prefer that to be done unless that patch in bug 571722 is unsuitable and unfixable. I think it will also make reviewing the TLS-1.1-specific bits much easier. (Plus, I cannot review Nelson's patch since it incorporates my code but AFAICT Nelson can review mine.) I hope to be posting the TLS 1.2 patches soon so it is pretty natural for me to take over this TLS 1.1 work from Kuat if he's too busy.
There were two reasons I had SSL 2.0 controlled separately in that patch:
1. Backward compatibility of the old (deprecated) interface. The intention with my patch in bug 571722 is that you can enable/disable SSL 2.0, SSL 3.0, and TLS 1.0 in any order and the results will be exactly the same as with current versions of NSS. For example: enable SSL 2.0, enable TLS 1.0, disable SSL 3.0. I do not know why someone would want to do that, but it was not too painful to support.
2. We will likely be deprecating and/or removing SSL 2.0 support soon, and having the SSL 2.0 flag separate makes it easier to see which parts of the code are sensitive to this change, while hopefully making the change less disruptive to the SSL3.0+ code paths.

> enable SSL 2.0, enable TLS 1.0, disable SSL 3.0.
I want to specifically disallow that. Ranges of enabled versions MUST be contiguous.
I want users of NSS's API to make one change where they stop using the version bits altogether and switch to the range-based API. Then I want them to make subsequent changes via the new range-based API to bring the minimum value up
to discontinue old versions.

(In reply to comment #44)
> > enable SSL 2.0, enable TLS 1.0, disable SSL 3.0.
> I want to specifically disallow that. Ranges of enabled
> versions MUST be contiguous.
1. Current versions of NSS don't enforce that.
2. Consider this sequence: disable all, disable SSL2, enable SSL2, disable TLS 1, enable TLS1, disable SSL3, enable SSL3. In current versions, this would leave all of SSL2, SSL3, and TLS1 enabled. With your patch, this would only leave SSL3 and TLS1 enabled.
> I want users of NSS's API to make one change where they stop using the version
> bits altogether and switch to the range-based API. Then I want them to make
> subsequent changes via the new range-based API to bring the minimum value up
> to discontinue old versions.
I agree.

(In reply to comment #45)
> 2. Consider this sequence: disable all, disable SSL2, enable SSL2, disable TLS
> 1, enable TLS1, disable SSL3, enable SSL3. In current versions, this would
> leave all of SSL2, SSL3, and TLS1 enabled. With your patch, this would only
> leave SSL3 and TLS1 enabled.
I forgot to add: sequences like this might come up when users can directly manipulate these settings (e.g. in Firefox about:config). Also, some products might have made assumptions (especially in their UIs) that these flags can be set independently. So, 100% backward compatibility for applications that use the old interface is really important.

Nelson, recently the desire for this has increased. What can I do to help get things rolling again? I would really like to use (a modified form of) the patch in bug 571722 + Kuat's patch + the efficiency improvements mentioned in comment 32.

As Jo Hermans noted in comment #48:
If Mozilla values the continued security of its products, it is imperative that TLS 1.1 and 1.2 be implemented sooner rather than later.
I suggest a bump to priority P2 (preferably P1).

Missing this feature effectively means that every secure HTTP connection I use is potentially compromised, because none of the browsers I use supports anything above TLS 1.0.
Even commenting on this bug requires logging in through TLS 1.0. (I don't even know if the server supports TLS 1.1.) So I cannot safely do e-mail and banking through the net, and I don't think I am alone, so it would make sense to elevate the priority of this bug to P1, or even "above" that, because it should have been implemented yesterday. (And I am installing Opera right now.)

@Adam Rak
I should add that the webserver you are connecting to also has to support it.
Let's take www.paypal.com as an example: https://www.ssllabs.com/ssldb/analyze.html?d=www.paypal.com
It does not support TLS/1.1 or TLS/1.2.
As I understand it, many IIS installations (Microsoft Windows webservers) have it disabled by default, almost no Apache installations support TLS/1.2, and only some installations of Apache support TLS/1.1.

Unfortunately the server support is very poor, but on the bright side, THIS server (which hosts this Bugzilla) does support TLS 1.1. (And ssllabs.com supports TLS 1.1/1.2!)
Many cite very poor browser support as the reason why they won't implement TLS/1.1 or 1.2.
So that's a good reason to implement it in Firefox.
IE market share: 38.9% -> implements it (only latest versions), hopefully working.
Firefox market share: 25.5% -> no implementation
Chrome 20.2% -> no implementation
(according to wikipedia, these percentages might be skewed a bit)
So if there is an implementation for Firefox and Chrome, then we can say that a significant percentage of browsers support it, so it's more likely to happen on the server side too. (And implementing it in Firefox would speed up the Chrome implementation too.) Someone has to make the first step, after all. And there already are some patches, so I don't worry too much about the browsers. But a little more "noise" to convince people to elevate the priority won't hurt.

ENOUGH!
READ THIS if you're unhappy with this bug's priority, e.g. dziastinux, Matt Fahrner, etc.
Preface: Please, please, READ up on Bugzilla Etiquette before commenting.
I'd mark this bug as major or P1, except:
1) The bug isn't assigned to me, and only the bug assignee should be changing the priority/importance flags. (I edited Bug 480514 - Implement TLS 1.2, which wasn't assigned to anyone, and even that is arguably contrary to Bugzilla Etiquette - see https://bugzilla.mozilla.org/page.cgi?id=etiquette.html or the link on the home page - and in hindsight wasn't a good idea.)
2) It's the only bug in bugzilla assigned to Kuat Eshengazin, so presumably it's what he's working on, irrespective of priority or importance, and I think he knows what he's doing.
3) I don't have the money to pay someone else to develop patches.
**Please do NOT reply to this post, except in private email to me.**
(Kuat, if you're short on time or other resources to fix this, please ask for help. Perhaps boosting the priority could help you get such help; perhaps not.) From the comments so far, this doesn't look like a simple enhancement to implement, and the latest patch (50K!) confirms that. THANK YOU Kuat, Nelson, Robert, and Brian, who we can see have made substantive contributions thus far to fixing this!
Apologies for this bugspam, hopefully it cuts down on further bug spam.

(In reply to dziastinux from comment #56)
> TLS 1.0 is no longer considered as secure. So this bug MUST marked as major.
Adding a feature is adding a feature, it's severity "enhancement". The severity says nothing about a bug's priority. A crash is by definition severity "critical", but a crash that happens only when the full moon corresponds with the spring equinox and the user holds down ESC for 20 seconds is not going to be high priority. Severity and priority are two separate measures. Priority setting tends to happen outside bugzilla because it involves value judgements and advocacy, all of which obscure the facts and work going on in a bug. Discussions happen in the mailing lists, plans are posted on wiki.mozilla.org.
"Stop the BEAST attack" is a separate issue, and its severity might well be major or critical. It would be a separate bug--and in fact there is one--because there are potentially multiple ways to stop that attack. Implementing TLS 1.1 is not the best short-term approach because that only helps for servers that support TLS 1.1, and even then only if we don't support downgrades--but that breaks most current SSL servers.
I removed comment 59 because it was not helpful toward the goal of getting this bug fixed.

WRT the IV generation strategy, as bsmith says, the requirement is that the IV be unpredictable. I'd want to consult a cryptographer before generating the IV as the encryption of the previous CBC residue and a predictable plaintext. It might be OK, but it doesn't match the CBC security model.
As I understand the API we have to work with, the only way to inject the IV is to encrypt a plaintext, so this means we effectively need to encrypt an unpredictable pre-IV to set the IV state. There are two major ways to generate the pre-IV:
1. Generate a random number.
2. Encrypt a counter (preferably with a totally different, random key).
I'm generally not too concerned with entropy exhaustion, so I tend to prefer #1, but #2 *would* have the advantage that it would only require 128-bits or so of extra entropy for the connection, rather than per packet. In terms of performance, I don't know whether PRNG operation is faster or slower than encryption in the PKCS#11 module, so I can't assess that.
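The two pre-IV strategies above can be sketched as follows. This is a hypothetical illustration, not NSS code: the Python standard library has no block cipher, so a truncated HMAC-SHA-256 stands in for the separate-key encryption of strategy #2, and `os.urandom` stands in for the PKCS#11 PRNG.

```python
import hashlib
import hmac
import os

BLOCK_LEN = 16  # one AES block; the pre-IV must be exactly one cipher block


def pre_iv_random() -> bytes:
    """Strategy #1: draw a fresh random pre-IV for every record."""
    return os.urandom(BLOCK_LEN)


def pre_iv_counter(key: bytes, record_counter: int) -> bytes:
    """Strategy #2: derive the pre-IV from a per-connection random key and
    a record counter, so only ~128 bits of entropy are consumed for the
    whole connection. Real code would encrypt the counter with a block
    cipher through PKCS#11; truncated HMAC-SHA-256 stands in here."""
    msg = record_counter.to_bytes(8, "big")
    return hmac.new(key, msg, hashlib.sha256).digest()[:BLOCK_LEN]
```

With a real block cipher, strategy #2 never repeats for distinct counters under one key, while #1 relies on the PRNG not repeating.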

Yes, you convinced me previously that we can use the "generate a random number and encrypt" strategy (#1) that Kuat used, and which is recommended in the TLS 1.1 specification (but not the TLS 1.2 specification). It is much simpler to implement than #2.

I'm not much of a cryptographer myself, but I have been reading up on IV-related stuff a bunch lately.
I am not sure how TLS (1.1 or otherwise) is using an IV, but IVs in general are required to *never repeat with the same key* and are often, but not always, also required to be unpredictable. (For instance, CTR mode requires the former but *not* the latter.) Generating random numbers does *not* guarantee no repeats with the same key, whereas encrypting a counter with a suitably-wide block cipher would (take care, though: for instance, a cipher with an 128-bit blocksize will never repeat a *128-bit* number, but the *64-bit halves* of that number might repeat). I agree that the key for this encryption should be totally different from the actual session key.

(In reply to Zack Weinberg (:zwol) from comment #63)
> Generating random numbers does *not* guarantee no repeats with the same key,
> whereas encrypting a counter with a suitably-wide block cipher would
> (take care, though: for instance, a cipher with an 128-bit blocksize will
> never repeat a *128-bit* number, but the *64-bit halves* of that number
> might repeat). I agree that the key for this encryption should be totally
> different from the actual session key.
We would generate the 128-bit AES key for the encryption in option #2 using the same mechanism we would use for generating a random IV for #1. For option #1, the problematic case is the one where the PRNG repeats itself. But this would also be a problem for option #2, because then the AES key would be the same in two instances.

Zack,
The security conditions here are complicated, but....
1. With CBC, encrypting two blocks with the same IV leaks information IFF those plaintext blocks are also the same. You learn that the blocks were identical. Otherwise you learn nothing.
2. With CBC, the issue isn't JUST the IV, but any two plaintext blocks encrypted with identical previous blocks. So, having guaranteed unique IVs doesn't help as much as you would otherwise think.
3. The limit on the number of blocks you can encipher safely due to collisions is thus set by point #2 at about 2^{b/2} where b is the block size. Since the number of blocks you encrypt is generally greater than the number of records, this limit is tighter than the number of records you can encrypt, making random IV generation generally safe.
This is totally different from CTR, where the IV represents the state of the stream cipher and IV repetition is a disaster.
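Point #3 can be made concrete with a little arithmetic. This is a sketch with an invented helper name, not anything from the patch:

```python
def cbc_safe_blocks(block_bits: int) -> tuple[int, int]:
    """Approximate block count at the CBC birthday bound 2^(b/2),
    and the corresponding amount of data in bytes."""
    blocks = 2 ** (block_bits // 2)
    return blocks, blocks * (block_bits // 8)


# 64-bit blocks (e.g. 3DES): collisions become likely after ~2^32
# blocks, i.e. ~32 GiB of data. 128-bit blocks (AES): ~2^64 blocks.
```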

You're drawing from the entropy pool far less often in case #2, though, reducing the probability of a collision. On the other hand, a collision is far worse in case #2: *all* the IVs on that connection are now predictable.
Either way, for 128-bit numbers the collision probability is 2^-64, not 2^-128, because of the birthday paradox. Maybe that's still small enough to not worry about.
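The birthday-paradox figure can be checked numerically with the standard approximation. This is a sketch; the function name is invented here:

```python
import math


def birthday_collision_prob(n_samples: int, value_bits: int) -> float:
    """P(at least one repeat among n uniform draws from a 2^b space),
    via the standard approximation 1 - exp(-n(n-1) / 2^(b+1))."""
    space = 2.0 ** value_bits
    return 1.0 - math.exp(-(n_samples * (n_samples - 1)) / (2.0 * space))


# Any two random 128-bit values collide with probability ~2^-128, but
# after 2^64 draws a collision is roughly an even-odds event.
```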

If we are concerned about entropy exhaustion then we should be more concerned about features like bug 440046, which proposes to let web applications extract as many random values out of the PRNG as they want.

(In reply to Eric Rescorla from comment #68)
> I don't know the details of the NSS PRNG, but a well-implemented PRNG can
> safely generate an arbitrary number of values from a suitably sized initial
> entropy pool.
Eric, this is the thing I am farthest from being useful on.
(In reply to Brian Smith (:bsmith) from comment #62)
> Yes, you convinced me previously that we can use the "generate a random
> number and encrypt strategy" (#1) that Kuat used, and which is recommended
> in the TLS 1.1 specification (but not the TLS 1.2 specification). It is
> much simpler to implement than #2.
This is what I am planning to do, unless there are any further objections.

(In reply to Brian Smith (:bsmith) from comment #69)
> (In reply to Eric Rescorla from comment #68)
> > I don't know the details of the NSS PRNG, but a well-implemented PRNG can
> > safely generate an arbitrary number of values from a suitably sized initial
> > entropy pool.
>
> Eric, this is the thing I am farthest from being useful on.
I haven't studied this completely because the code paths to the PRNG are so complicated.
However, it looks to me like the internal PRNGs, at least, are a fairly conventional CSPRNG
and therefore should allow you to extract a large number of values safely. Perhaps Nelson,
Bob, or WTC can weigh in here for certainty.

Here is the patch that I will discuss with Wan-Teh tomorrow. Using this patch, plus the patch in bug 733642, I was able to make TLS 1.1 connections using AES-based cipher suites. I have not finished modifying the test programs and I haven't tested the code with compression enabled yet.

We were concerned that adding an additional call to crSpec->decode in ssl3_HandleRecord and/or an additional call to cwSpec->encode in ssl3_CompressMacEncryptRecord might be bad for performance. But, the BEAST attack mitigation has much more overhead than that, and we accepted that overhead. Consequently, even the most naive implementation of TLS 1.1 in libssl should give better performance than TLS 1.0 or SSL 3.0.

See http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/ssl/ssl3con.c&rev=1.168#6315
(In reply to Robert Relyea from bug #236245 comment #20)
> The #ifdef PARANOIA code probably should be #ifdef
> REMOVABLE_INSTALLABLE_CIPHERS or something like that. For servers it's fine.
> Currently we don't have any REMOVABLE_INSTALLABLE_CIPHERS (since we removed
> FORTEZZA), so we are OK for clients. If we ever have a case where a
> client-like application needs to negotiate with as a server (AIM doing peer
> to peer, for instance), and we ever have any conditional cipher suites that
> may be based on removable tokens (like FORTEZZA), we may need to turn this
> code on again (OK to have it off right now).
I believe that this is a good time to add more "PARANOID" behavior, as we will be using libssl in a peer-to-peer manner, and the cipher suite selection logic is more dynamic (since it depends on the TLS version, and will depend even more on the TLS version in TLS 1.2).

(In reply to Wan-Teh Chang from comment #75)
> Thank you for explaining the patch to me yesterday. I reviewed everything
> except the ssl3_CompressMACEncryptRecord function. I recommend we go for
> correctness and ease of understanding first, and then go for optimizations.
> I am especially worried about the changes related to ss->gs.buf.len,
> ss->gs.readOffset and ss->gs.writeOffset because I am not familiar with
> those fields.
The patch I just uploaded is much simpler. It doesn't change any of the processing of ss->gs, and it keeps the IV processing self-contained, in both ssl3_HandleRecord and ssl3_CompressMACEncryptRecord.
> Nit: align the '= 0' part with the other local variable initializations
> in this function.
:( I suggest that we avoid such alignment going forward. (In this function, there are already multiple styles of alignment.)
> Should ivLen == cText->buf->len be allowed? Does that mean zero-length
> data?
I changed the check so that it fails here only when cText->buf->len is less than ivLen, instead of less-than-or-equal. Later, we will fail because we don't have enough ciphertext for the MAC and/or padding. By making this change, we execute the same code path for TLS 1.1 that we previously did in these situations.
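As a hypothetical sketch (Python, not the actual libssl code) of the check being discussed: parsing fails only when the record cannot even hold the explicit IV, while a record that is nothing but an IV passes here and is rejected later by the MAC/padding checks.

```python
def strip_explicit_iv(ciphertext: bytes, iv_len: int) -> tuple[bytes, bytes]:
    """Split a TLS 1.1 block-cipher record into (explicit IV, rest).
    Raise only when the record is shorter than the IV itself."""
    if len(ciphertext) < iv_len:
        # Corresponds to sending a decode_error alert: this is a
        # parsing problem, not a cryptographic one.
        raise ValueError("record shorter than explicit IV")
    return ciphertext[:iv_len], ciphertext[iv_len:]
```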
> If the crSpec->decode() call below will fail when cText->buf->len is
> shorter then ivLen, I'd prefer that we just let it detect and report
> this error.
The PKCS#11 semantics allow you to decrypt/encrypt partial blocks. The module will update its internal state and will output the blocks in the decrypt/encrypt call that supplies the rest of the block. I did not check exactly what crSpec->decode does to see if it is more strict than PKCS#11, so I left the check in.
> I suspect that bad_record_mac (for decryption error) is
> more appropriate than decode_error for this condition.
decode_error is better because this is a parsing problem, not a cryptography problem.
> The current approach requires us to check every use of ->data and ->len
> in this function (ssl3_HandleRecord).
The new patch requires us to inspect uses of cText->buf->buf and cText->buf->len, which is much easier to do because they are used in fewer places.

This patch implements the restrictions on export cipher suites in TLS 1.1.
It turns out that libssl only implements two of the prohibited cipher suites. I have added comments in the code to explain how/why export cipher suites are blocked.
Perhaps I am overlooking something, but it seems like the code to double-check the version, cipher suite, and compression method used for a cached session could be simplified to avoid duplicate code.
I made the checks that were previously #ifdef PARANOID unconditional, based on the comments by rrelyea quoted above in a previous comment.
With this patch, and if the test suite is modified so that tstclnt and selfserv always enable all versions of TLS that libssl supports, then we will get test failures for the tests for the two export cipher suites that we support in libssl. I haven't completed the modifications to the test suite that will enable us to require these tests to successfully handshake when TLS 1.1+ is disabled and to require these tests to fail to handshake when TLS 1.1+ are enabled.

This is what I checked in. I made the following changes.
1. Brian added a SSL_MAX_IV_BYTES macro, next to the existing
SSL_MAX_MAC_BYTES macro in sslimpl.h. As far as I can tell,
SSL_MAX_MAC_BYTES is only used in SSLv2 code, and its value
of 16 is too small for HMAC-SHA-1. I found an existing macro
named MAX_IV_LENGTH, so I just use it.
2. The "See also SSL_MAX_IV_BYTES" comment that Brian added to
sslinfo.c is wrong because the code in question defines key
sizes of block ciphers, not their IV/block sizes.
3. I added code to ssl3_CompressMACEncryptRecord to ensure there
is room in wrBuf->buf before calling PK11_GenerateRandom. This
is guaranteed by ssl3_SendRecord, so I'll be happy to remove
this check.
4. I added code to ssl3_SendRecord to ensure that wrBuf->space
has room for the explicit IV for TLS 1.1 and later before
passing wrBuf to ssl3_CompressMACEncryptRecord.
5. I removed the output length checks after crSpec->decode()
calls in ssl3_HandleRecord. They are not necessary, unless
we don't trust crSpec->decode(). I am willing to add these
checks back because similar checks for cwSpec->encode() exist
in ssl3_CompressMACEncryptRecord. Sigh.
Patch checked in on the NSS trunk (NSS 3.13.4).
Checking in ssl3con.c;
/cvsroot/mozilla/security/nss/lib/ssl/ssl3con.c,v <-- ssl3con.c
new revision: 1.170; previous revision: 1.169
done
Checking in sslimpl.h;
/cvsroot/mozilla/security/nss/lib/ssl/sslimpl.h,v <-- sslimpl.h
new revision: 1.97; previous revision: 1.96
done
Checking in sslproto.h;
/cvsroot/mozilla/security/nss/lib/ssl/sslproto.h,v <-- sslproto.h
new revision: 1.17; previous revision: 1.16
done

Comment on attachment 605281
Implement TLS 1.1, except for restrictions on export cipher suites (as checked in), by Brian Smith
Nelson, Bob: I have a question for you.
In sslimpl.h, we have:
> typedef enum { type_stream, type_block } CipherType;
>
>+/* XXX Why is MAX_IV_LENGTH so big? */
> #define MAX_IV_LENGTH 64
This macro definition has been there since rev. 1.1. Do you know
why it is so big? All the block ciphers supported by NSS have a
block/IV size of at most 16 bytes (128 bits).

I don't know for sure, but at a guess it was probably a mistake. Before AES the biggest block size we supported was 8 bytes, or 64 bits. The define may have been chosen to cover that blocksize, but missed the bits to bytes conversion (a common mistake). In this direction the mistake wouldn't have caused any issues, so it wouldn't have been caught.
bob

Bob: thank you very much for the info! Could you
please review this cleanup patch?
I reduced MAX_IV_LENGTH to 24 as you suggested.
In ssl3_HandleRecord, I split the IV length check
into two checks. The first check detects an NSS
bug (if MAX_IV_LENGTH is too small), so it sets
SEC_ERROR_LIBRARY_FAILURE. The second check is
a decryption failure, so it sends a bad_record_mac
alert to the peer. The TLS RFC and current NSS
usage suggest that the decode_error alert applies
to messages rather than records.

Comment on attachment 605626
Reduce MAX_IV_LENGTH to 24; misc fixes
r+ OK, I see the context. This is a patch on top of 1.1 (I was trying to figure out why the IV was in the protocol).
NOTE: Could we use MAX_IV_LENGTH in the ssl3SidKeys now as well?

Comment on attachment 605626
Reduce MAX_IV_LENGTH to 24; misc fixes
Bob: I like your suggestion of using MAX_IV_LENGTH in ssl3SidKeys.
When I tried to do that, I saw this comment in the ssl3SidKeys
structure:
typedef struct {
...
} ssl3SidKeys; /* 100 bytes */
I looked into it further and found that we carefully pad the
sidCacheEntry structure, which contains ssl3SidKeys, to a
multiple of the cache line size in sslsnce.c:
http://bonsai.mozilla.org/cvsblame.cgi?file=mozilla/security/nss/lib/ssl/sslsnce.c&rev=1.59&mark=151,159-162#146
So using MAX_IV_LENGTH in ssl3SidKeys may cause us to break that
performance optimization inadvertently if we change MAX_IV_LENGTH.
Sigh.
Patch checked in on the NSS trunk (NSS 3.13.4).
Checking in ssl3con.c;
/cvsroot/mozilla/security/nss/lib/ssl/ssl3con.c,v <-- ssl3con.c
new revision: 1.171; previous revision: 1.170
done
Checking in sslimpl.h;
/cvsroot/mozilla/security/nss/lib/ssl/sslimpl.h,v <-- sslimpl.h
new revision: 1.98; previous revision: 1.97
done

To my surprise, the IV members in ssl3SidKeys and SSLWrappedSymWrappingKey
are not used. I found that we have an assertion for the expected, carefully
padded size of sidCacheEntry. So it is safe to remove the unused members.

I edited Brian's patch and checked it in on the NSS trunk (NSS 3.13.5).
I omitted most of the "never implemented" comments because there are
many other cipher suites that NSS never implemented, and adding the
"never implemented" comments to some but not all of them could be
confusing.
I omitted the #ifdef PARANOID code because it is very inefficient.
ssl3_config_match_init() sets suite->isPresent for *all* cipher suites,
but we only need to set the isPresent field of the cipher suite in sid.
We should add a more lightweight version of ssl3_config_match_init()
for a single cipher suite.
Related to that, I kept the original code that optimizes the compression
method and cipher suite selection for session resumption (no need for
nested for loops). Although it is extra code, it makes session resumption
faster.
Checking in SSLerrs.h;
/cvsroot/mozilla/security/nss/lib/ssl/SSLerrs.h,v <-- SSLerrs.h
new revision: 1.20; previous revision: 1.19
done
Checking in ssl3con.c;
/cvsroot/mozilla/security/nss/lib/ssl/ssl3con.c,v <-- ssl3con.c
new revision: 1.175; previous revision: 1.174
done
Checking in sslerr.h;
/cvsroot/mozilla/security/nss/lib/ssl/sslerr.h,v <-- sslerr.h
new revision: 1.21; previous revision: 1.20
done

Comment on attachment 606054
Remove the unused IV members of ssl3SidKeys and SSLWrappedSymWrappingKey
r+ rrelyea
I don't believe different versions of NSS could possibly share cache entries (unless they both live in the same process, in that case you have a world of other problems). So this change should be fine.
bob

I can't believe this is still not fixed! Just because you are too lazy to move away from NSS, even though it's been nearly a year since BEAST broke the TLS1.0 protocol!
FIX IT!
notify me when it's fixed. and the 1.2 billion people who browse and pay for things online.

(In reply to syragon@gmail.com from comment #94)
> I can't believe this is still not fixed! Just because you are too lazy to
> move away from NSS, even though it's been nearly a year since BEAST broke
> the TLS1.0 protocol!
Not to mention that TLS 1.1 has been a standard since 2006 and TLS 1.2 since 2008. ;)

wtc's away, I'm not sure what else has to be checked in.
For others, patience. There is a BEAST fix for SSL3 and TLS already. Until recently, there has been very little call for TLS 1.1. Now that there is, it's actually being worked on.
Also note: this bug is for TLS 1.1 in NSS. The version of NSS with TLS 1.1 still needs to be integrated into mozilla once this is complete.

(In reply to Eric Rescorla from comment #98)
> Note also that ffox will need to decide whether to fall back when servers
> appear to be TLS 1.1 noncompliant. If you do, then this doesn't help that
> much.
Servers are ready for TLS 1.1 and 1.2. Firefox should warn about TLS 1.0 servers.
Otherwise, support for TLS 1.1/1.2 in Firefox will arrive too late, once vulnerabilities in TLS 1.1/1.2 have themselves been found.

If by "Servers are ready for TLS 1.1 and 1.2" you mean that you can get server software that supports TLS 1.1 and 1.2, that's true. If you mean that big servers support it, that's not really true. I just tested openssl (which supports 1.2) against some big sites:
Amazon: TLS 1.0
Apple: TLS 1.0
Google: TLS 1.2
Microsoft: TLS 1.0
The notion that Firefox is going to warn or refuse to connect to Amazon seems like a non-starter.

TLS 1.0 is now not secure. Servers won't use TLS 1.1/1.2 because there is no browser to support it, and browsers won't support TLS 1.1/1.2 because there is no server to support it... so we keep using TLS 1.0 forever.

(In reply to lambdav from comment #101)
This really isn't an appropriate discussion for a bug, but since this is such
a hot button, I'm going to respond in-line so that other people reading
this bug can see the state of affairs.
> TLS 1.0 is now not secure.
This isn't correct. A Rizzo/Duong-style attack can only be mounted in a specific
range of circumstances, namely
(1) a CBC cipher suite w/o countermeasures
(2) a client-side technology that allows tightly controlled repeated cross-origin interactions
on the same SSL/TLS flow.
Firefox has countermeasures for TLS 1.0, and it's not clear that there are affected
client-side technologies that use NSS. Note that the Rizzo/Duong attack, as far
as I can tell, used a Java applet which (a) has its own SSL/TLS stack and (b) depended
on a cross-origin enforcement problem in Java. For some more details see:
http://www.educatedguesswork.org/2011/09/security_impact_of_the_rizzodu.html
http://www.educatedguesswork.org/2011/11/rizzoduong_beast_countermeasur.html
> Servers won't use TLS 1.1/1.2 because there is no
> browser to support it, and browsers won't support TLS 1.1/1.2 because there
> is no server to support it... so we keep using TLS 1.0 forever.
I'm not saying that Firefox shouldn't support TLS 1.1. I'm 100% in favor of
that. In fact, if you look at the history of this bug, you can see that I was one of
the people helping develop the patch and moreover that DTLS support depends
on this, so I'm doubly in favor of it.
However, what Bob says is correct:
(1) It's not an emergency.
(2) Once this bug is closed, we still need to deal with actually making Firefox
do TLS 1.1. (https://bugzilla.mozilla.org/show_bug.cgi?id=733647)
(3) When we do that, it's most likely that Firefox will at least temporarily fall
back to TLS 1.0 when presented with an apparently non-compliant server.
This negates defense against an active attacker, which is the kind
required to mount a Rizzo/Duong style attack. This is actually a technical
question we need to resolve before turning on TLS 1.1.
(4) Even if we *were* to refuse to connect to any apparently non-compliant
TLS 1.0 servers, there would be a huge number of TLS 1.0 servers still
extant and nominally vulnerable to this attack (modulo the conditions
I listed above.)

(In reply to Eric Rescorla from comment #100)
> The notion that Firefox is going to warn or refuse to connect to Amazon
> seems like a non-starter.
What would make sense is this: Firefox keeps a database of the last TLS version that a given server supported, and issues a warning when the negotiated TLS version is lower than it was the last time that site was visited, because in that case a MITM attack using a TLS version downgrade might be in progress.
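That suggestion amounts to version pinning; a minimal sketch follows, with hypothetical names (this is not Firefox code):

```python
# Maps hostname -> best TLS version seen so far, using the on-the-wire
# version numbers (0x0301 = TLS 1.0, 0x0302 = TLS 1.1, 0x0303 = TLS 1.2).
_best_seen: dict[str, int] = {}


def version_looks_ok(host: str, negotiated: int) -> bool:
    """Return False (i.e. warn the user) when a host negotiates a lower
    TLS version than it did on a previous visit, since that pattern is
    consistent with a MITM version-downgrade attack."""
    best = _best_seen.get(host, 0)
    if negotiated < best:
        return False
    _best_seen[host] = negotiated
    return True
```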

In the discussion thread in the corresponding Red Hat bug, folks have wondered whether the NSS test tools (tstclnt and selfserv) will add options to disable SSL/TLS versions before 1.1, similar to the current -2 / -3 / -T ones. This would enable trying to connect with only TLS 1.1 enabled and seeing which protocol version and cipher the connection uses. So far I haven't seen any changes to the test tools or the ssl shell test script.

Actually it should be part of this bug (part of the feature is updating the tools/tests).
There is an unreviewed and unrequested for review patch to the tools in this bug (first patch currently) attached by Kuat.
bob

Is that patch ready for review? Comment 38 (latest mentioning) does not indicate a state for this patch, while earlier comments indicate that “it breaks existing ssl tests because '-T', '-2', '-3', '-S' options with this patch behave slightly different.” Can somebody familiar with the concerning code please check in which way it behaves differently and maybe even fix it (however, an analysis will be useful, too)?
Would it be helpful to break up this bug into several smaller ones?

I don't like the idea that we redefine the meaning of the old -2 -3 flags, because the existing tests might assume the existing behaviour of such scripts.
If we remove the old flags and replace them with a new API, we can be certain that any existing test gets rewritten with the intended meaning.
But I think this bug should be focused on the TLS 1.1 protocol implementation. I'll file separate bugs for "enhance tools to support TLS 1.1" and "adjust tests to cover TLS 1.1" and make them block this bug.

According to https://www.ssllabs.com/ssltest/analyze.html?d=dev.drrr.us, https://dev.drrr.us/ is a valid test case.
Parity with other browsers:
- Chrome 22 loads it nicely, out of the box
- Opera 12.10 loads it only after explicitly enabling TLS 1.1
- IE9 loads it only after explicitly enabling TLS 1.1
Why would Opera and IE support it, but not have it enabled by default? Are they afraid of poorly implemented servers? Hopefully Chrome will weed out bad TLS 1.1 servers.

There are a whole bunch of broken software implementations out there, including servers from Microsoft. You will need to reconnect using TLS 1.0 if connecting with TLS 1.1 or 1.2 failed.
When openssl added support for TLS 1.1 and 1.2 we saw a whole bunch of problems trying to connect to various sites. This includes:
- Announcing you support TLS 1.0 to 1.2 in the ClientHello and the server just closes the connection or does something else weird. It's supposed to send a 1.0 ServerHello back if that's the highest it supports. You can find many examples of that on ssllabs under "TLS version intolerance".
- Connections that hang when the ClientHello is bigger than 512 bytes. This is only a problem for openssl with TLS 1.2 because it has a large list of supported ciphers that it's sending.
Kurt
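The client-side workaround Kurt describes (retrying the handshake at successively lower versions) can be sketched like this. `do_handshake` is a hypothetical callable that raises OSError on failure; as noted in earlier comments, such fallback negates downgrade protection against an active attacker.

```python
TLS_1_2, TLS_1_1, TLS_1_0 = 0x0303, 0x0302, 0x0301


def connect_with_fallback(do_handshake, versions=(TLS_1_2, TLS_1_1, TLS_1_0)):
    """Try the highest protocol version first; on a handshake failure
    (e.g. a version-intolerant server), retry with the next lower one."""
    last_err = None
    for version in versions:
        try:
            return do_handshake(version)
        except OSError as err:
            last_err = err
    if last_err is not None:
        raise last_err
    raise OSError("no protocol versions to try")
```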

TLS 1.1 is now available in the new NSS 3.14 release.
Marked this NSS bug fixed.
Bug 733647 is about using TLS 1.1 in Gecko (Firefox, Thunderbird),
on by default. Almost all of the necessary workarounds are performed
by NSS. The only workaround that must be performed by an application
is the protocol version fallback when the handshake fails. (The
workarounds in NSS do not require protocol version fallback.)
I suggest voting for bug 733647 but please refrain from inundating
that bug with comments, so that developers can use that bug report
to plan for and discuss work easily. Thanks.

Status: ASSIGNED → RESOLVED

Last Resolved: 7 years ago

Resolution: --- → FIXED
