
From David Dahl's weblog: "Good news! With a lot of hard work – I want to tip my hat to Ryan Sleevi at Google – the W3C Web Crypto API First Public Working Draft has been published.
If you have an interest in cryptography or DOM APIs and especially an interest in crypto-in-the-DOM, please read the draft and forward any commentary to the comments mailing list: public-webcrypto-comments@w3.org"
This should be helpful in implementing the Cryptocat vision. Features include a secure random number generator, key generation and management primitives, and cipher primitives. The use cases section suggests multi-factor auth, protected document exchange, and secure (from the) cloud storage: "When storing data with remote service providers, users may wish to protect the confidentiality of their documents and data prior to uploading them. The Web Cryptography API allows an application to have a user select a private or secret key, to either derive encryption keys from the selected key or to directly encrypt documents using this key, and then to upload the transformed/encrypted data to the service provider using existing APIs."
Update: 09/19 00:01 GMT by U L: daviddahl commented: "I have built a working extension that provides 'window.mozCrypto', which does SHA2 hash, RSA keygen, public key crypto and RSA signature/verification, see: https://addons.mozilla.org/en-US/firefox/addon/domcrypt/ and source: https://github.com/daviddahl/domcrypt I plan on updating the extension once the Draft is more settled (after a first round of commentary & iteration)"

Chrome will probably put in an update which contains this when nobody's looking. Firefox will update two weeks after Chrome. And IE will take another two years, and their interface for it will be completely broken. Opera will have already had it implemented a month before everybody else, but nobody cares because nobody uses Opera.

We have Microsoft, Google and Mozilla all deeply involved in the Working Group. I expect this will be a "webkit" patch, and hopefully land in all webkit browsers. Some initial experimentation has been done by me in Gecko in bug 649154: https://bugzilla.mozilla.org/show_bug.cgi?id=649154 [mozilla.org]

Just wondering: how would you authenticate yourself with your browser? A username/password? If not, what would happen if someone else used your browser and had access to everything of yours?

You will create keypairs and exchange public keys via a web app. Via the API, you will be able to create digital signatures to help with user verification.
This API is not being promoted as a silver bullet for security and privacy, however, when used in conjunction with other browser features like CSP ( http://www.w3.org/TR/CSP/ [w3.org] ) - and I imagine new browser features we still need to figure out (perhaps secure input and reading widgets), we hope to enable more secure web applications.
I want to underscore

Most users will like the convenience of a single-password model, but this time that password never leaves the device you are using. It's still at risk from keyloggers, just like before.

You could, if you wanted, secure your own secret keyring using a mixture of methods, such as a combined smartcard, password and biometrics: the biometric code unlocks data on the smartcard, the smartcard provides part of the data to the browser, and the password entered into the computer completes

No, the Cryptocat Vision statement explains it a lot better.
Basically it's for when you're so paranoid that you fear even your cloud service app provider.
For example, you go and use Cloud Doc Editor and write some docs and save them locally...
But what about the remote server? What's it doing with that data? Is it making copies?
Could it know you write erotic fan-fic about Captain Picard having sex with Rainbow Dash?

Basically it's for when you're so paranoid that you fear even your cloud service app provider.

Maybe. The W3C draft lists "Cloud Storage" as one of its use cases [w3.org], but remember that the app provider is also delivering the JavaScript that runs the decryption and loads up the DOM, so it could intercept the plaintext or decryption key if it wished. It doesn't protect against a malicious cloud service app provider, but it does make it easier for them to secure against data breaches (if their backups were stolen, for instance) and/or to rely on third-party storage providers.

If the server does the encryption, then the server has to see the unencrypted content.

If the client does the encryption, the server doesn't have to see the unencrypted content.

Also, if the server does the work and you have a million clients, then the server has to do a million units of work rather than the clients each doing one unit of work. This can make the server more impacted by traffic spikes and provide a less-consistent and sometimes lower-quality user experience than just offloading that work to the client.

Put another way, it's more expensive (more CPU = more $$) for the server operator, who often owns the app. So there's an incentive to build apps in a way that offloads the work to clients.

The server can't decrypt the page for you. That would eliminate the entire point of encrypting the page in the first place. The server encrypts the page and gives it to you, then your browser decrypts it using this interface specified by the W3C. Are you actually dumb, or is that just a hobby?

It was because NearlyFreeSpeech doesn't support HTTPS, and I wanted to implement some sort of encryption. So, I figured that my server could encrypt pagelets and send them, and then the client could use a previously-established key to decrypt the pagelets, attaching them to the DOM structure in a logical way. The problem is, since JavaScript explicitly disallows XSS, I couldn't figure out a way to contact a separate key authority server. This meant that however I did it, I'd be (more) vulnerable to a man-in-the-middle attack.

Looking this over, it looks like this specification doesn't solve that issue. I know that key authorities can be compromised, but it's better to require two points of failure rather than one.

The problem is, since JavaScript explicitly disallows XSS, I couldn't figure out a way to contact a separate key authority server.

JavaScript doesn't "explicitly disallow XSS". Most browsers (through implementations of the still-in-draft Content Security Policy, and, for IE, additionally through its own "XSS filter") include means of restricting XSS, but those browsers also allow pages to control whether and how those XSS-limiting features are applied.
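For reference, a CSP policy is delivered as an HTTP response header; a minimal policy restricting all resources (scripts included) to the page's own origin might look like this (the values here are just illustrative):

```
Content-Security-Policy: default-src 'self'; script-src 'self'
```

Pages without such a header get no CSP restrictions at all, which is what "pages control whether and how those features are applied" means in practice.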

Expanding on that JSONP mention to hopefully save someone a googling...

XMLHttpRequest calls are subject to the same-origin policy, so they can't be used for cross-site requests. SCRIPT tags, however, don't have this restriction, even for tags that are dynamically created using JavaScript. JSONP is a trick that leverages this to make cross-site requests.

The main limitation is that JSONP can't be used to call non-JSONP web services. So changes to the third-party service may be needed in order to support JSONP.
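The trick in miniature (helper names here are mine; the key point is that the server must cooperate by wrapping its JSON response in a call to the callback named in the query string, e.g. responding with `jsonp_cb_x({"user": "alice"});`):

```javascript
// Build the cross-site script URL carrying the callback name.
function buildJsonpUrl(endpoint, callbackName) {
  const sep = endpoint.includes('?') ? '&' : '?';
  return endpoint + sep + 'callback=' + encodeURIComponent(callbackName);
}

// Browser-only part: inject a <script> tag, which the same-origin
// policy does not block, and receive the data via a global callback.
function jsonp(endpoint, onData) {
  const cbName = 'jsonp_cb_' + Math.random().toString(36).slice(2);
  window[cbName] = function (data) {
    delete window[cbName];              // clean up the global
    script.parentNode.removeChild(script);
    onData(data);
  };
  const script = document.createElement('script');
  script.src = buildJsonpUrl(endpoint, cbName);
  document.head.appendChild(script);    // bypasses same-origin policy
}
```

Note that this hands the third party full script execution in your page, which is exactly why it's a poor fit for anything security-sensitive.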

Uh... JavaScript has allowed cross-site XHR for going on four years now. It does, however, require appropriate configuration on the server you're contacting. The bigger problem with that design is that if your web hosting server doesn't support HTTPS, how will the third-party server handing out authentication tokens set the token on the server side?
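The "appropriate configuration" being referred to is CORS: the third-party server opts in by echoing an allowed origin in a response header. Roughly (hostnames made up):

```
GET /token HTTP/1.1
Host: auth.example.com
Origin: https://myapp.example.com

HTTP/1.1 200 OK
Access-Control-Allow-Origin: https://myapp.example.com
```

Without that response header, the browser performs the request but refuses to expose the response to the calling script.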

No, this is better handled through a DH key exchange. Then both sides have a shared symmetric key, and both sides can store it locally (with client-side stora

I was thinking about this originally in January of 2011, and I remember finding scant mentions of XHR but no good "what is this and how do I do it" documentation. I was originally thinking of a DH key exchange, but that requires re-establishing the key each session, which leaves each session vulnerable to MitM, or else using HTML5 features that were not widespread two years ago.

Right. HTML5 local storage is a fairly recent addition. You might also have been able to use a cookie with the "secure" flag set, which means the cookie is sent only over HTTPS connections, but AFAIK can be accessed in JavaScript code locally. I'm not certain whether such cookies are accessible through JavaScript that arrived over unencrypted HTTP, though, so that might not work.

Regarding cross-origin XHR, it's pretty straightforward. It works just like regular XHR. The only difference is that the ser

Right. HTML5 local storage is a fairly recent addition. You might also have been able to use a cookie with the "secure" flag set, which means the cookie is sent only over HTTPS connections, but AFAIK can be accessed in JavaScript code locally. I'm not certain whether such cookies are accessible through JavaScript that arrived over unencrypted HTTP, though, so that might not work.

You're supposed to be able to mark a cookie as being unavailable to Javascript (well, as being only for use with HTTP connections; secure transport of the cookie is an orthogonal attribute) but that's both something I wouldn't rely on working and also easy to disrupt from JS; there's nothing to stop any cookie from being overwritten with something else. Cookies aren't designed for deep security.
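The two attributes being discussed are set independently on the cookie (the cookie name and value here are illustrative):

```
Set-Cookie: session=abc123; Secure              <- sent only over HTTPS
Set-Cookie: session=abc123; HttpOnly            <- hidden from document.cookie
Set-Cookie: session=abc123; Secure; HttpOnly    <- both
```

`Secure` governs transport, `HttpOnly` governs script access; neither stops a script from simply setting a new cookie with the same name.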

How do they mitigate these inherent security problems of the JavaScript platform in the API draft? With XSS, I can always overwrite the browser's crypto API object, replacing it with a rogue implementation.

My understanding has been that JavaScript in its present form is not a viable platform for cryptography.
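The overwriting concern is easy to demonstrate: any script that runs first in the page can shadow a global with a rogue object. A contrived sketch (in a real attack this would target `window.crypto` itself):

```javascript
// A rogue script injected via XSS installs its own "crypto" object
// before the real application code runs.
const rogueCrypto = {
  getRandomValues(arr) {
    arr.fill(4);            // attacker-chosen "randomness"
    return arr;
  },
};

// Application code that trusts the ambient object gets predictable
// output, silently breaking key generation, nonces, etc.
const nonce = rogueCrypto.getRandomValues(new Uint8Array(8));
```

This is why the draft's CSP integration and opaque key handles matter: they shrink what a successful injection can actually steal.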

Actually, I think having crypto as part of the browser has a bigger chance of success than just implementing the crypto in Javascript, as some people clearly have already done. You don't want to implement a cryptographically secure pseudorandom number generator in Javascript; it will never be secure.

CSP will be a huge help in reducing attack vectors. Another thing is the key material being unavailable in the DOM. Current JS libraries do not have the option of making all key references opaque and truly hiding the private and secret key material from the DOM. This spec allows the browser to only ever reference key IDs instead of the actual key material.

This. Providing proper crypto primitives in the JS standard library is a good thing, I suppose, but it doesn't solve any (and I do mean any) of the underlying problems with things like CryptoCat. CC actually had quite good crypto primitives (implemented from scratch in JS, but apparently implemented well).

The problem was that every time a user wanted to use CC, they had to download the page (and its JS) from the CC server... and there are so many ways to attack that. An obvious one is to insert a backdoor in

That's the point of this API, I imagine. If it is included in the browser, then there is nothing to intercept and replace. It can also have some privileged status so that its methods can't be overwritten by other scripts.

Um... no. No part of any of the attacks I described requires any interception or replacement of the crypto code (I thought I made that clear). You're still going to have to serve a webpage though. In fact, in order to use these new crypto functions, you're going to have to serve script as well.

I (the attacker, whether via MitM or server control or some other mechanism) can modify that to my heart's content. I don't even have to modify the existing scripts; just inject my own that captures every keystroke se

The API has two padding modes for RSA, PKCS#1v1.5 and OAEP. OAEP is provably secure. That is, if the underlying scheme (RSA) is a secure public key cipher, then RSA combined with OAEP is a semantically secure encryption scheme that is resistant to chosen-plaintext attacks. On the other hand, not only is PKCS#1v1.5 not provably secure, it has been known [jhu.edu] for years [cryptograp...eering.com] to be vulnerable to real world attacks.

Most of the time when you see people using it today it is for backwards compatibility, but in this case they are designing a brand new API. Why not go with the one which we know to be secure instead of encouraging the use of a dangerously vulnerable scheme?

... it's a bunch of random thoughts. Most of the current "draft" consists of "TBD" or "here are some ideas that need to be fleshed out". This looks like it's years from reality, at which point it'll have turned into another CDSA-sized monstrosity containing the union of every feature requested by every vendor ever.

I have built a working extension that provides 'window.mozCrypto', which does SHA2 hash, RSA keygen, public key crypto and RSA signature/verification,

No offence, but that's about a hundredth of what SSLeay (the thing that came before OpenSSL) was doing 15 years ago. That's a long, long way to go before you have a general-purpose crypto API usable by web developers.