Maybe I am missing something obvious, but that second answer is pretty awful IMO.

What prevents a malicious script from just copying the public key instead of the password? Once they have the public key they can sign away and pretend to be the "official" client when communicating with the server.

Maybe this is storing the public key using some javascript kung-fu that I am unaware of...

Yeah, I don't see any benefit of the second option over a simple session key. Both can be hacked by malicious clients, but (thankfully) there's not much that can be done about that other than locked-down hardware.

I think the goal of a scheme like the one in that second answer is to protect the client from a third-party attacker, not to protect the server from the client. The public key is given to the client once and never again goes out over the wire, whereas a session key is sent with every HTTP request.

That only improves security in the case where authentication and the public-key handoff happen over HTTPS but the rest of the client/server interactions are over plain HTTP.

Maybe I'm missing the value in all the answers, but wouldn't it be simplest to SHOW a snippet of javascript that could be run on an already loaded page by pasting into a bookmark or directly into the URL bar?

This would show him HOW easy such "hacks" can be.

My simple example would be the following javascript snippet

javascript:void(document.body.style.background="black");

If you paste the above into a bookmark (the location the bookmark is pointing to) then whenever you click that bookmark it executes this javascript code which changes the currently viewed site's background to black. Go ahead and try it on Google's home page for a kick. This technique is called a bookmarklet (as in bookmark and applet squished together).

Sidenote: pasting this directly into the URL bar often won't work, because modern browsers strip the "javascript:" prefix from pasted text as a safety measure (you can retype that part by hand), but really that's just details.

Bookmarklets are extremely useful and powerful, but they also show how easy it is to modify what's going on in the client code. Ever experienced a site that has a timer? Please wait X seconds, then click continue? If it's done in Javascript, it becomes trivial to short-circuit the timer or just enable the continue button with a click of a bookmarklet so you do not have to wait.
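To make the timer short-circuit concrete, here is a sketch assuming a hypothetical page whose "continue" button sits disabled behind a countdown. It's written as a function over a document-like object so the logic is visible; the button id `#continue` is an assumption, and the one-line bookmarklet form follows.

```javascript
// Sketch: re-enable a hypothetical countdown-gated button.
function enableContinue(doc) {
  const btn = doc.querySelector('#continue'); // hypothetical button id
  if (btn) btn.disabled = false;              // skip the wait entirely
  return btn;
}

// As a bookmarklet (paste into the bookmark's location field):
// javascript:void(document.querySelector('#continue').disabled=false)
```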

The same concept applies when there is ANYTHING secure or critical stored in a javascript variable.

Bookmarklets can be handy and useful as well. The best example I have of a time-saving bookmarklet: imagine you have a web UI that requires testing. The page has a form with dozens of fields, and you need to repeatedly fill in the same values. Write a bookmarklet containing javascript code that fills in all the text boxes and submits the form for you. It becomes a one-click way to test the page with a given set of values.
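A sketch of that form-filling idea, with hypothetical field names and values; adapt them to the form under test. The function takes a document-like object so the core logic is clear, and the bookmarklet one-liner follows.

```javascript
// Hypothetical test values for a form with name="username" / name="email" fields.
const TEST_VALUES = { username: 'tester', email: 'tester@example.com' };

function fillForm(doc, values) {
  let filled = 0;
  for (const [name, value] of Object.entries(values)) {
    const field = doc.querySelector(`[name="${name}"]`);
    if (field) { field.value = value; filled++; }
  }
  return filled;
}

// As a bookmarklet, the same idea collapses to one line, e.g.:
// javascript:void(document.querySelector('[name="username"]').value='tester')
```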

It's also possible to use this approach to sign in to sites, by setting the username and password boxes to your credentials, but it's highly insecure: all anyone would need to do to see your plain-text password is inspect your bookmark. Then again, it turns out that's about as secure as Chrome's own password storage, so to each their own I guess. (Source: http://www.theguardian.com/technology/2 ... urity-flaw)

Also, this is actually a way you can defeat OTHER automated password programs. Simply write a bookmarklet that captures the value of the password text box, or changes the type of the text box from "password" (which shows dots) to "text", which displays the auto-filled password in plain text.
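A sketch of the type-flipping trick: turn every password input into a plain text input so an auto-filled password becomes visible. The function form is for clarity; the bookmarklet one-liner follows.

```javascript
// Flip every password field on the page to plain text.
function revealPasswords(doc) {
  let flipped = 0;
  for (const el of doc.querySelectorAll('input[type="password"]')) {
    el.type = 'text';
    flipped++;
  }
  return flipped;
}

// As a bookmarklet:
// javascript:void(document.querySelectorAll('input[type="password"]').forEach(el=>el.type='text'))
```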

Anyway, with an answer like this, not only has he seen how easy it is to mess with javascript code running on the client end, he's been taught HOW exactly it can be accomplished. It also shows why such a technique can be useful. IMO that beats any purely theoretical discussion.

That seems pretty silly though. If the data really needs to be secure, it should be encrypted both ways. Not to mention SSL is basically free these days once the handshake is complete.

It still bypasses proxy caching, and SSL always has a cost.

True, for things like images or static content in general this can be costly, which is why there are clever ways to get around it (load static content insecurely, then verify its checksum with secure JS). But things like SOAP calls generally can't be cached anyway, I'd think, so it mostly comes down to somewhat higher CPU utilization for both client and server. That shouldn't be too bad these days.

What prevents a malicious script from just copying the public key instead of the password? Once they have the public key they can sign away and pretend to be the "official" client when communicating with the server.

You're missing a basic understanding of crypto.

The public key is called a public key because it's public. If the hackers know the public key they still can't do anything.

The fact that javascript is open to client-side modification is beside the point. A competent hacker can modify your compiled C or C# code with almost as much ease. There are ways to make such modifications useless, and they work in JavaScript too.

My answer, as someone who knows a bit of crypto and javascript, is that you can do something like this, but it's an area of crypto that should only be attempted by someone who really understands security, and it's going to take months of research to get right. So you're not going to get a full description of how to do it in a Stack Overflow post, only a theoretical summary of how it could be done.

Why on Earth would anyone want to hide client-side routes behind meaningful authentication in the first place? I could see it for a game with various levels that lets users log in and pick up where they left off, but that's pretty low-stakes "security." I honestly can't think of a scenario where it makes sense to preemptively send sensitive data to a client who may or may not be authorized to see it, and then try to restrict access to that data through JavaScript that the client may or may not execute.

True, it's pretty naive, but I'm pretty sure I know what was going through his head. Obviously I lack details, and most of this is informed conjecture, but I figure he was trying to make a "web app": a javascript application that loads from a server once, at login. He wants to minimize (or eliminate) additional server calls once the application starts. Parts of the application are only available to certain users (probably premium users or the like), so he wants a way to reliably lock certain users out of those parts, without making additional server calls, and with the caveat that all data is sent up front.

While the answers are correct that once data is in the hands of the user you should assume they can access it, there may be a workaround in this specific case. Any client-side routing system can easily be bypassed, of course. However, if the data behind it is encrypted, then accessing it is virtually useless. So it might make sense to store the protected data in an encrypted format (generated procedurally at request time, using a unique key for each user), have the client make a small server call at login to get the key to decrypt that data (handed out only to users with access, of course), and have the code decrypt the endpoint data with the key it received at login whenever the user accesses a secure route. In that case, without the key, bypassing the routing system is meaningless (unless you're prepared to break the encryption), and with the key, bypassing it is pointless (you can already trivially access the data through the system provided).

Let's think that through for a second:

Your proposed system basically works like this: the server sends encrypted data to the client (1), then the client authenticates itself to the server to get the key (2) to decrypt the data. As you'll notice, the server has to send you two data packages.

Now the obvious implementation is as follows: the user authenticates herself to the server and asks for the data (1). Done. As you'll notice, you just saved yourself one round trip and the cost of encrypting/decrypting the data, with exactly the same result.

And sure, you can store the encrypted data offline, but what does that give you? The client only has to reach your service once to get the key to decrypt it (same as sending it unencrypted in the first place).

It's not how I would design it, but the original question made it fairly clear that all of the data was being sent at startup, not just the data that the user needed. It's possible that a caching server or the like is sending out the application en masse. My implementation allows for separate authentication and content delivery servers; there is no other benefit.

If the goal is to have the client authenticate itself before decrypting local data, for example for the free and premium versions of an app, then it seems like a better idea to partition the data into 2 sets -- send everyone the free data, only send the premium data to premium buyers.

If it's a browser and not a packaged app, besides the problems already listed: with browsers you can use developer tools and plugins to inject new javascript into a document or to modify existing script code.

As for the second paragraph, injecting new javascript is useless if you don't have the decryption key.

As for the first, you're correct. But then, I didn't say the person who asked the question had a good design. In fact, I called them naive. I was just trying to come up with a solution in the constraints provided.

And sure you can store the encrypted data offline, but what does that give you? The client only has to access your service once to get the key to decrypt it (same as sending it unencrypted in the first place).

The key might only be 2KB, while the encrypted data could be hundreds of megabytes (eg: game textures and sounds).

Maybe the user could download the website once over wifi, and then use it for years without ever establishing an internet connection.

Or maybe you need an internet connection to log in but only the credentials are passed over, reducing bandwidth usage and allowing an app that was impossible before to suddenly become possible.

There are plenty of reasons why you would want to verify someone's account client side instead of server side.

In both cases you only have to download the data once, because there's *no* difference between storing the data encrypted or unencrypted on the client side: the client can get to the actual data either way. Sure, you could let an unauthorized user download hundreds of MB and then deny her the key... or you could just not give her the data in the first place.

That's true and may be beneficial in some situations, not going to deny that possibility.

It can be done... the only problem is the time element. If it's a short-lived secret, on the order of minutes, you're fine, but it's only a matter of time before someone figures out what's going on and abuses it. It might never happen; I know systems that have gone for years without any hassles, but a lot of that rested not on technology but on trust between the user and the service, plus next to zero value to be gained in breaking the secret. If there were a real way to do these things client side, you'd be very rich indeed. And quite frankly, I hope someone does figure it out someday...

Never trust anything from a client. You do not control the client. You do not know what the client will do with the code/js/suggestions you've provided. Clients can be anything and claim to be something completely different. Never trust anything from a client.

Getting the decryption key is trivial if you can inject new javascript at will.

What is usually referred to as a MITM is an attack on HTTPS (and other certificated protocols). If you aren't using HTTPS, then the man in the middle is just any network node and it's not really an "attack." A more accurate statement would be: "Assume HTTPS protocol to make it less likely someone listens in." Bringing up MITM just acknowledges that HTTPS can't really guarantee this prevention you seek.

That's only true if the decryption key is stored in the code sent by the server. If you have to authenticate to a server to receive the decryption key, then only authenticated users will be able to access protected data. If the key is never sent to you (but the encrypted data is), then hacking at client side JS won't get you anywhere.

Still, there are probably better ways to do whatever it is the original question asker was trying to do, but without an explicit scenario, I can only try to provide a solution within the given constraints.

Also, (without client certs) SSL protects exactly against MITM attacks. (To ProfessorGuy: it's generally still called a MITM attack even if no SSL is used. There may be MITM attacks on HTTPS itself, say via faulty certificate checking, but there are also MITM attacks on web applications, say if they send the login form in the clear or trust a cookie or something.)

Of course, you should generally never trust a client, but the client can (of course) trust itself. If the client already knows something, it does not need to ask the server whether it is allowed to know it! If the client is just displaying the predicted result of an operation using information it already has, there is nothing to gain by checking with the server first.

If you don't want to keep passwords in memory, have login put a session key in a cookie, and to protect against CSRF make each page you don't want hotlinked require an HMAC of the page address along with a counter (sent to the page, relying on HTTPS against MITM attacks), like this:

Note that this prevents bookmarking of internal pages; enabling bookmarking adds some risks. However, not calling verify_address on an unframeable page [X-Frame-Options] with no actions or private content should be OK (use transform_link on every link to a SENSITIVE page). Note also that if you have potentially hostile subdomains, an attacker could force a user to be logged in to an account the attacker owns, which may be bad if, say, the user sends private messages using your site.

Generally, Javascript cryptography is quite useless: the Javascript code already trusts the server, so all crypto can generally be delegated to the server. However, there are a few exceptions. 1) You have a simple, trusted server and want to keep crypto code off it (say, to reduce computational load). 2) Your website is served from one server but interacts with others. A simple example is sending a small stub script from a trusted server that requests a big script from a CDN, then verifies its hash/signature against a hash/key sent by the trusted server. For example, https://arstechnica.com is a secure server that sends a small HTML page containing just the article and the hashes of the css/js; you then XHR the (large) css and javascript from akamai.net, verify their hashes, and eval them. In that case, even if akamai.net is compromised, it can't steal your users' passwords. 3) You want to use HTTP for cacheability of big scripts/images without foregoing integrity; the last example covers this too.
