Posted
by
Soulskillon Wednesday July 03, 2013 @05:13AM
from the making-it-more-painful-to-be-bad-at-security dept.

tsamsoniw writes "California Attorney General Kamala Harris says her office will start cracking down on companies in the Golden State that don't encrypt customer data and fall victim to data breaches; she's also calling on the state to pass a law requiring companies to use encryption. That's just one of the recommendations in the state's newly released data breach report, which says 131 companies in California suffered data breaches in 2012, affecting 2.5 million residents."

One thing: Encryption of laptop drives and external USB/hard disks is useful against simple loss/theft. Encryption of company servers is only burning CPU cycles, since the key is available to the users that have to use it.

So instead of burning cpu cycles, you are burning crypto processor cycles plus you have the cost of buying the hardware in the first place and possibly the bus overhead of sending data to/from the device.

If the server gets compromised while it's running, the data is accessible, because the server needs access to the data in order to function.

If the server gets physically stolen, it's likely the crypto hardware will be stolen with it. If you store the key somewhere it can be automatically obtained and used, then the key can be stolen too; if you enter the key manually on bootup (i.e. how you would on a laptop) then you require physical intervention if the server reboots for any reason.

Encryption has its uses, but it's not a magic bullet, and poor/inappropriate use of encryption is damaging - not only does it waste resources unnecessarily, but it also brings a false sense of security and encourages lazy thinking... People will simply implement the bare minimum required to comply with the law, which will probably mean encrypting the data while leaving the key on the same box.

You will also end up with a "one size fits all" attitude, which is clearly ridiculous... You need to consider *what* data you're storing, *why* you're storing it and *what* needs to access it.

You can segregate the data so that some is only accessible by those systems that need it. You can tokenize the data, e.g. for repeat billing of a credit card you can store a token agreed only between you and your payment processor. You can store rarely referenced data with public/private keys, leaving only the public key online and keeping the private key offline for use when necessary.
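The repeat-billing tokenization idea can be sketched in a few lines (a toy in-memory vault stands in for the payment processor; all names here are invented for illustration):

```python
import secrets

# Hypothetical token vault, as a payment processor might keep one.
# The merchant stores only the token, never the card number.
_vault = {}

def tokenize(card_number: str) -> str:
    # An opaque random token: it carries no information about the card.
    token = secrets.token_hex(16)
    _vault[token] = card_number  # this mapping lives only at the processor
    return token

def charge(token: str, amount_cents: int) -> str:
    # Only the processor can map the token back to the real card.
    card = _vault[token]
    return f"charged {amount_cents} cents to card ending {card[-4:]}"

# The merchant's database holds only this token; a breach of the
# merchant leaks nothing chargeable elsewhere.
token = tokenize("4111111111111111")
print(charge(token, 1999))  # -> charged 1999 cents to card ending 1111
```

The point of the design: the stored value is worthless without the processor's vault, so there is nothing on the merchant's box worth decrypting.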

No, pushing a one size fits all "encrypt your data" mandate is stupid and will only make things worse, each individual case needs to be designed by someone who understands the needs and is technically competent.

While you are correct about the impact of anything currently running on the server, you are dead wrong about physical theft. An HSM should be hardened against picking the key out of it and should actually destroy the key if tampering is detected. Encryption on the server is still of limited benefit since the data key could probably be abused in most remote exploits on a running system, but for powered down security, such as physical breach, it is very significant, even if the chances of someone breaking

Security:
- Something you know (password)
- Something you have (HSM? hardware with key)
- Something you are (biometrics, or gummy bear)

You steal the server, you steal the HSM. It is like requiring a hardware token with a laptop, and then storing the token with the laptop. An HSM does have its uses, but again, key management is the trick.

When somebody uses the words "tamper resistant HSM".. realize that it means: kill the data, so nobody can steal it. NOT always the best scenario :) :'(

You are mistaken. Security is not something you know, have or are. That's authentication. HSM has nothing to do with authentication. It is key management and secure storage. Your understanding of how an HSM is used is also mistaken. The idea with an HSM is that it does all encryption and decryption operations without ever releasing the key and takes care of requiring proper authorization before performing decryption operations.
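That "operations without ever releasing the key" idea can be sketched in software (a real HSM enforces this in hardened hardware; the class, the credential string, and the choice of HMAC signing here are all invented for illustration):

```python
import hashlib
import hmac
import secrets

class ToyHSM:
    """Software sketch of the HSM interface: the key is generated
    inside and never leaves; callers get operations, not key material."""

    def __init__(self):
        self._key = secrets.token_bytes(32)  # never exposed to callers
        self._activated = False

    def activate(self, credentials: str) -> None:
        # Stand-in for operator authorization; the passphrase is a placeholder.
        if credentials != "correct horse battery staple":
            raise PermissionError("bad credentials")
        self._activated = True

    def sign(self, data: bytes) -> bytes:
        # The device performs the operation; the key stays inside.
        if not self._activated:
            raise PermissionError("HSM not activated")
        return hmac.new(self._key, data, hashlib.sha256).digest()

    def verify(self, data: bytes, sig: bytes) -> bool:
        return hmac.compare_digest(self.sign(data), sig)

hsm = ToyHSM()
hsm.activate("correct horse battery staple")
sig = hsm.sign(b"customer record")
print(hsm.verify(b"customer record", sig))   # True
print(hsm.verify(b"tampered record", sig))   # False
```

Note there is deliberately no `get_key()` method; that absence, enforced in tamper-resistant hardware, is the whole value proposition.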

When initially configuring an HSM, a key should be created and backed up in

Only in this case, convenience wins out... Needing a staff member to physically intervene in order to boot the server is far too much of an inconvenience, so it will be configured either to obtain the key from the HSM automatically on boot (and thus the attackers could do the same), or it will be a network-based system as you mention, in which case when you steal the server you need to steal the key server too.

Even if you do have someone physically enter the key, you have the added inconvenience of managing who

You don't physically enter the key, you physically enter credentials that activate the HSM. Even if you have the ability to activate the HSM, getting the key out is (near) impossible. It is limited to doing decryptions with whatever restrictions are on the data (for example, you could require that user password be entered to access user data if the system stores data accessed by user accounts.)

Also, even if you do have to use a network based device, it means that they have to either a) steal the networked

In which case you're no longer relying on encryption, you're now relying on obfuscation provided by the HSM... It just takes someone with the right skills/equipment to crack it, and once one person works it out they can provide details of the hack to others.

I don't disagree that it is not relying on the encryption exclusively. You have to trust the HSM to do its job correctly. It's a little more than obfuscation, though, as it is an independent, hardened system with limited I/O, intrusion detection and a hair trigger for self-destruction. It may be possible to still extract the key, but there would be a fair degree of luck involved, and there is no redo button if you make a mistake trying to extract it. That's a fair bit better than simple obscurity, particu

Put another way, calling an HSM security by obscurity is a bit like saying that having a server protected by armed guards 24/7 with a block of C4 strapped to it inside the basement of the Pentagon is security through obscurity, since, if someone knew every security measure and was very, very lucky, they might be able to make it through everything.

For that matter, by the same token, encryption itself is security through obscurity since there might be some technology or math trick out there that can decrypt i

Dude, we steal the server, HSM and all, set it up in our lab, then we have all the time in the world to try bus-based exploits.

So? Signatures happen on the HSM, which also stores the key material; only the cleartext data going in and the signatures going out are on the bus.

And if you mess up in the lab, the HSM kills its keystore and game-over. (Or, if it doesn't, the folks on the other side were insufficiently paranoid / excessively cheap and it's Their Own Damned Fault).

I think you have some misconceptions about the CPU cycles involved in encryption. It's basically free. It's just a few clock cycles per byte.

The part everyone is concerned about is key stretching, where a CPU needs to do about half a second worth of processing to hash a password. There is simply no reason to do key stretching on the server. That's a dumb architecture. Instead, make the clients do it. By default, Microsoft does the key stretching on the server, and it's only for about a millisecond, if
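The cost difference between the two cases is easy to see with Python's stdlib (the iteration count of 600,000 is a plausible modern PBKDF2 setting, an assumption rather than a figure from the thread):

```python
import hashlib
import os
import time

password = b"hunter2"
salt = os.urandom(16)

# A single fast hash: a few cycles per byte, so it is basically free --
# but equally free for an attacker making offline guesses.
t0 = time.perf_counter()
fast = hashlib.sha256(salt + password).digest()
fast_time = time.perf_counter() - t0

# Key stretching: deliberately slow, thousands of chained hashes,
# which is the work the comment argues belongs on the client.
t0 = time.perf_counter()
stretched = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
slow_time = time.perf_counter() - t0

print(f"plain hash: {fast_time:.6f}s, stretched: {slow_time:.3f}s")
```

Stretching on the client means the server only ever verifies a cheap hash of the stretched result, so an attacker can't use the server itself as a denial-of-service amplifier.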

Yea, I work in the security industry and I don't really agree. I hear what you're saying about considering each application and you're not wrong, but I think the potential benefits of this easily outweigh the negatives. It will apply pressure to companies who really do need to encrypt their data and just cannot get the will from the business to do it.

Its not a magic bullet, but especially in the absence of any legitimate way to wipe data from databases in a secure manner it's a reasonable compensating con

Yea, I work in the security industry and I don't really agree. I hear what you're saying about considering each application and you're not wrong, but I think the potential benefits of this easily outweigh the negatives. It will apply pressure to companies who really do need to encrypt their data and just cannot get the will from the business to do it.

No, it won't. It will cause every non-corporate-run website run by individuals within the state of California to shut down because of the inability to pay the

The big problem is that the database uses a shared hosting plan and a shared database server run by my ISP. I have no control over whether the database is encrypted on disk or in transit between the shared hosting server and the database server.

You're freaking out over nothing. Hosting providers are not going to leave people high and dry. Actually, it would be nice if they started encrypting their databases. Shared hosting will live on and solutions will be generated.

In order to add that protection, I would have to crank my hosting plan up to a dedicated server at a monthly cost that is equivalent to several years on my current hosting plan and buy a multi-subdomain SSL cert that also costs (annually) as much as several years worth of service.

You're being extremely, extremely silly. SSL certs can be had for next to nothing. Do they provide as much assurance as better certs? No, but they encrypt the traffic and the root cert is trusted by common platforms. Depending on the law you could use self signed certs as well.

It is a shame that even popular open source projects don't bother. For example, Mozilla's Thunderbird chat has no OTR support, and Mozilla Firefox and other browsers treat self-signed certs as WORSE than unencrypted and put big scary messages up. Instead there really should be three different "modes": what exists now in all browsers for the certificate authorities, for those who want to talk to their banks etc; self-signed certs, which should just work like an unencrypted link (no full secure icon etc); then pl

Mozilla Firefox and other browsers treat self-signed certs as WORSE than unencrypted and put big scary messages up

I think it is a reasonable action for a certificate whose source you don't know. You can always add the certificate to your browser [mozilla.org] and avoid the error. The rationale for the pop-up is that an unknown self-signed certificate is as bad as no encryption - totally open to a man-in-the-middle attack - but people have a higher expectation of security from SSL.

Is "as bad as no encryption" a reason for yelling at the user and presenting it like the worst security problem ever? Even if I accept the premise that it is as bad as no encryption, the obvious conclusion is that the browser should present it the same as no encryption.

Actually, it is not as bad. It still keeps you safe from passive attacks (like your ISP collecting all data for a three-letter agency, which analyses them later).

Well, it depends on what you are doing. Using your own private service over the internet secured by SSL: no big deal, register the cert. Using your online banking and the cert is self-signed: better check on that. The reason is that with no encryption it is clear that the connection is unsafe, and most people will (hopefully) not do anything sensitive. But putting trust in a self-signed certificate is a gamble, especially when you assume that this SSL connection is being used to transfer secure data. The reason why it is conside

That is an "all or none" argument. If self-signed certs look, feel and behave the same as unencrypted does now, then people have no reason to behave differently than they do with unencrypted. Sadly, as numerous researchers have shown (like this one: "Crying Wolf: An Empirical Study of SSL Warning Effectiveness" [psu.edu]), people quite happily transfer secure data over unencrypted connections in the current setup anyway. This further undermines your argument and the rationale that treating self signed certs

It isn't the same as no encryption. The site is making a claim that cannot be verified, and which often points to fraud. Treating it as unencrypted would open up all sorts of man-in-the-middle attacks by criminals, ISPs and three-letter agencies quietly intercepting and replacing security certificates. Do you think people check for HTTPS and a valid cert every time they connect to their bank or email account?

I expect the browser to clearly inform the user whether the connection is safe (HTTPS with a verified certificate) or unsafe (either plain HTTP, or HTTPS with an unknown certificate). I also expect the user to check that a connection to his bank is reported as safe.
If you are interested in preventing attacks against careless users, the browser might also notify the user that a site previously known to have a safe connection, no longer has one. However, I do not think this is of much help: many users just

No, it isn't the same; it is better. Unencrypted is already open to all sorts of man-in-the-middle attacks by criminals, ISPs and three-letter agencies, who are already "quietly intercepting" and recording EVERYONE'S traffic. Making them go one step further and have to target individuals in order to replace a deluge of self-signed security certificates is a big positive step. Also, if self-signed certs are never blessed with a security icon by default then people will not fall for fraudulent fakes - because the brows

I think part of the rationale is that a self-signed certificate very well might be a sign that you're the victim of a man-in-the-middle attack, and it needs to be treated as a serious potential threat.

Personally, I don't think the problem is that web browsers treat self-signed certs as dangerous. I think the real problem is that the only infrastructure we have for authenticating certificates is expensive and unwieldy. We need to have a way of generating and authenticating certificates that's easy enough

I think part of the rationale is that a self-signed certificate very well might be a sign that you're the victim of a man-in-the-middle attack, and it needs to be treated as a serious potential threat.

This sounds good in theory, but the reality is that self-signed certificates (or those signed by an authority your browser does not recognize) are several orders of magnitude more common than MiTM attacks.

Otherwise, I agree that a big part of the problem is unusable UI for managing certificates in almost all existing browsers.

I think the GP's point was that it does not have to be all or none - that you can have SSL with a self-signed cert without the error message and without giving any "expectation of [high] security" (to quote GP: "no full secure icon")

The rationale for the pop-up is that an unknown self-signed certificate is as bad as no encryption

In light of the Snowden revelations and subsequent fallout, this rationale has very few legs to stand on. Unauthenticated encryption is still more desirable than plain text. The only argument I have seen against this rationale is that people may be lulled into a false sense of security if they believe self-signed certs are as secure as CA-issued ones, falling for MITM attacks on their bank traffic etc. The counter to that is simple and sensible: no, not if the browser does not try to tell them they have a top-secure connection, and treats it like it is a plain text connection.

self-signed certificate is... totally open to a man-in-the-middle attack

The current SSL system is also totally open to man-in-the-middle attacks by state sponsors, as has been reported here various times. And yes, self-signed certs are also very vulnerable to the same attack - but the point here is to encrypt the majority of data. State sponsors can always target, but with blanket always-on encryption they are unable to perform mass illegal capture and storage... that is the point of not raising an error message on self-signed certs.

Any way I cut these arguments, browsers appear to be in the wrong on this one - throw in cosy relationships with CAs, state departments etc and we could have a conspiracy here.

The current SSL system is also totally open to man-in-the-middle attacks by state sponsors, as has been reported here various times. And yes, self-signed certs are also very vulnerable to the same attack - but the point here is to encrypt the majority of data.

They're not vulnerable to "the same attack". One attack requires hacking a CA or exerting very substantial influence over them. The other doesn't. The set of malicious actors who can -- and do -- MitM you if you use self-signed certs is much, much larger than the set of actors who can do it if you use CA-signed certs.

The only argument I have seen against this rationale is that people may be lulled into a false sense of security if they believe self-signed certs are as secure as CA-issued ones, falling for MITM attacks on their bank traffic etc. The counter to that is simple and sensible: no, not if the browser does not try to tell them they have a top-secure connection, and treats it like it is a plain text connection.

Yes, that's pretty much the argument. The danger is that they could think that their connection is somehow more secure than plaintext. You cannot safely fix this without determining user intent

The danger is that they could think that their connection is somehow more secure than plaintext.

It is a danger *only* if the browser is giving some indication of security. If the browser does not give any indication or expectation of privacy with self-signed certs then there is no danger. Most browsers already do not show the protocol being used for plaintext (no http:// display).

You cannot safely fix this without determining user intent, and even the user can't usually be trusted to determine their intent.

You can safely fix it by not giving any change to the normal unencrypted experience. If they intended to use HTTPS to get real security but instead were presented with a self-signed certificate, and the browser defaulted into plain text view (no ssl icon or indication of security) then the user does not need any extra warning.

If they intended to use HTTPS to get real security but instead were presented with a self-signed certificate, and the browser defaulted into plain text view (no ssl icon or indication of security) then the user does not need any extra warning.

When I make a request to a https url I expect the information contained within that request (parts of the url other than the hostname, post data if any, cookies if any) to be sent over an encrypted and authenticated link. By the time I can "look for the padlock" the potentially private information has already been sent. So if the connection cannot be authenticated the browser MUST warn me* BEFORE it continues with the request.

I support systems that allow encrypted but unauthenticated connections to be prese

You are right about https redirects, my mistake thanks for the correction. They are just so common now it appears to be default browser behavior.

When I make a request to a https url I expect the information contained within that request (parts of the url other than the hostname, post data if any, cookies if any) to be sent over an encrypted and authenticated link. By the time I can "look for the padlock" the potentially private information has already been sent. So if the connection cannot be authenticated the browser MUST warn me* BEFORE it continues with the request.

It sounds to me like invoking a very special, very peculiar and rare case to support the current status quo: That of communication of private data during initial handshake. How can a user be sending private information (credit card info in form post data for example) with an expectation of privacy on their part if they have not even accessed the webpage, ever, yet?

Is there a list of CAs that have been compromised, including evidence? E.g. it would post two signed and valid certificates for google.com for the same time period, but one of them with obviously the wrong IP address?

Dude... companies do this all the time, if for no other reason than to compress network traffic. They just buy boxes like this one [sourcefire.com]. All you do is override DNS and CA. It's standard practice.

With DNSSEC we should be able to publish and verify certificate information via signed dns records, which would also shift the root of the trust relationship up to the dns registrars. And since the authentication part of CA certificates is tied to dns already, I don't see that this would change much.

"I think the GP's point was that it does not have to be all or none - that you can have SSL with a self-signed cert without the error message and without giving any "expectation of [high] security" (to quote GP: "no full secure icon")"

Can you, really? I mean, we have a big enough problem with training users to type credentials in a login box served by http://www.myfavoritebank.com/ [myfavoritebank.com] all insecure-like. This area where security intersects user interaction design is a tricksy one.

On the other hand, a self signed certificate which you have explicitly accepted is in many cases *BETTER* than a ca verified cert. In the former case you have explicitly chosen to trust a single party, whereas in the latter you are reliant on a large number of organisations.

On the other hand, a self signed certificate which you have explicitly accepted is in many cases *BETTER* than a ca verified cert. In the former case you have explicitly chosen to trust a single party, whereas in the latter you are reliant on a large number of organisations.

A self-signed certificate is better only if you can independently verify that you've got the correct certificate and that it is still valid. Otherwise it is worse, because you've got no way at all to figure out if it is correct and whether it has not been rescinded yet (e.g., because of a break-in on the server). You're far better off to have a private CA run by someone you trust and to explicitly only trust that CA to issue for a particular service, rather than some random other CA. (The downside? That doe

True. If a corporate cert goes out of date then the warnings pop up, and it's a bit confusing at times to figure out how to proceed. I.e., the choice in Firefox between "I know what I'm doing" and "get me out of here" certainly doesn't instill confidence when trying to add the cert.

We have reached the point in time where attorneys general have realized that companies need to encrypt customer data? Either that happened faster than I expected or I'm getting old faster than I realized.

I'm modern! We had Apple II computers in college 8)
But the programming textbook we used, still mentioned sending the punchcards (not really punchcards, but the ones you fill out with a pencil) to the computing center.

May well be down to shareholder pressure and I expect shareholders would not wish to have company IP or indeed customer data outside of their control. What is good for shareholders is good for business and good for votes needed by those in public office.

Does she also realize that ROT13 isn't sufficient "encryption" though?
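For the record, ROT13 in one line, which makes the point nicely: it is a fixed 13-letter rotation with no key at all, so "decryption" is just applying it a second time.

```python
import codecs

# ROT13 has no key -- anyone can reverse it by applying it again.
# It is not encryption in any legal or technical sense.
secret = codecs.encode("Customer: Jane Doe, card 4111...", "rot_13")
print(secret)                           # Phfgbzre: Wnar Qbr, pneq 4111...
print(codecs.encode(secret, "rot_13"))  # round-trips to the original
```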

HMRC (UK tax office) lost a CD with 15 million people's personal data on it. They released a statement saying it was password protected. A password protected MS Office document is not really "protected" in any meaningful sense.

Depends on what version of Office/Word. A document secured with a 32+ character password in a recent version (Office 2003 and newer) can use SHA 512 and AES-CBC.

Of course, using a weak password, all bets are off.

If one needed to distribute data on CD encrypted the "right" way, I'd either use a large PGP archive, ship the CD with a TC volume and a keyfile encrypted to the receiving site public keys, or use a commercial utility like PGP Disk and have a volume only openable with the receiving site keys.

encrypted or the credit card companies won't do business with you. (PCI compliant or something like that)

That leaves social security number and email address/password, but really, you should not use the same password for your Gmail account and Oily Pete's V1agra Online. As for social security, never give it out to anyone under any circumstances unless it's a bank (real one, not a Nigerian prince bank) and you're asking for a loan or opening a checking account.

They require that you "encrypt" the data, but they also typically require that you send the data unencrypted (albeit tunnelled over ssl) to actually process a payment, so while the data may be encrypted on disk the server typically also has the ability to decrypt it on demand in order to make use of it... So it's just a case of a hacker working out how, and then triggering the same process to extract the data.

While I agree with your points, I think the public is unfortunately pitifully trusting. This whole NSA spying stuff will pass through the news cycle and soon not be covered again. It's only making a big splash because Fox News likes making fun of the Obama administration, but before the public actually starts demanding their right to privacy, Fox News will bury the issue and convince their watchers that the government is not spying on them. All of the systems we have in common usage are total crap, and a

Usually at some point the server needs to be able to decrypt the data so it can be displayed to a user, so the key needs to be handy. So if you have the key and data on the same server, it's of little security value.

If you want to have this data in some kind of database, there is a good chance you want to be able to search and index it. Is it possible to index and pre-sort encrypted data without giving away the content?

Yes, maybe encrypt some sensitive parts, but encrypting all customer data is counter
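One partial answer to the indexing question is a "blind index": store a keyed hash of the normalized field next to the ciphertext, which supports equality lookups without revealing the value (range queries and sorting still need fancier schemes such as order-preserving or homomorphic encryption). A minimal sketch, with an invented key and record:

```python
import hashlib
import hmac

# This key must be held by the application, NOT stored with the data.
index_key = b"separate-key-held-by-the-app"

def blind_index(value: str) -> str:
    # Keyed hash of the normalized field. Equal plaintexts produce
    # equal indexes, so the database can look rows up by equality,
    # but the index alone reveals nothing about the value.
    normalized = value.strip().lower()
    return hmac.new(index_key, normalized.encode(), hashlib.sha256).hexdigest()

# The table stores (ciphertext, blind_index); a lookup hashes the query.
rows = {blind_index("alice@example.com"): "<ciphertext blob>"}
print(blind_index("Alice@Example.com ") in rows)  # True: same index
```

The trade-off is deliberate: equality leaks (identical values collide by design), which is why this works for lookups but not for pre-sorting.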

So... explain how that helps when someone hacks into the server and requests data using the same mechanisms and level of authority as the server software (which must ultimately manipulate unencrypted data).

What you are looking for is homomorphic encryption. I don't offer that.

I offer a way to create accounts anonymously. And much easier than the email-address password combination.

When customers sign up for an account, they create a nickname. That gets signed into the client certificate. The web server receives that nickname from the crypto-authentication libraries as the username. Do with that username what you want.

"As of 2007, the best attack which applies to all keys can break IDEA reduced to 6 rounds (the full IDEA cipher uses 8.5 rounds).[1] Note that a "break" is any attack which requires less than 2^128 operations; the 6-round attack requires 2^64 known plaintexts and 2^126.8 operations. Bruce Schneier thought highly of IDEA in 1996, writing, "In my opinion, it is the best and most secure block algorithm availa

What is your point exactly? 2^126 is still massively infeasible, and it only applies to a reduced round version. In fact, since a year or two ago, full-round AES is also subject to a 1-2 bit break. That means that IDEA is at least as secure as AES.
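A quick back-of-the-envelope calculation supports "massively infeasible" (the 10^18 ops/sec rate is an assumed, deliberately generous attacker):

```python
# How long would 2^126.8 operations take at an exa-operation per second?
ops = 2 ** 126.8
rate = 1e18                 # assumed attacker speed: 10^18 ops/sec
seconds_per_year = 3.156e7  # about 365.25 days

years = ops / (rate * seconds_per_year)
print(f"{years:.1e} years")  # on the order of 10^12 (trillions of) years
```

Even granting that attacker a billion-fold speedup still leaves the attack taking thousands of years, which is the sense in which a "break" below 2^128 can remain entirely academic.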

Using encryption is easy. Managing the encryption keys however, not so much. The number of developers I see posting questions (to StackOverflow) on encryption with NO IDEA on basic key management is very worrying.

That has nothing to do with the problem. We are already assuming that the companies have personal data, they just want to encrypt it to prevent third parties from obtaining it. The problem is that you need to decrypt the data at some point in order to make use of it, so the key must sometimes intersect with the data. Where do you keep it so that someone who gets the data won't also get the key?
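One common pattern for exactly that key-placement problem is envelope encryption: encrypt each record under its own data key, then wrap the data key under a master key held somewhere else (an HSM, a KMS, or a separate key server). Here is a toy sketch; the SHA-256 counter-mode stream cipher is a deliberate simplification for self-containment, where a real system would use a vetted AEAD cipher such as AES-GCM:

```python
import hashlib
import secrets

def keystream(key: bytes, n: int) -> bytes:
    # Toy stream cipher: SHA-256 in counter mode. Illustrative only;
    # use a vetted AEAD (e.g. AES-GCM) in real systems.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

master_key = secrets.token_bytes(32)  # lives on the key server / HSM
data_key   = secrets.token_bytes(32)  # generated fresh per record

record     = b"4111-1111-1111-1111"
ciphertext = xor(record, keystream(data_key, len(record)))
# Wrap (encrypt) the data key under the master key.
wrapped    = xor(data_key, keystream(master_key, len(data_key)))

# The database stores only (ciphertext, wrapped): stealing the disk
# without the separately-held master key yields nothing usable.
recovered_key = xor(wrapped, keystream(master_key, 32))
plaintext     = xor(ciphertext, keystream(recovered_key, len(ciphertext)))
assert plaintext == record
```

This doesn't make the key/data intersection disappear; it narrows it to the moment of use, which is precisely the window a compromised running server can still exploit, as the thread notes.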

I know you're trying to plug your thing here, but what you are saying is just naive. People use credit cards on the internet, you can't just magic that away with bitcoins or something. At least not yet. The technology isn't there. Do you suggest never using a credit card in real life? Or never telling anyone your name? At that point it is public information right?

The crazy is in thinking she can regulate better security onto any random industry. It doesn't work like that. Security is too complicated to magically fix by insisting on blind usage of a particular tool.

If you look at the article, a huge number of the breaches are to do with credit card leaks. Well, duh, credit cards are a pull model not a push model. Bitcoin is more sensible, but the California DFI is busy harassing Bitcoin companies. So if she really cares about upgraded security, maybe she should get t

Bitcoins are a dead issue; they will never be a replacement for any kind of legal tender, for the simple fact that since they're untraceable, the government will not allow it.
And SOMETHING must be done, because they will not do it without being forced to. "Data centers not encrypting the data they have": your kind will always find reasons to not do something because, well, it's hard to do right.
With your kind of thinking we would have never gone to the moon. And your kind of thinking is keeping us from going to Mars, not because of

I've dealt with cleaning up some nasty data breaches over the years, and I've had conversations with Attorneys General when the breaches were bad enough. Companies fear Attorneys General about as much as they fear being on the wrong end of the international news.

I've been involved with companies hit by data breaches where Attorneys General will and will not get involved. The difference is night and day for things like encryption, notification of consumers, risk mitigation and other such steps. Pause and think about it for a moment: do you really think California is breached that much more often than other locations, or do people simply find out because the companies fear being on the wrong end of the Attorney General's pointy stick?

Attorneys General that give a damn are good things: they give the security professionals at the companies in their states the leverage they need to actually do the things that they want to do (encryption etc).

Tradition normally holds that a person who does a bad act is the guilty party. These days that is becoming rather twisted. If a person steals data then doesn't the guilt fall upon the thief? What they are doing is similar to the rather absurd gun law that can find a person negligent for simply using one lock to secure a gun. A home owner locks his windows and doors and drives off to the market. Mr. bad guy breaks in the back door and steals the gun and later that day shoots someone. Out of the

We need to make companies liable for any information they are so careless as to lose. Intruding on their business process is the wrong way to go about it: punitive liability judgements (and tighter disclosure laws) are the right way.

Part of the problem here is this horribly mistaken meme that everyone and everything is hackable. It makes people feel not responsible, and it's only true in the sense that every newborn baby has started dying, or that the universe will cool/stop. Not concerned with this meme

For things like the electric grid, there shouldn't even be any access at all. It's that critical. It is critical enough that they should have private FIBER following every power line.

For people info like SSN and bank account numbers, the system should be revised so that the number alone only serves to IDENTIFY and is not treated as AUTHORIZATION. Lots of people have other people's SSNs for various reasons. Using the identification number for authorization is totally wrong. This also goes for credit car

... are essential to the servers that handle the data. They can't actually operate on the encrypted data. They have to UN-encrypt it first (and RE-encrypt it to put it back if there are any changes). So what does this mean to me? It means I have to grab the encryption key(s) when I break in to get the data.

This reminds me of an incident with a state web site. Someone broke in and did some defacing. The state's top IT director answered a reporter's query with "This needs to be investigated because we b

How many mails have you received that were official and digitally signed (not just a signature block)? I work in a company where people are pretty security savvy, but email somehow is an exception. When I ask how they know the mail came from John Doe, they tell me it is sure because the email address is John.Doe@example.com. When I ask them how person X knows that it came from our company, the answer is "Because the email address is info@example.com." So while IT enjoys themselves adding useless disclaimers (I AM the int

How many mails have you received that were official and digitally signed (not just a signature block)? I work in a company where people are pretty security savvy, but email somehow is an exception. When I ask how they know the mail came from John Doe, they tell me it is sure because the email address is John.Doe@example.com.

Quickest way around that: send out a few emails as the company CEO, and set the Reply-to address to a random colleague.

Loads of fun, and all you need is a command line on a server somewhere.
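It really is that trivial, because SMTP does nothing to verify sender headers. With Python's stdlib you can assemble such a message in a few lines (all addresses are placeholders, and the actual send is left commented out):

```python
from email.message import EmailMessage

# "From" and "Reply-To" are whatever the sender claims -- SMTP itself
# never checks them, which is the whole point of the comment above.
msg = EmailMessage()
msg["From"] = "ceo@example.com"            # forged sender
msg["Reply-To"] = "colleague@example.com"  # replies land elsewhere
msg["To"] = "staff@example.com"
msg["Subject"] = "Please review"
msg.set_content("Sent with nothing but a command line.")

# smtplib.SMTP("mailhost.example.com").send_message(msg) would hand
# this to any relay that accepts it; only receiver-side checks like
# SPF/DKIM/DMARC can flag the forgery.
print(msg["From"], "/", msg["Reply-To"])
```

Which is exactly why "the address says John.Doe@example.com" proves nothing without a digital signature.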