If they're being hoodwinked by phishing attacks and other plain-vanilla social engineering campaigns, what hope is there for the rest of us?

Individually, we can be vigilant with our own data. But the moment it goes out to someone else's server, all bets are off. (Even the National Security Agency couldn't keep their data secure.)

Moral of this story: trust no one, and don't release any data that you absolutely don't have to. (Unless you're a social-media attention whore, in which case go nuts.) Once it's out there it ain't ever gonna be safe. Ever.

It would either be that, or finding security specialists who are actually interested in making programs more secure at an everyday, functional level, considering human factors and real workflows, or doing anything other than sniping at each other and building out My Favorite Toy Security Model.

Since security experts like that apparently don't exist, defeatism it is.

Back up channels? What would those be? Can someone elaborate? If you have another network, how do you know that network isn't compromised?

If you assume an arbitrarily sophisticated attacker then you're probably right, but realistically something as simple as a phone number list should suffice to create an out-of-band channel outside the compromised network.

They exist, and can tell you what true security practices look like, but what's needed for true security is not compatible with consumers' or business users' computing practices.

Remote desktop session so user has no local access to data.

No access to internet from a session with access to data you are protecting. Your local client could for example have internet access for research purposes, and your remote desktop session would not.

No copy/paste or file sharing between local and remote desktop.

No printing.

No e-mailing outside.

All data encrypted, and accessible only to user's remote desktop logins, both in terms of who can decrypt and in terms of which IP traffic can and cannot reach the remote data.

And search everyone on the way out the building to make sure they haven't taken pictures of their screens, or written anything down.

Want to know why the above is necessary for true security? Because people, especially in large numbers, are an inherent security risk and may do damage out of malice or ignorance. And because all software is insecure and will remain insecure for the foreseeable future. Those two factors make it a pretty safe bet that anything that can be accessed will be accessed and copied.
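The IP-restriction point above can be sketched in a few lines. Here is a minimal, hypothetical illustration using Python's standard `ipaddress` module; the subnet and addresses are made up for the example:

```python
import ipaddress

# Hypothetical allowlist: only the remote-desktop session subnet
# is permitted to reach the host that stores the protected data.
ALLOWED_NETWORKS = [
    ipaddress.ip_network("10.20.0.0/24"),  # remote-desktop hosts (invented range)
]

def may_reach_data(client_ip: str) -> bool:
    """Return True only if the client IP falls inside an approved subnet."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(may_reach_data("10.20.0.15"))    # a session host: allowed
print(may_reach_data("192.168.1.50"))  # an ordinary local desktop: blocked
```

In practice this enforcement belongs in a firewall or network ACL rather than application code; the snippet only illustrates the policy being described.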

They exist, and can tell you what true security practices look like, but what's needed for true security is not compatible with consumers' or business users' computing practices...

Emphasis mine. Right there. This is what's wrong with security, and it's people like you who look at the combined total of human behavior and current security models, and decide that every user on earth needs to change to fit their models, rather than vice versa.

Not every user, just the ones that want/desire true security. Every security system has a weak point. Humans have proven time and again that they are the weak point.

No, the weak point is the sense of entitlement among security specialists that you are so wonderfully embodying. For any other software discipline, if you start telling people that the people paying you are the problem, you would get kicked to the curb-- and deservedly so.

In every other field, matters of user interface, user experience, clarity, and ease of use are addressed and accepted as chief considerations of the programmer or software designer. In the security field, it is somehow acceptable to throw up your hands and insist that a massive sink of man-hours on the part of every user is an acceptable solution.

Security isn't a system you implement, it is a process. You can design it into whatever software you like but it isn't used in a vacuum. Humans routinely do things that software designers don't expect or use multiple pieces of software together in such a fashion as to render the security features impotent. The only way to get TRUE security is to eliminate the variables and force humans to operate in a predictable way; in other words, to act like machines.

It's worth noting that user experience, ease of use, and convenience are diametrically opposed to security. Think how much easier it would be if you didn't have locks on your doors. Or how much worse your home experience would be if your windows were bulletproof and couldn't be opened. Security is a tradeoff with the very aspects you advocate. Yeah, a poorly designed system can have both improved at once. But even that can be incredibly costly.

As Brian pointed out, it's rarely worth pursuing a high level of security, because the tradeoffs are too high. Removing admin privileges and requiring applications to be whitelisted in order to run would increase security, but make it much harder to leverage a personal computer to better do one's job. So plenty of places understandably avoid doing it. The PKI infrastructure is a loathsome beast as it is, and we haven't even attempted to sign and encrypt all private e-mail. Security is costly, and rarely worth taking all that far.
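The whitelisting idea mentioned above is simple in principle: keep hashes of vetted binaries and refuse to run anything else. A toy sketch (the approved set and binary contents are invented for illustration):

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex digest of the binary's contents, used as its identity."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical approved set: in a real deployment this would hold hashes
# of vetted executables, distributed and maintained by IT.
APPROVED = {sha256_of(b"trusted-binary-contents")}

def may_run(binary: bytes) -> bool:
    """Allow execution only if the binary's hash is on the approved list."""
    return sha256_of(binary) in APPROVED

print(may_run(b"trusted-binary-contents"))  # vetted: allowed
print(may_run(b"random-download"))          # unknown: rejected
```

The operational cost is exactly the comment's point: every legitimate software update changes the hash, so someone has to re-vet and redistribute the list each time.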

And if you pay attention in non-Internet areas, you'll notice almost no security. My house has incredibly breakable windows that I can open and leave that way. There is nothing preventing me from running over pedestrians with my car. My identity is rarely challenged, and never in a way that's terribly difficult to fake.

Ars readers by and large know better, even if we don't practice it. Most people at my company don't, though, and we are a large company (which thankfully has not been reported on here). We don't keep sensitive data on our own servers, but plenty of it sits on customer servers that we frequently remote into.

But most people have widgets and macros to jump them past most of those password-protected layers. Folks in security at large companies (Microsoft, eBay) may know better, but that doesn't mean your run-of-the-mill employee does, or even cares.

You have to care about being secure to be secure in today's world. I wonder if the cost of securing everything at this point might be too high for the unaware user.

In a world where people easily move between companies and nations, I can see a situation where someone within a large organization is party to an upcoming phishing email, and simply clicks on it when it arrives. I do not know how to protect against that.

Or maybe the NSA's weakened encryption standards are having unintended consequences.

Within limits. E.g., if your phone is VoIP on the company network, internal phone service could also be compromised. Cell service is not inherently secure, but cellphones might be safer if you are not being hacked by a government agency, since they are not on your network and hackers typically are not physically at your site.

And, if you pick the wrong balance of security and convenience, it's self-defeating. Users will actively create loopholes to simplify their lives.

For example, in apartment buildings where visitors have to be buzzed in, people may block the door from closing, or just routinely press the "open" button when the bell is rung, without checking.

Users create loopholes when security is really just security through obscurity. That benefits neither the IT people, whose systems aren't really secure, nor the users, who are inconvenienced by all the shoddy security measures.

Well, no, I was not talking about hacking. No user is going to exploit a buffer overflow to get at an Excel file locked away by another department. I'm talking about the balance of security and usability. If security is done right, going out of band (a.k.a. sneakernet) is not possible. If users are bypassing security to make their lives easier or more convenient, then you might as well give up on those security measures, because they are ineffective.

Ease of use and convenience need not be opposed to security. Our solution, http://www.docTrackr.com, gives users the power to track and secure documents right from email, without interrupting their workflow. We offer a free extension so users can track and secure email attachments with ease.
