Posted
by
samzenpuson Thursday June 12, 2008 @07:57AM
from the did-you-update-the-windows dept.

Dr. Jim Anderson writes "The good folks over at Verizon Business have released a report that summarizes what they've found after looking through 500 forensic investigations involving 230 million records, and analyzes hundreds of corporate breaches including three of the five largest ones ever reported. What did they find? How about (1) Nearly nine in 10 corporate data breaches could have been prevented had reasonable security measures been in place, (2) Fewer than 25 percent of attacks took advantage of a known or unknown vulnerability and (3) attacks from Asia, particularly in China and Vietnam, often involve application exploits leading to data compromise, while defacements frequently originate from the Middle East."

I assume they mean "software/hardware vulnerability", and that the other 75% are people doing stupid things - "human vulnerabilities" or even "policy vulnerabilities". It's interesting in itself, though, that 75% of the attacks are due to, presumably, direct human error, and have nothing to do with the data being on a computer.

So when your bank next releases your details, don't just accept whatever explanation they offer. Most probably, someone who works there did something incredibly stupid or deliberate, rather than the bank being hacked or outwitted.

Apparently, someone is trying to make Rumsfeld out to be an idiot. Though that he may be, IMO this quote is actually fairly insightful, if somewhat poorly worded. I've had a similar saying (is it a saying if I'm the only one saying it?): "There are three types of people in the world. Those who don't know what they're doing and know they don't; those who know what they're doing and know they do; and those who don't know what they're doing but think they do. It's the last group that screws everything up for the other two groups." The thing to realise is that everyone falls into all three categories for different aspects of our lives, and the challenge is to tell the difference for each situation to try to avoid being in the last group.

In Rumsfeld's quote, "known knowns" are the areas where we are in the middle group: knowing what we're doing, and knowing that. "Known unknowns" are the areas where we don't know what we're doing and know we don't. And "unknown unknowns" are the last group: things we think we know, but don't. (Ok, that's not quite precisely what he's talking about, but it's analogous.) And that last group is the most dangerous one.


*Only* 75%? I'd have guessed it would be a much higher percentage. You would not believe how many times I have encountered such things, even from people who really should have known better. (Of course, this is /. -- most everyone here has probably experienced this, too.)

Lack of security (open systems; trivial or written-down passwords) doesn't immediately mean a problem with the software. It's equally possible (if not more likely) for the problem to be with the users' use of the software.

In addition to the training, you need to make breaches of security a terminable offense, for everything from a deliberate theft of information, to writing down a password on a sticky note and putting it on your monitor. Without teeth, you cannot enforce a security policy, and a policy that isn't enforced isn't a policy.

No, that means that there were patches available but they were never applied...

To me, that sounds like a known vulnerability. I think one of the posts above is probably a better answer to the question "what makes up the other 75%, if not a known or unknown vulnerability":

Username: admin
Password: password

Leaving the system in a default state isn't a flaw in the software so it isn't a software vulnerability. It's a lazy/sloppy sys admin. Unfortunately, this leads to playing semantic games -- "what exactly is a vulnerability?"
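The "default state" problem is at least easy to script a check for. A minimal sketch of the kind of audit a sys admin might run against their own systems -- the credential list and the `authenticate` callback are illustrative; a real audit would use the vendor's documented defaults:

```python
# Sketch: audit a login routine for factory-default credentials.
# DEFAULT_CREDS is a made-up sample list, not any vendor's actual defaults.

DEFAULT_CREDS = [
    ("admin", "password"),
    ("admin", "admin"),
    ("root", "root"),
    ("guest", "guest"),
]

def find_default_creds(authenticate):
    """Return every default pair that authenticate() accepts."""
    return [(u, p) for (u, p) in DEFAULT_CREDS if authenticate(u, p)]

# Usage against a mock system still in its default state:
def mock_login(user, password):
    return (user, password) == ("admin", "password")

print(find_default_creds(mock_login))  # [('admin', 'password')]
```

Any non-empty result means the box never left its default state -- the "lazy/sloppy sys admin" case, not a software vulnerability.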

Not to mention the fact that CxOs are frequently the biggest offenders when it comes to poor security practices. I've seen more than one CEO of a Fortune 500 company use the name of the company as their domain/email password, and refuse to change it on a regular basis like the rest of the users at the company. Trying to enforce a security policy with someone who can have you escorted off the premises on a moment's notice is pretty much impossible.

The only way it works is to get the CEO/Chairman/Lord High Muckety-Muck to sign off on a policy that applies to EVERYONE, and then to fire an executive for breach of policy as a demonstration of how seriously the company takes security. (This assumes that a CxO breaches policy at some point, which is pretty much inevitable.) The attitude of "security policy is for little people" reminds me of Leona Helmsley's 'taxes are for little people' attitude.

But often I wonder how many companies connect everybody in the company to the internet when there is no real need? One place I worked maintained three separate networks: one for internet, one for work, one for very confidential work. The work network had access to e-mail (internet-based e-mail through a firewall through which only the mail server could talk), while the confidential network had only internal e-mail. This may have been overkill, but breaches were more or less impossible. Running NT4 also made sure USB sticks weren't an issue. I believe they managed to upgrade to XP a few years ago, but testing was extensive.

Somehow doesn't always work. I can't explain it, but I do KNOW that it can be circumvented:

Some time back I was a consultant at a (largish) bank. They too had 'locked out' USB devices that way. And lo and behold, it worked with any randomly available USB stick: no external drives were mounted.

Some days later I was 'confused' and tried to copy something using my (very) old 64Mb stick. Worked like a charm. Realizing that this was 'impossible', we tried with other USB sticks, but mine was the only one that worked.

Now, that's reasonable security measures you're talking about. The study found that most places that got breached didn't do any of that.

Also, working without Internet access can be a real pain. It obviously depends on what you are doing, but many things grind to a halt when there is no web access.

Fortunately, there is WWW over SMTP. And sneakernet. And ad-hoc networks.

I guess if you try to lock down the place too much, you'll have a plethora of access vectors beyond your control in no time.

Schools, for instance, generally run a "curriculum" and an "admin" network - one for the kids, one for the staff. Joining both is seen as an extremely bad thing. But there's usually absolutely nothing stopping people from connecting to random websites from the admin network (even in the finance offices, etc.).

Bring back the old days of text menus:

1. Pay in
2. Pay out
3. Print

Reduce the interface, reduce the capabilities, reduce the vulnerabilities.
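The point generalizes: the whole "attack surface" of a menu like that is a fixed set of choices, and anything else is rejected before it touches the system. A toy sketch (the action names are just the menu above; real handlers would replace the strings):

```python
# Sketch: a closed text-menu dispatcher. Only the three listed choices
# do anything; arbitrary input falls through to a rejection.

ACTIONS = {
    "1": "pay in",
    "2": "pay out",
    "3": "print",
}

def handle(choice):
    """Dispatch a menu choice; unknown input does nothing at all."""
    action = ACTIONS.get(choice.strip())
    return action if action else "invalid choice"

print(handle("2"))         # pay out
print(handle("rm -rf /"))  # invalid choice
```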

Well yes, but there's also an important reason for the -1 mod: the GP has no factual basis for laying the blame on Israel. In fact, I've seen far more attacks coming from Pakistan, Egypt and Yemen (?!) than Israel. But yes, people are racially biased... whether it's pro-racism or anti-racism, very few people have the discipline to be right down the middle.

I think what a lot of people neglect to do is to filter access by country. If you're operating a U.S. bank, why in the world would you want Vietnamese and Chinese IPs visiting your site or hammering your firewall?
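Country filtering is usually done against a GeoIP database that maps address ranges to country codes. A toy sketch of the lookup side, using an invented prefix table -- the ranges below are RFC 5737 documentation networks, not real Vietnamese or Chinese allocations:

```python
# Sketch: crude geo-blocking by IP prefix. A real deployment would load
# a maintained GeoIP database rather than hard-code prefixes.
import ipaddress

BLOCKED_PREFIXES = {
    "203.0.113.0/24": "VN",   # example-only ranges (RFC 5737 test nets)
    "198.51.100.0/24": "CN",
}

_nets = {ipaddress.ip_network(p): cc for p, cc in BLOCKED_PREFIXES.items()}

def blocked_country(ip):
    """Return the country code if ip falls in a blocked range, else None."""
    addr = ipaddress.ip_address(ip)
    for net, cc in _nets.items():
        if addr in net:
            return cc
    return None

print(blocked_country("203.0.113.7"))  # VN
print(blocked_country("192.0.2.1"))    # None
```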

If you're operating a U.S. bank, why in the world would you want Vietnamese and Chinese IPs visiting your site or hammering your firewall ?

As a U.S. bank are you really going to tell your customers, "By the way, if you ever need to access your account while on vacation outside the country, you're out of luck?"

Web access isn't spared, either. If you don't offer services outside your country, I strongly suggest serving up a different, nerfed site to those people - something with no sign-up forms or dynamic content of any kind.

Most of your customers assume that World Wide Web means just that: world wide. If I were a business owner, I'd certainly think twice before potentially driving away customers by telling them, in essence, "I can't trust you because you're not from the same country I am."

As a U.S. bank are you really going to tell your customers, "By the way, if you ever need to access your account while on vacation outside the country, you're out of luck?"

The full text from the grandparent post:

If you're operating a U.S. bank, why in the world would you want Vietnamese and Chinese IPs visiting your site or hammering your firewall ? Do you have an admin over there, SSHing in ?

If you are a bank, do you have your users signing in via SSH???

No, you probably don't want to block access to HTTPS (you ARE using HTTPS, right?) or SMTP from Vietnam or China (I would add Korea to this list, based on the SSH attempts and spam mails I've seen from Korean networks). And yes, I am aware that this implies it would be possible to brute-force your customers' passwords if you don't do something sensible, like lock out their accounts after x invalid password attempts.
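The "x invalid attempts" lockout fits in a few lines. A minimal sketch -- the threshold and the in-memory store are made up for illustration; a real system would persist the counters and probably use a time window rather than a hard lock:

```python
# Sketch: lock an account after a fixed number of failed logins.

MAX_FAILURES = 3
_failures = {}  # username -> consecutive failure count

def record_attempt(user, success):
    """Track failures; return True if the account is now locked."""
    if success:
        _failures.pop(user, None)  # a good login resets the counter
        return False
    _failures[user] = _failures.get(user, 0) + 1
    return _failures[user] >= MAX_FAILURES

def is_locked(user):
    return _failures.get(user, 0) >= MAX_FAILURES

record_attempt("alice", False)
record_attempt("alice", False)
record_attempt("alice", False)
print(is_locked("alice"))  # True
```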

I was speaking to the following quote (perhaps I should have been more clear in my original post):

Web access isn't spared, either. If you don't offer services outside your country, I strongly suggest serving up a different, nerfed site to those people - something with no sign-up forms or dynamic content of any kind.

If your customers are overseas and they get the nerfed version of your site that doesn't allow for logins or any sort of interaction, they'll certainly take their business to someone who does allow that sort of thing.

Shall I tag this 'badsummary', or do we have an 'oxymoron' tag we can use?

"...have released a report that summarizes what they've found after looking through 500 forensic investigations involving 230 million records, and analyzes hundreds of corporate breaches including three of the five largest ones ever reported. What did they find? How about (1) Nearly nine in 10 corporate data breaches could have been prevented had reasonable security measures been in place,

Why doesn't it go over the names of the companies that were breached? I've had my identity stolen, but I don't know where they got my information, as I'm kind of A-R about my SSN and such. (Thank God the ID thieves were incredibly stupid and only opened a home telephone account -- which means they could be found because of the address for the service...)

But I've also had other account information stolen, and I knew where it came from. I use a different email address for EVERY website I give any information to.

I typically put the name of the company in as a part of the email address when I use my email address on the web. This way, I always know who the sellouts are -- as well as those with poor security. And it's always surprising who turns out to either be a sellout or barn door. You can never be sure which. I signed up for the Netscape Developer program a long time ago (remember Netscape?) and today I still get SPAM sent to "fred_netscape@..."
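That trick is easy to automate. A sketch of both directions -- handing out a tagged address at sign-up, and naming the likely leaker when spam arrives at one of them. The domain and the `fred_` separator are arbitrary choices for illustration:

```python
# Sketch: per-site email tagging to identify which company leaked
# (or sold) an address.

MY_DOMAIN = "example.org"

def address_for(company):
    """Address to hand out when signing up with `company`."""
    return f"fred_{company.lower()}@{MY_DOMAIN}"

def who_leaked(recipient):
    """Given an address spam arrived at, name the likely leaker."""
    local = recipient.split("@", 1)[0]
    if local.startswith("fred_"):
        return local[len("fred_"):]
    return None

print(address_for("Netscape"))                  # fred_netscape@example.org
print(who_leaked("fred_netscape@example.org"))  # netscape
```

Note that spammers can strip obvious tags, so some people use opaque per-site tokens instead of the company name.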

Legally speaking, what is "reasonable security?" The FTC fined TJX for not having it, but I disagree [blogspot.com]. Verizon says 9 of 10 data breaches could have been avoided if reasonable security were present. That implies 9 in 10 breach victims were in violation of law. The study's outlook is that the solution to identity theft is locking down corporate data. But a security consultant/solution provider like this Verizon unit naturally sets a high bar for what is reasonable. And when Verizon evaluates whether reasonable security was in place, it is judging against that same high bar.

Though it wasn't our intention, it seems the reference to the % of attacks exploiting vulnerabilities has caused some confusion. It's true that 'vulnerability' can have a very broad definition (synonym for 'weakness') but we are referring specifically here to specific named/numbered (has a CVE or MS #) software vulnerabilities. The bulk of attacks across our caseload did not exploit such vulnerabilities - they exploited misconfigurations, omissions, poor security, etc. Hope that helps clear things up a bit.