
Trailrunner7 writes "Microsoft is changing the way in which it handles vulnerability disclosures, now moving to a model it calls coordinated vulnerability disclosure, in which the researcher and the vendor work together to verify a vulnerability and allow ample time for a patch. However, the new philosophy also recognizes that if there are attacks already happening, it may be necessary to release details of the flaw even before a patch is ready. The new CVD strategy relies on researchers to report vulnerabilities either directly to a vendor or to a trusted third party, such as a CERT-CC, who will then report it to the vendor. The finder and the vendor would then try to agree on a disclosure timeline and work from there." Here's Microsoft's announcement of the new strategy.

In response to the second step in the Coordinated Vulnerability Disclosure ("Step 2: Hurry Up and Wait"), I've printed several copies of the CVD on quadruple-ply tissue paper and stocked all the restrooms with it. I've also prepared a special four-course meal for Mr. Ormandy [slashdot.org] consisting of Taco Bell, a cup of coffee, a cigarette and a spoonful of castor oil.

Mr. Ormandy, I think you know what to do. I really found it amusing that they called the blog posting "Bringing Balance to the Force" when it looks to be completely defined by Microsoft with little or no input from the community.

I'm not saying it's the public's job to troubleshoot their shoddy code and develop fixes.

I'm just saying I feel it IS the public's responsibility not to make potentially dangerous information available to people with malicious intent.

I have no love for MS. I just feel everyone is better off with "Hey you morons, look at the latest exploit" instead of "Hey, general public including innumerable black hats, look at the latest exploit"

The quickest way to protect the public from malicious intent would be to get them to all stop using Microsoft products immediately. Everyone's sitting in a sinking lifeboat and you're quietly warning the captain about each leak you find so he can stick some chewing gum on it. What you really should be doing is screaming "Look at all the Fing holes in this boat!! Everyone get in that other, non-sinking boat called Linux over there!!!"

Yes, then the target would be the next biggest OS down the chain. The problem isn't "solved", it is just moved. Much like how surveillance cameras don't really cut down the crime rates, they just move them to a different area. If Linux had more of a presence it would be as big of a target as Microsoft. MS is just the current "low hanging fruit". Sorry, but the solution to security problems should never be "switch your operating system and every piece of software you currently use".

Switching the majority OS to GNU/Linux would have one immediate and obvious benefit: the source is widely available and widely modifiable. If we find a vulnerability, it can be diagnosed and patched immediately, without having to wait for a corporation's blessing. Hell, you don't even have to wait for the kernel team's blessing, or any other governing entity. Just post the patch and tell people about it!

It used to be clear that *nix systems were more secure, because they were actual multi-user systems. Nowadays, it's less clear. I'm certain a properly set up SELinux system is still miles more secure than Windows 7, but it's unlikely a common user will have that. However, even if there is no security advantage, I know this: Linux may not be more secure, but it is certainly easier to keep secure.

First off, the majority of people wouldn't be able to immediately diagnose and patch, because they have no idea how to do that.

Yes, but this does not negate the fact that there are many more eyes looking for flaws. A minority of a ton of people can still be a ton of people. The fact that anybody could diagnose and patch immediately is the important part.

Second, because Linux is open source, you would be less secure, because it is easier to find flaws and backdoors in a system whose source code you can view.

Yes, and not all of those who find these flaws would exploit them. Many would fix them. Also, as pointed out many times on Slashdot, security through obscurity is not security at all.

And since Linux uses the General Public License, if they request to see your source you have to give it to them, because it requires that derivative works also fall under the GNU General Public License.

Except for the fact that Linux is simply more secure than Windows by several orders of magnitude. The fact that you can set up a Windows-based machine without a login and still have full admin rights is proof enough of the serious and deep-rooted conceptual problems with its design. Windows is built, from the ground up, to sell Windows. Nothing more.

If you are relying on your operating system for security, you are taking the wrong approach to security. All major OSes have had exploitable flaws. Security is not software, nor anything you can buy or install - it is a set of policies, procedures, and practices. The actual software involved is irrelevant.

I never claimed the two would be the same security-wise, I just said if Linux was the top market share OS, it would be the biggest target. How well it would fare compared to Windows is something I was not speculating on, or blindly assuming.

Linux machines are often the servers that have everyone's credit card numbers, trade/military/government secrets, massive processing power and commercial-grade Internet connections, VoIP servers, and all the other real goodies. Each Linux machine is a potential Fort Knox in a world of 7/11s.

And even though these are the minority these days, with most Linux machines being home PCs and geek tinker toys, any Linux machine accessible from the Internet on port 22 will be hit with ssh brute-force attempts.

The exact same thing can be said about Windows servers. Any properly configured box stands a much better chance of fending off brute-force attacks. On my old Windows server that was running ssh I would log several thousand brute-force attempts daily, with nobody ever successfully breaking in. What does that tell you? Basically that I know how to configure a server, and that I used a really long pass phrase instead of a simple password. Security 101 stuff.

I'm not saying it's the public's job to troubleshoot their shoddy code and develop fixes.

I'm just saying I feel it IS the public's responsibility not to make potentially dangerous information available to people with malicious intent.

I have no love for MS. I just feel everyone is better off with "Hey you morons, look at the latest exploit" instead of "Hey, general public including innumerable black hats, look at the latest exploit"

That does kind of depend quite heavily on the researcher being the first to find the vulnerability, and the vendor allocating enough people to adequately deal with fixing it in a timely manner.

Can you say with any real supportable evidence that either statement is a safe assumption? Because I know I can't. And to be honest, I doubt any researcher worth their title can either. Including the guy who, I imagine, kicked this new policy off by disclosing one he discovered while Microsoft were palming him off.

How does giving a company 5 days to fix an exploit make anything right? If anything this looks like an effort by MS to get researchers to agree to work with MS so that the details aren't released before a patch is ready. What possible reason is there for releasing this stuff anyway? Does it make anyone safer? Unlikely. Most people don't care enough about security in the first place. All the early release of the exploit does is give lazy hackers more ammunition. Because let's face it, even if MS fixed these within 24 hours, most people wouldn't apply the patch anyway.

What possible reason is there for releasing this stuff anyway? Does it make anyone safer? Unlikely.

On the contrary, the answer is "possibly". If I know the nature of a security hole in Program X, I might be able to find a way to substitute, sandbox or discontinue Program X in my own workflow and thereby become safer.

What is the researcher's motivation to spend the extra time working with Microsoft? They certainly have no obligation to do anything Microsoft asks...

Personally, I prefer the Google and Mozilla method whereby researchers are paid a bounty of a few thousand dollars for reporting vulnerabilities in the manner the vendor prefers. Microsoft would be wise to follow the leaders rather than invent their own convoluted process.

Personally, I prefer the Google and Mozilla method whereby researchers are paid a bounty of a few thousand dollars for reporting vulnerabilities in the manner the vendor prefers. Microsoft would be wise to follow the leaders rather than invent their own convoluted process.

There's a fundamental problem with your comparisons. When a security bug is disclosed in Firefox, you see the Mozilla Foundation marvel at the cleverness of the attack. Then a distributed network of individuals quickly works together in an agile way to get the hotfix out, and then some time is spent testing and hardening that fix. When a security bug is disclosed targeting Chrome or any of Google's products, you see Google developers, comfortable on their campuses, pull long hours and work together to push out a fix as quickly as possible. These are all sensible approaches to security bugs.

With Microsoft, however, you see the heavy thudding of a big corporation. You see a complex inner working of management slow things down. Somebody might ask for an estimate on how much money this is going to cost, and that estimate comes back a week later. Senior management starts shredding documents. Engineers start falling from helicopters in Redmond. A tornado of chairs leaves several injured. Microsoft's campus looks like the Superdome following Katrina. People are chained to their desks. The reason they ask for 60 days is because that's how long it takes FEMA aid to reach Microsoft...

IOW: MS is too big to turn on a dime. MS has become what they were striving to replace: IBM.

More like they can't. A fix may be simple inside the problem module, but it's also got to go through rounds of testing to make sure that simple fix doesn't actually break anything. After all, even implementing LUA showed how badly things could break (see Vista).

The problem when you're the giant is that you attract all the developers. And the problem is, most developers write crap for code, and do things they shouldn't.

With Microsoft, however, you see the heavy thudding of a big corporation. You see a complex inner working of management slow things down. Somebody might ask for an estimate on how much money this is going to cost and that estimate comes back a week later. Senior management starts shredding documents

Honestly? Really? You don't think they have high/critical priority bugs, which get instant visibility right up the escalation tree, managers pushing the rest of the people to get a fix quickly? I've worked for some "big corporations", and when the shit hits the fan, the pressure from above increases immensely. Everyone mucks in, works long hours, gets stuff done.

Big companies can sometimes take a long time to change direction, or to "get it" - but when it's something as fundamental as a very large security hole, they can move quickly.

Bug finders are both producers and consumers of the actions and consequences in this process. Finding and reporting security bugs is a civic action (as opposed to a communal one). Basing the bargain for this action on economics instead of social rewards and punishments may have an adverse effect. So it may be that people who get paid for reporting bugs come to feel they have discharged their obligation once the bounty is paid.

I've never discovered a vulnerability in Windows or anything else, but if I did I'd be fine sitting on it for as long as needed, as long as Microsoft got back to me and said "Yeah, we're working on it, here's when you can expect a fix." What's maddening (and actually Microsoft seems to be good about this; it's Apple and Oracle that are the worst offenders) is when someone sends a bug report into a black hole, never hearing anything from the company for months and months. At that point, I see no reason why the researcher shouldn't go public.

You found a vulnerability.
You know your bank, your hospital, your tax center has it. You know that there is an option to deactivate as a workaround.
You know that many people are actively searching for this kind of vulnerability, and it may be exploited right now.
And you see Microsoft claiming everywhere that their product is the best and the most secure.

You can wait, yes, but I am unsure which is the more responsible way of acting.

Posting anonymously for obvious reasons.
What happens today if one emails Apple's product security team (product-security@apple.com)? A few things.
First, you get a generic pre-generated email that acknowledges that Apple received your email. Next, if you're lucky, you get an email from an analyst who has reviewed your vulnerability. What happens next?
1) No updates are provided. Ever.
2) If you ask for an update as to when the vulnerability will be fixed, you will not get a detailed response.
3) Apple waits several months.
4) Apple waits several months.
5) Apple fixes the bug, possibly.
6) You get an email from Apple asking how you want to be credited.
7) If you're lucky, Apple will send you an email with notification on when they're planning to fix the issue, along with the exact wording of the specific advisory.
8) If you're lucky, Apple will fix the advisory in the week they say they will.
9) Normally, the date will slip a few weeks. Or maybe a month.
I applaud Microsoft for doing this. Hopefully Apple will follow suit and move out of the Stone Age.

Apple is an insular and paranoid company. They are built upon the myth that the Mac/iPhone/iPad/iPod platform is "safe". They are selling an image: of computing platforms that are safe and secure for the end-user. Reality does not agree with Apple.

Most responsible researchers will play Apple's game, and part of that game is sending out inaccurate and vague responses as to when they may (or may not) fix what vulnerabilities have been found. I think it's helpful for people to know that going in.

If I happened to run across a vulnerability tomorrow, I might well be inclined to publish it that very day. Microsoft assumes I care about the well-being of them and their customers, when really I don't. I know this is aimed more at security researchers, but then again they may very well feel the same way.

Here's a radical idea: how's about they don't release tons of fresh code every cycle, and instead maybe check the code over first for buffer overflows, NULL pointer abuse, heap munging, and all the other obvious routes to arbitrary code execution?

CSS: find a bug, see a lawyer, contact a CERT, wait several weeks for a response, sign an NDA, share vulnerability information, wait 2 months, ask for status, wait for an answer for 4 more months, realize that the vendor will do squat about the vulnerability as long as its customers don't know how threatened they are, release the info to the public to put pressure on the vendor, be threatened by the vendor's lawyers, be called a criminal by the vendor's customers and by the press and politicians, have your house searched, wait 2 more months, get the patch, realize that it doesn't fix the problem, rinse and repeat.

I am very curious how Microsoft defines "ample time" especially considering some of their vulnerabilities (like the one recently "patched" in the DOS subsystem) have existed for years or decades.

This isn't a slam at Microsoft; it's a hope that someone has some clarification that can be used as context to determine whether this statement means anything. Even when the terms of their statements are less ambiguous, they seem to find ways of backpedalling - thus greater clarity on something so very ambiguous is welcome.

LoL, someone who doesn't know much about computers got mod points. One can choose not to like the truth, but, as even Microsoft themselves admitted, this is NOT a change in policy - it's a change in NAME only.

"[CVD] is the same thing as responsible disclosure, just renamed," repeated Reavey. "When folks use charged words, a lot of the focus then is on the disclosure, and not on the problem at hand, which is to make sure customers are protected, and that attacks are not amplified."