
krebsonsecurity writes "January promises to be a busy month for Web server and database administrators alike: a security research firm in Russia says it plans to release information about a slew of previously undocumented vulnerabilities in several widely used commercial software products, including MySQL, Tivoli, IBM DB2, Sun Directory, and a host of others. From the blog: 'After working with the vendors long enough, we've come to conclusion that, to put it simply, it is a waste of time. Now, we do not contact with vendors and do not support so-called "responsible disclosure" policy,' Legerov said."

What does one have to do with the other? Proper sanitization of inbound data is basic security. HTML conformance is important too, but failing to conform isn't going to result in data theft, loss, or corruption on the servers.
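As a minimal sketch of the sanitization point (the table, column names, and injection string here are made up for illustration), parameterized queries are the standard defense against SQL injection:

```python
import sqlite3

# Hypothetical schema, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Unsafe: string concatenation would let the attacker rewrite the query:
#   "SELECT email FROM users WHERE name = '" + user_input + "'"

# Safe: the driver binds the value as data, never as SQL.
rows = conn.execute(
    "SELECT email FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # the injection string matches no user: []
```

The same principle (treat inbound data as data, never as code) applies whether the sink is SQL, a shell command, or generated HTML.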

Yes, I assumed this was an article about a firm dropping support for a database and webserver without any notice (perhaps a DRM-supplying company or something). Just below this headline is another misleading one, "CES Vendors Kicked Out of Hotels For Showcasing Wares in Room", which suggests they were showing pirated software.

At issue is the pesky ethical and practical question of whether airing a software vendor’s dirty laundry (the unpatched security flaws that they know about but haven’t fixed yet) forces the affected vendor to fix the problem faster than it would have had the problem remained a relative secret.

Hasn't this been proven to be true - and legal?

In all honesty, if they've contacted the vendor and the vendor hasn't patched it in a month or two, I think it's completely ethical and practical to release the vulnerabilities. After all, there could be a few other small firms who have discovered the vulnerability and are exploiting it. Best to put them out there in a Twitter feed so that the entire world instantly complains about it, forcing the vendor to fix it. I prefer security over new features.

I agree, but that's not what this guy is doing. He's saying that he doesn't want to notify vendors at all, which I feel isn't responsible. I believe that you should notify the vendor and then release it in a reasonable time frame (TFA suggests 60-90 days).

I don't have a problem with the disclosure of vulnerabilities once the vendor has been notified, because I think it does cause the problems to be resolved quicker. However, not telling the vendor means there's no chance for them to even start on a fix before the information is public.

He's a step ahead of you. He's tried doing it the right way and gotten no results. So he's going to skip the part where he wastes his time.

If companies want responsible disclosure, they should respond in some way to the disclosure. Maybe companies will actually fix bugs instead of sitting on them, and he can go back to doing it the right way. He also warned the companies he's going to do it, so they have a chance to fix things before then.

Here's a tip for you. In the real world, sometimes you have to force the other party's hand to get them to act responsibly. He's to that point, and fortunately has leverage. By making this choice public, he shames the irresponsible software companies which allow security problems to sit around unfixed.

Hopefully they'll scramble to release the fixes they haven't shipped yet, which would be a net improvement over the current situation where millions of people have unpatched vulnerabilities.

In short, I don't see a problem here. I use software, it has security problems, I expect those to be fixed. Whatever it takes to get there, I'm all for it.

... that's not what this guy is doing. He's saying that he doesn't want to notify vendors at all, which I feel isn't responsible.

Well, how I read it is more like "Hey, we've tried notifying these turkeys a dozen times or more, and every time, they stonewalled us. I'm fed up with them, and I'm not going to waste my time any more. I'm just going right to the public release, which their history shows is the only way to get any action."

Maybe this isn't the "responsible" thing to do, but it's certainly understandable.

I think that it would be much better to always notify the vendor (telling them when you will release) and then release as scheduled no matter what the vendor does or says. The word would soon get around and vendors would know they were working against a firm deadline.

Perhaps what we should suggest is starting off with a nice long "advance notice" period with a vendor, 2 or 3 months. Each time they fail to act within that window, you decrease it slightly for the next bug you report. With time, this might stabilize on a reliable period for that vendor. Of course, this only works if you have a long-term business relationship with that vendor. In many cases, people are likely to give up long before the asymptote is reached.
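The shrinking-window idea above can be sketched in a few lines; the 14-day floor and the 0.8 decay factor are arbitrary assumptions, not values from the discussion:

```python
def next_notice_window(current_days: float, vendor_fixed_in_time: bool,
                       floor_days: float = 14.0, decay: float = 0.8) -> float:
    """Shrink the advance-notice window each time a vendor blows the
    deadline; leave it unchanged when they ship a fix in time.

    floor_days and decay are illustrative guesses.
    """
    if vendor_fixed_in_time:
        return current_days
    return max(floor_days, current_days * decay)

# Starting at 90 days, repeated misses converge toward the floor.
window = 90.0
history = []
for _ in range(10):
    window = next_notice_window(window, vendor_fixed_in_time=False)
    history.append(round(window, 1))
print(history)  # [72.0, 57.6, 46.1, 36.9, 29.5, 23.6, 18.9, 15.1, 14.0, 14.0]
```

The floor is the "asymptote" the comment mentions: a vendor with a bad track record ends up with the minimum notice you're willing to give anyone.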

Pro Full Disclosure: "99% chance that the evil hackers already know about the exploits when a whitehat finds it, plus vendors don't get their lazy bums up unless there's danger in the air and the customers demand it."

Pro "Responsible Disclosure": "Mimimi, that's sooo evil. Plus vendors will certainly fix things ASAP and work with researchers and everything will be better and I'm not being paid t

I agree, but that's not what this guy is doing. He's saying that he doesn't want to notify vendors at all, which I feel isn't responsible. I believe that you should notify the vendor and then release it in a reasonable time frame (TFA suggests 60-90 days).

Well, you could always apply for that job. :}

You get paid nothing to email vendors about their security flaws and to wait for a reply that will never be sent to you.

Oh, and you aren't allowed to 'quit' this job, or else we will say on the Internet that you are immoral, unethical, and unreasonable.

Especially after you do this for years, get not a single reply, and realize just how futile the whole process is. Definitely can't quit after that!

What he's saying is that notifying the vendor first doesn't result in a fix at all, so why waste breath and allow the vulnerability to remain in the wild longer?

If releasing them into the wild results in a faster fix, then that's what should be done. There's no such thing as security through obscurity. Whether it actually results in more damage to release it immediately without notifying the vendor than to notify the vendor and have them do nothing for six months (while, during those six months, others may be quietly exploiting the same flaw) is the real question.

This is one of those issues where the instinct of any good capitalist is to privatize benefit and socialize risk. When you screw up in the auto industry, the company faces the massive expense of a product recall. That helps to keep you honest with your engineering quality.

I personally think 30 days is a reasonable notification period. Not pleasant for the vendor to have to respond that briskly, but this isn't about being pleasant. If the vendor wants pleasant, they should invest more competence in the original product. This isn't easy, and might move a few pointy-haired managers out of the executive suite.

Probably a more viable compromise is eight weeks. This adds a thin margin for the possibility that key zero-day SWAT staff are booked off, that multiple issues are raised concurrently, or that a product has a stupendously long build cycle.

I would be thrilled to see an industry standard put in place where everyone knows the ethical notice period is eight weeks, period, perhaps with the odd extension on a track record of good behaviour.

I would also like to see proprietary TCO calculations updated with a term to account for the customer disruption of having to rapidly deploy a not-tested-for-months-at-a-time critical vulnerability patch.

Speaking of which, that whole TCO thing really bends my biscuits. It's just loaded with sly neglect of not entirely apparent costs, of which the year-long critical vulnerability update is one of the more egregious.

During that time, your pants are down if anyone less ethical discovers the same flaw. It never happens that two scientists make the same discovery in the same year and end up in a priority dispute, according to the industry of socialized risk.

First day: notify the software company and enter info in the database.
-- Info should include specifics, the name of the program, an estimate of severity, and any info which can be released without actually revealing enough of the nature of the bug to exploit it.
-- The web site should handle allowing access to the specifics after the specified time.
-- The software vendor should be able to enter comments.
-- The software vendor should be able to request extensions to the release date.
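A minimal sketch of the disclosure-tracking record the list above describes; the field names, the 60-day default embargo, and the example strings are all assumptions, not a real system:

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class DisclosureRecord:
    """One tracked vulnerability in a hypothetical disclosure database."""
    program: str
    severity: str                 # e.g. "critical", "high", "low"
    public_summary: str           # safe to release immediately
    private_details: str          # withheld until the embargo expires
    reported: date
    embargo_days: int = 60        # assumed default notice period
    extensions: list = field(default_factory=list)  # vendor-requested delays, in days

    @property
    def release_date(self) -> date:
        return self.reported + timedelta(days=self.embargo_days + sum(self.extensions))

    def details_for(self, today: date) -> str:
        """Serve full details only after the embargo; the summary before."""
        return self.private_details if today >= self.release_date else self.public_summary

rec = DisclosureRecord("ExampleApp", "high",
                       "memory-safety bug in parser",
                       "heap overflow in foo(), offset 0x41",
                       reported=date(2010, 1, 5))
print(rec.release_date)                    # 2010-03-06
print(rec.details_for(date(2010, 2, 1)))   # still only the public summary
```

Vendor comments and extension requests would just append to the record; the key property is that release happens automatically unless an extension is explicitly granted.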

To clarify the summary, this guy isn't saying that he's not going to wait for companies to fix exploits before he releases them; he's saying he's not going to tell the companies at all. That, in my opinion, is very irresponsible. If you contact them and say you're going to release the information in 90 days regardless of their progress on a patch, fine, but to not warn them because of a few vendors who don't do their job is harmful to everyone.

Of course, if the big companies that are affected felt it made business sense to do so, the fact that this group is located in Russia could make them easier to deal with. A bit of Microsoft cash slipped into the right unregistered bank account... problem solved, guys are shut up permanently.

It occurs to me that financing international terrorism is a bit of a step up from not fixing exploits in your software. If Adobe was known to finance murder in a foreign country, just what do you think would happen?

Problem is that if you warn a vendor privately, they will either dismiss you outright

Then you proceed with disclosure.

or get a court to sign a gag order against you in a matter of hours.

Has there been a precedent for that?

I have reported security vulnerabilities in the past, and while the fix did take longer than I expected to be reasonable, at all points I was kept notified of the current progress, and I was never "dismissed", nor did anyone threaten me with court gag orders or anything like that. What did I do wrong?

The devil you don't know is less dangerous than the devil you know? Fact is, the guy says he's got holes from Real from two years ago that haven't been patched. Two years isn't enough time, now you want two years and three months?

What he seems to be saying is that he's already told the companies and they've done nothing. A better term for it might be "effective disclosure", to differentiate it from the proven-ineffective "responsible disclosure" advocated by the industry.

They could be providing auditing services. Advertising to the whole IT world that they found a shitload of them might just say: "Hey, we can check if your apps are safe, and perhaps recommend something better if they aren't."

Or is the English language dying a painful death on /. as time passes? The past day's article summaries and headlines are a blend of Yoda backing off the chronic and the broken English that some toy assembly manuals convey.

Seriously, it took me three passes at reading this article headline to understand what the hell it meant. Maybe that's part of the entertainment value that I'm missing???

It's a high concentration of words and/or phrases having overloaded meanings. As technology develops, normal words acquire additional connotations, if not denotations. Since this is a tech-oriented news aggregator, you should select the tech connotation first, then re-parse with non-tech meanings if that fails.

'Drop' in this case can be parsed in the sense of 'vendor drop', meaning 'deliver' or 'drop a bombshell'. Not typical usage, but not uncommon. 0-days obviously refers to vulnerabilities, and confl

I welcome this. In ancient ages past, we put up with "It's a theoretical attack, no one could actually execute it"... to "group X has released a THEORETICAL working example of an attack to the public, so we'll fix it six months after it was revealed to us"... to "Here is how you fail... here is how to make you fail... FAIL!!!"

'responsible disclosure' is just wearing the nice guy badge...

You're the only one wearing the nice guy badge.

I'd rather see "Oh CRAP! This thing in Word is broken!" "Oh CRAP! This thing in Excel is broken!" "Oh CRAP! I went to look at a Britney Spears vid and now can't move my mouse! Why is my DSL light blinking a lot?" And then see it fixed in a day or two (at most), rather than a month or two (if we're lucky).

It seems only slightly less irresponsible to publicly disclose exploits without making companies aware of them than it is for companies to disregard known security flaws in their own products.

RFPolicy struck me as the best compromise, but maybe there's room for a third-party service to hold exploit information in escrow for a defined period of time and then release it. If a company knew that they had a couple of months to fix a problem at the outset, and that nothing was going to stop publication, that could push vendors to actually fix things.

While I don't blame them for releasing two year old vulnerabilities, they're going too far by not giving firms ANY TIME to fix vulnerabilities. Give them six months and then release them, but give them time. This does as great a disservice to users as those firms do by not fixing the vulnerabilities.

If more firms paid bounties for bugs found (as long as responsible disclosure is followed), you'd probably see a whole lot more security researchers content to follow responsible disclosure guidelines. There's no guarantee that they'll keep it all a secret in any case, but to get the cash, you've got to sign a legal form with your company's information or be registered as a valid security analysis firm. One of the biggest issues with these security analysis firms is that most of the time there's no way to tell if it's just a bunch of criminals hiding out under a corporate umbrella, or if they're bona fide security professionals. And no jokes about them being one and the same... there's a huge difference; I've known (and, in the case of those pros, worked with) guys from both sides. If a security firm refuses to be registered or refuses bounties, you know there's something fishy about them and it's time to contact local authorities.

Then again, there's the big problem of many of the bugs that outside security firms report already being known and in a work backlog. The realities of the industry are that capital isn't unlimited, time isn't unlimited, and sometimes important stuff doesn't get done because you just don't have enough qualified developers to throw at the problem. Two years is fairly excessive for a security hole to sit around, but if a security firm is releasing exploits that it discovered and reported 6 months prior just because it "didn't see enough getting done", that's not being passionate about security, that's an attempt to commit extortion.

My eyes started to glaze over but the ecosystem seems to go like this. Researcher discovers vulnerability, sells it to companies that buy that kind of info, then reports it to the company that made the flawed software.

One assumes that all the big anti-virus vendors buy the info from the vulnerability clearinghouse thus giving their users some measure of 0-day protection. Eventually the flawed software should be patched and all is well.

The third option: "Dear developers of [insert product name], I've found a security issue in [insert product name]. Details are attached. I give you 14 days before releasing this information publicly."

Exactly. The GP is seeing the world in black-and-white, where reality has many gradations in between.

Naive responsible disclosure: give it to the vendors. They do nothing. The bad guys figure it out. Everyone loses.

Irresponsible disclosure: hand out a zero-day to the bad guys. Everyone loses.

Effective responsible disclosure: disclose it to the vendors along with the promise to disclose it publicly on a scheduled date.

It should be noted that the third way is how CERT does things, and is the only way that the end users stand a chance of not getting screwed. It is important to make it clear that the vulnerability will be released to the public on that date no matter what. It is also important to make this date no more than two months in the future. Make the time frame too short and you're accused of creating a zero-day exploit. Make it too long and they won't bother looking at it until a week before, then they'll tell you that they can't fix it in time, and they'll accuse you of creating a zero-day exploit. There's a middle range in which it's close enough to scare the pants off of the manager types but far enough out that the fix can actually happen.

Most importantly, though, if the vendor doesn't fix it, you must disclose it anyway. Otherwise you lose all credibility, and vendors will simply put off fixing the problem because they'll assume that you will keep backing down.

Basically what this is about is choice. The companies in question have been notified of the security flaws in their product. They have as of yet not fixed said flaws. They have instead prioritized other projects above fixing the bugs. The choice was given to the companies in question. The choice is now being removed due to their inaction.

I will take irresponsible disclosure any day over people not fixing known bugs. This is forcing their hand and that is why they don't like it.

All in all, tough shit for the companies involved.

In an ideal world security flaws would be fixed when they are discovered. I think we can all agree this is not an ideal world.

Except he did not contact the vendors. He said in the past he has contacted some and they didn't fix it, so now he has given up on all vendors and does not disclose the information at all for any vendors.

I work for one of the affected projects and can tell you that we did not get contacted by them via any of our normal, well publicized methods (email, phone calls, etc...).

I agree that if a vendor does not reply then it is totally okay to disclose it to force their hand. However, disclosing it immediately, with no chance for the vendor to respond at all, is another matter.

That's really not fair either. Many bugs that are security related are a result of interactions that people simply didn't think of as possible. While bug-free code is desirable, and possible, would you be willing to pay 10 times more for a "provable" product? 100 times more?

Look at the space shuttle code. Provable software, with something like 2 man-years per line of code on average? Is that realistic for consumer or even pro commercial software?

On the flip side, I abhor this type of disclosure as well. I think 0-days should be forwarded to the vendor and given at least 90 days before release. Hell, set a timer on it; even the following timeline would be OK(ish):

discover exploit: notify vendor
notification + 1 week: notify world of nonspecific vuln in product
notification + 1 month: notify world of type of vulnerability
notification + 2 months: notify world of specific vuln
notification + 3 months: notify world with exploit code.

-nB
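A staged timeline like the one above can be written down directly as a schedule. This is only a sketch: the stage names come from the comment, while the choice of 30/60/90 days to approximate "1/2/3 months" is an assumption.

```python
from datetime import date, timedelta

def staged_schedule(notified: date) -> dict:
    """Escalating disclosure stages keyed by what gets released when.

    Months are approximated as 30-day blocks for simplicity.
    """
    return {
        "notify vendor":               notified,
        "nonspecific vuln in product": notified + timedelta(weeks=1),
        "type of vulnerability":       notified + timedelta(days=30),
        "specific vuln":               notified + timedelta(days=60),
        "full exploit code":           notified + timedelta(days=90),
    }

for stage, when in staged_schedule(date(2010, 1, 11)).items():
    print(f"{when}  {stage}")
```

The escalation gives the vendor increasing pressure without handing out a working exploit on day one: each stage leaks only enough to prove the researcher isn't bluffing.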

Legerov said. For example, he said, “there will be published two years old Realplayer vulnerability soon, which we handled in a responsible way [and] contacted with a vendor.”

I think that apparently the vendors aren't doing a damn thing to patch a good amount of these reported vulnerabilities if they are being reported in a proactive manner. Seems as if once the exploits are running rampant in the wild then the vendors scramble to develop patches. Not the best business practices all the way around, but it's the way it is.

I think that apparently the vendors aren't doing a damn thing to patch a good amount of these reported vulnerabilities if they are being reported in a proactive manner. Seems as if once the exploits are running rampant in the wild then the vendors scramble to develop patches. Not the best business practices all the way around, but it's the way it is.

It's most likely a case of resource management and insufficient resources available. Businesses exist to make money. Features make money, bugs cost money. So, given NNN amount of money, do you:

A) Fix the bugs that people are experiencing problems with RIGHT NOW with exploits in the wild, or

B) Fix the bugs that are "theoretical" and MAY be exploited at some point in the future if somebody else finds it?

Now, the clueful would note that the set of B includes the set of A, but for those who are living close to the edge, A is where the attention goes, and that's why you see announcements like this one.

It's most likely a case of resource management and insufficient resources available. Businesses exist to make money.

And as long as we keep putting up with shoddy software, they'll continue to sell it to us. Bugs cost money, as you said, so I would think they might put a few more resources into getting rid of the bugs before they shovel it out the door.

Fix the bugs that are "theoretical" and MAY be exploited at some point in the future if somebody else finds it?

You are asserting that the exploit is "theoretical" (why the quotes?) and might be used in the future, without any evidence that this is even the most common case, much less the only case. The problem with an undisclosed vulnerability is that unsuspecting users believe they have more security than in fact they do. They expect, at the very least, to be informed when a vulnerability is known to exist.

Clearly the balance of incentives has been wildly off for some time now. Researchers finding possibly big-cost vulnerabilities and reporting them to vendors/middlemen have found that the responses to their discoveries have been slow. Additionally, the payouts for these researchers have been relatively low.

They've been slow because companies have very little incentive to actually fix these bugs, provided that the rate of exploitation of these bugs is sufficiently low.

Legerov said. For example, he said, “there will be published two years old Realplayer vulnerability soon, which we handled in a responsible way [and] contacted with a vendor.”

I think that apparently the vendors aren't doing a damn thing to patch a good amount of these reported vulnerabilities if they are being reported in a proactive manner. Seems as if once the exploits are running rampant in the wild then the vendors scramble to develop patches

It's most likely a case of resource management and insufficient resources available.

One word can resolve the difference between responsible reporting and 0-day motivation:

embargo

The reporting security group still goes through responsible-reporting methodology, but adds a proposed date on which the details will be reported more fully to the public.

I work for an enterprise-level network device manufacturer, and anyone in that line of work knows damn well that remote vulnerabilities are the harbinger of death if they're not addressed in a timely fashion. Yet, motivation to assign resources to fix it still lags.

Perhaps this is yet another reason to use only free (libre) software: you do not have to rely on a greedy businessman to decide that a bug is worth fixing. Of course, this means that "responsible disclosure" flies out the window, since you cannot go around keeping bugs secret if you want random people to fix them, but given the content of TFA...

A) Fix the bugs that people are experiencing problems with RIGHT NOW with exploits in the wild, or

B) Fix the bugs that are "theoretical" and MAY be exploited at some point in the future if somebody else finds it?

But how do you know if it's being exploited in the wild or not? Vendors are unlikely to know, security researchers and the anti-virus companies might. The best exploits are written so the end-user doesn't notice anything bad has happened.

I think that apparently the vendors aren't doing a damn thing to patch a good amount of these reported vulnerabilities if they are being reported in a proactive manner. Seems as if once the exploits are running rampant in the wild then the vendors scramble to develop patches. Not the best business practices all the way around, but it's the way it is.

I'd feel better if, rather than lumping all the vendors together and 0-day disclosing vulnerabilities found in any of them, Intevydis tracked which vendors failed to act on responsible disclosure and singled those out.

This doesn't sound like either responsible or irresponsible disclosure. It sounds like plain old extortion. Notice he does not say he provided the vendor with the vulnerability info, just that he contacted the vendor. Calling a vendor and saying "you have a vulnerability; pay me X and I will tell you what it is; don't pay and I'll tell everyone else" is not 'being responsible', it is extortion. Given that he must now resort to a blanket 'from now on I'll just release it' threat, he must be getting pretty desperate. Frankly, I have no trouble believing that IBM/Tivoli and Sun/MySQL would not bat an eye at an extortion attempt, but I find it hard to believe they would not fix an actual vulnerability if it was reported as such.

Tell "what does not kill me makes me stronger" to a brain-damaged man in a wheelchair. If there were no attacks, vulns would be little problem. As it is, your AV takes up a good chunk of your computer's resources and the botnets still send tons of spam.

Yes, but it's unrealistic to expect that if researchers didn't publish attacks, there wouldn't be any.

Somebody found the hole. It can't be that they're the only person on the planet who could possibly figure it out. Eventually somebody else will find it too, or maybe already has. If that person happens to have something malicious in mind, they won't publicly disclose it. They'll exploit it for their own gain, or sell the information to people who will do that.

Responsible Disclosure is like "pro choice" or "pro life". It is a deliberately positive term for purely demagogic reasons. You can't be for irresponsible disclosure, just like you can't be against choice or against life.

The protocol for publishing information about exploitable software bugs is an intensely debated topic and the choices affect multi-billion dollar businesses where it hurts them most: The bottom line. Do not for a second believe that anyone in this game argues for the sake of rational discourse alone.

...usually. Sometimes pro-life can mean they want you to "choose life". Although that's not the way it usually goes, since the noisiest part of the "pro life" crowd are fundie nutbags who want to meddle in everyone's lives.

The irresponsible party in this case, is the software vendor. If the vendor can't clean up their act, and at least work on fixing 0-day exploits, then public disclosure/humiliation is probably a good way to get at least some vendor to sit up, take note and do the right thing the next time around.

This sounds like a good case for establishing a procedure.

1. Contact vendor about exploit, with an expiry date.
2. Release information about the exploit once the date has expired, irrespective of whether the bug is fixed and the fix deployed.

The term "responsible disclosure" is newspeak for "keep your mouth shut". The alternative to 'responsible disclosure' is that the vulnerabilties continue to exist for sometimes years, with wild exploits happening perhaps unknown for long periods of time.

I think it's okay to notify the company and give them time to fix the bug, but time on the order of years is completely unreasonable. On the Internet, a year is a very, very long time.

tl;dr: Of course I prefer the company fixing the bug, but in case they fail at that, I at least want to know of it and be on the same level as the crackers.

You got something wrong: the position of the crackers is that it’s the companies who act irresponsibly, e.g. by doing nothing when they should close the bugs, or by suing those who found some hole. Which I agree with. I’d go so far as to offer a prize to anyone who can demonstrate an exploit for my software, with that prize always being worth more than the exploit would fetch elsewhere.

Agreed - inform the vendor with all the details. Same day, publicly announce that the vulnerability has been discovered, but with no details. At a specified date (60-90 days later) make full details public.