
msm1267 writes "There are a lot of echoes of the disclosure debate in the current discussions about vulnerability exploit sales. The commercial exploit market has developed relatively quickly, at least the public portion of it. Researchers have been selling vulnerabilities to a variety of buyers – government agencies, contractors, other researchers and third-party brokers – for years. But it was done mostly under cover of darkness. Now, although the transactions themselves are still private, the fact that they're happening, and who's buying (and in some cases, selling) is out in the open. As with the disclosure debate, there are intelligent people lining up on both sides of the aisle and the discussion is generating an unprecedented level of malice."

One difference this time around is that there are large piles of currency involved, not to mention the privacy and security (and in some cases, the physical safety) of people in countries around the world. Governments are buying exploits and using them for a variety of purposes. Some are using them to spy on their own citizens, while others are using them to attack their enemies' networks. And government contractors and other private buyers are purchasing them for their own uses, as well.

Agreed. This is an opinion piece by some unknown author on some unknown site (to me). I found no bio info for the author on their site and question the professionalism of websites that cannot figure out how to update their copyright date (2011).

The only interesting exploit is one that hasn't been patched, right? So anyone who discovers, sells, or buys an exploit knows of a vulnerability and is choosing not to disclose it.

By not disclosing a vulnerability, you are allowing others to be vulnerable. It's hard to argue that this is ethical behavior...

Here's an analogy: what if, for every nuke the U.S. destroyed, a nuke disappeared from every other nuclear arsenal in the world? That's what it's like: by keeping a vulnerability secret, it can be used against anyone running the software. By disclosing the vuln, everyone can patch, disable, or protect the vulnerable software.

It is kind of a win-win. Either people who stand to lose from vulnerabilities buy them, or people who stand to gain from them will. If the practice is made illegal, it will simply drive disclosure underground, which will greatly weaken all sorts of organisations as the proportion of hostile buyers increases immeasurably.

Sounds like the free market to me, buyers and sellers auctioning off products in a competitive environment. Perhaps corporations with their billions of quarterly profits can reinvest that money into buying exploits so they can fix them.

So long as people *CAN* patch / disable / protect the vulnerable software.

With the rise of things like locked / encrypted bootloaders, app stores, and the lack of updates without a new hardware purchase, I'd say that idea will soon be (if not already) restricted to a very small class of citizens. (I.e., only those who would care about such things. The majority will just roll over and take it, as usual.)

That, and if the summary is to be believed, I would also imagine that the governments of the world will want to outl

I don't see how that is definitely unethical. For example, must antivirus vendors disclose the signatures they use to identify infections? That seems unethical by your reasoning. If a brick-and-mortar store finds methods to reduce loss, must it share the idea? Ditto automotive improvements? Hell, for that matter, if I come up with a whiz-bang idea for improving a car, can I sell it selectively to manufacturer

Here's the counterargument. Let's say you accidentally discover a vulnerability in a bank's web site by mistyping a URL and ending up at a different customer's account. You write up your finding, and you privately send it to the bank's security team and ask for nothing in return other than that they act quickly to protect your account. And let's say they turn around and accuse you of hacking them under the Computer Fraud and Abuse Act, and provide your own written report to the Secret Service as evidence against you. Who is the ethical party?

How would money alter the ethics? If you gave them the details of the flaw and asked the bank for a $1,000 reward, would that change things? What if you offered to tell the bank of the flaw in exchange for $1,000? If they don't pay, are you ethically bound to not sell the vulnerability to a third party?

What if you don't know of any specific flaw in your bank's site, but you would like to make some side money as a pen tester; so you send them a letter asking if they have a "pay for vulnerability policy", and they respond by placing a hold on your account and calling in the Secret Service? Who is acting ethically in that scenario?

What if you fear retribution so you ask this question anonymously? Are you more or less suspicious to the bank? Should they be more or less likely to seek your prosecution?

What if you exploit the vulnerability personally to view Paris Hilton's bank balance, but you don't do anything malicious to her account? What if you disclose that balance information to the tabloids? What about viewing the bank data of a non-celebrity?

And if not the bank, which third party might you sell it to? A security researcher? A competing bank? Microsoft? A hacker? Some random alias on darkode?

Different people are likely to view these behaviors differently, including banks, law enforcement, hackers, computer security professionals, lawmakers, bank customers, and the general public. Different legal cases with different judges are likely to interpret these differently, as well.

There are few clear-cut lines among these questions that say "here are the exact boundaries of ethical behavior."


I'm sorry, I don't see a single one of those as vague ethical quandaries. You seem to be confusing "ethics" with "what a bank/government would do in today's litigious, paranoid, and ignorant society."
To answer your questions in order though:

1) You were acting ethically, the bank was not.
2a) No, you're now simply asking for a tip for services already rendered.
2b) Dramatically, by introducing an artificial and selfishly motivated barrier to aiding those in need.
2c) You are ethically bound to not "sell" t

This is something I am worried about. Politicians are treating security flaws as weapons in their cyber wars. Are we going to see more things like the RSA encryption export ban, but for exploits? I can see such laws getting really bad really fast.
Will I be able to work with foreign coders to fix bugs, or do I need to report them all to the government? Will I be seen as a terrorist for submitting a pull request for a security feature?

Also, how do you enforce it without nuking everybody's privacy, which so far, rightfully, remains the bigger issue? It's not like the entities that are getting hacked are broke and new; for the most part they don't invest properly in mitigation. Remember Sony's PlayStation Network getting hacked hard (if I remember correctly), and the PR generated from that... Sony hasn't been hacked in such a high-profile attack since, as far as I know (DDoS doesn't count).

It's clear that reporting a vulnerability to someone in a position to actually fix it (such as the developer of the software) often doesn't work so well. We've seen severe negative effects as developers strive to cover up rather than address the vulnerability, attacking the messenger instead. What better way to escalate a bug and get it fixed than to sell it to the highest bidder and see it get exploited in the field by bad actors?

National Disclosure Centers are only as good as the organizations that take their disclosures.

I worked pretty closely with the DOC CIRT when it was first formed. It did not matter how many CIOs were involved in the process of forming it, or what they agreed to do, or what channels of communications were established. There were always groups that would not / could not work to address issues when they happened.

I don't think passing more laws has much effect on the issue either. Laws are regulatory and fall ve

If the bug is reported to the developer and they do nothing, I don't feel bad for the developer and I can understand why the person who discovered it wants to get paid.
If I lived in a world that didn't require money, it would be different.

There is nothing different between this and the practice of huge companies selling death machines (tanks, bomber jets, missiles, and so on) to the militaries of the world, or to the occasional non-state paramilitary planning a takeover. How is this any different? Security researchers work to create a product, i.e., a vulnerability, then sell that information, the product of their effort, to a willing customer.

It's a nasty business; you can question the morals and ethics of it, but it really is no different from companies that sell guns and bombs to whatever crackpot thug has a truck full of cash or gold bars...