Why does it seem like there isn't really much ethics in security research?

I often see security researchers discover a new technique for breaking into computers or evading detection, and the stated goal is to force white hats to fix a problem that didn't exist until the researcher created it. A vulnerability is a known weakness, and if black hats aren't using it, then developing tools and techniques that black hats can use, even if you provide a fix, just makes defense more complicated. It's one more attack we have to look out for.

I don't think we'll ever see a black hat, like a carder, discover a new technique law enforcement could use to catch them and then make it public to force other black hats to fix the problem.

If you are a security researcher and you keep your research to yourself, no one knows about the attack or how to detect it. In releasing information about the attack you are creating awareness of the attack which in turn creates awareness of how to detect/prevent the attack.

I'm sure there are a number of times when a "whitehat" researcher provided valuable, previously unknown attack information to "blackhats." But I think more often than not, the "whitehat" is providing the general public information about attacks that are already going on.

Regarding your definition of a vulnerability being a known weakness, I would argue with that. You are vulnerable whether you know about it or not.

ziggy_567 wrote:If you are a security researcher and you keep your research to yourself, no one knows about the attack or how to detect it. In releasing information about the attack you are creating awareness of the attack which in turn creates awareness of how to detect/prevent the attack.

I'm sure there are a number of times when a "whitehat" researcher provided valuable, previously unknown attack information to "blackhats." But I think more often than not, the "whitehat" is providing the general public information about attacks that are already going on.

If there is already evidence of the technique being used in the wild, then yeah, I'm all for creating awareness and fixing it. But a lot of the time it's a new technique that even the researchers say they don't think is occurring in the wild.

ziggy_567 wrote:Regarding your definition of a vulnerability being a known weakness, I would argue with that. You are vulnerable whether you know about it or not.

It's not about whether I know about the vulnerability; it's about no one at all knowing about it. If no one knows about the vulnerability, then there is no one to exploit it. A technique for turning lead into gold may well be discovered someday, but at the moment we aren't vulnerable to that, because no one is doing it, because no one even knows how.

I'd rather know about it before the bad guys find out so that I can defend proactively.

If there weren't "whitehat" researchers out there doing research and publicly disclosing it, we would only be able to reactively defend our networks. You get into the same type of rat race that AV vendors are in with signature based detection. You can't detect until you have a sample. When you collect a sample, you're already owned and you will continue to be owned up until the point when the AV vendor gets a signature out.
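A toy sketch of why signature-based detection is inherently reactive (all names and byte patterns here are invented for illustration; this is not a real AV engine):

```python
# Toy illustration: a signature scanner can only flag samples whose
# byte patterns are already in its database.
SIGNATURES = {
    b"EVIL_PAYLOAD_v1": "Trojan.Example.A",  # known sample -> detectable
}

def scan(sample: bytes) -> str:
    """Return the signature name if a known pattern matches, else 'clean'."""
    for pattern, name in SIGNATURES.items():
        if pattern in sample:
            return name
    return "clean"

# A brand-new variant slips through until someone captures a sample
# and pushes out a signature for it -- by which point you're already owned.
print(scan(b"...EVIL_PAYLOAD_v1..."))   # Trojan.Example.A
print(scan(b"...EVIL_PAYLOAD_v2..."))   # clean (undetected until sampled)
```

The defender's detection lags the attacker's first use by exactly the time it takes to capture a sample and ship the signature.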

Also, how do you determine what other people know and don't know? When a researcher states that he/she doesn't think their discovered vulnerability is being exploited in the wild, that's conjecture on their part. It also doesn't take into account whether some "blackhat" researcher is on the verge of making the same discovery.

Last edited by ziggy_567 on Tue Feb 14, 2012 1:08 pm, edited 1 time in total.

ziggy_567 wrote:I'd rather know about it before the bad guys find out so that I can defend proactively.

If there weren't "whitehat" researchers out there doing research and publicly disclosing it, we would only be able to reactively defend our networks. You get into the same type of rat race that AV vendors are in with signature based detection. You can't detect until you have a sample. When you collect a sample, you're already owned and you will continue to be owned up until the point when the AV vendor gets a signature out.

Being proactive is important in almost everything, but I don't think it's appropriate for security researchers to develop new tools and techniques that are going to be used against us. They're fixing a problem that didn't exist until they discovered it. Even if they offer some solutions for mitigating the attack (sometimes they don't even do that), it's still just mitigation. It makes defense more complicated because it's yet another thing we have to defend against.

I think security researchers should focus on researching attacks that are currently being used in the wild and help find ways to defend computers, rather than create new ways to attack them. If you're in a war, you don't want to develop new weapons that are only going to be used against you just for the sake of being proactive, in case the other country might have created the weapons on their own.

ziggy_567 wrote:Also, how do you determine what other people know and don't know? When a researcher states that he/she doesn't think their discovered vulnerability is being exploited in the wild, that's conjecture on their part. It also doesn't take into account whether some "blackhat" researcher is on the verge of making the same discovery.

You can never prove a negative, but they can research it... they should be good at it. Google, network with other security researchers, analyze communications, infiltrate hacker chat rooms, reverse engineer malware, run honeypots, the Web Server Log Project, etc.

Why are you under the impression that there are no ethics in security research? From my perspective, it seems like most people try to adhere to responsible disclosure procedures. Maybe you're not hearing about those because they're not major news (i.e. quietly being credited in a patch report). Irresponsible disclosure seems like a surefire way to burn bridges in the industry, and most professionals are looking to further their career, not sink it. Some people may only be after notoriety, but I do not think they are the majority.

I think it's foolish to assume that no one else knows of a vulnerability. Like Ziggy said, if a vulnerability exists, it's a vulnerability regardless of how many people know about it. I know people that discover dozens by just letting fuzzers run in the background. If there are hundreds or thousands of others doing that as well, more than one person will stumble upon the same vulnerability sooner or later.
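A minimal sketch of that background fuzzing, assuming a deliberately fragile stand-in target (everything here is invented for illustration; real fuzzing targets parsers, file formats, and protocol handlers the same way, just at far larger scale):

```python
import random

def fragile_parser(data: bytes) -> int:
    """Stand-in for a target with a hidden bug: chokes on a NUL byte."""
    if b"\x00" in data:
        raise ValueError("unexpected NUL byte")   # the 'vulnerability'
    return len(data)

def fuzz(trials: int = 10_000, seed: int = 1) -> list:
    """Throw random inputs at the target and collect the crashing ones."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        data = bytes(rng.randrange(256) for _ in range(rng.randrange(1, 16)))
        try:
            fragile_parser(data)
        except Exception:
            crashes.append(data)     # a crash worth triaging
    return crashes

print(len(fuzz()) > 0)  # True: random inputs eventually hit the bug
```

The point is that nothing here required knowing the bug in advance; anyone who can run the target can stumble onto the same crash, which is why assuming you are the sole discoverer is risky.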

Maybe you only identify it as a DoS vulnerability while someone else has nearly completed a stable exploit for it. Is it ethical to withhold information until a vulnerability is being widely exploited? What if the vulnerability is being exploited in targeted attacks and isn't shown as "active in the wild?"

If the vendor can't/won't patch it in a timely manner, it's still beneficial to notify AV, IPS, and similar vendors that can compensate by beefing up other security controls. Likewise, administrators may be able to take steps to protect themselves as well (i.e. disabling a non-essential service that has a critical remotely-exploitable vulnerability).

Going to side with Dynamik and Ziggy: if researchers are not doing the research and letting us know, then we are at a disadvantage when it comes to defending. There is a big debate about full disclosure and when it is appropriate. If the vendors won't fix the issue, then we are left to use what tools we can to better protect against the vulnerability.

hmmm all of a sudden I am getting a vision of a white-hat paraphrasing Jack Nicholson from A Few Good Men

might go something like this...

CISSP: *Colonel Jessep, did you order the Code written?*
Judge Randolph: You *don't* have to answer that question!
White-Hat Researcher: I'll answer the question! [to CISSP] You want exploits?
CISSP: I think I'm entitled to.
White-Hat Researcher: *You want exploits!?!?*
CISSP: *I want the vulnerability!*
White-Hat Researcher: *You can't handle the vulnerability!* [pauses] Son, we live in a world that has firewalls, and those firewalls have to be guarded by Network Admins with tools. Who's gonna do it? You? You, Symantec? I have a greater responsibility than you could possibly fathom. You weep for RSA, and you curse the ISC2. You have that luxury. You have the luxury of not knowing what I know. That RSA's breach, while tragic, probably saved networks. And my existence, while grotesque and incomprehensible to you, saves networks. You don't want the truth because deep down in places you don't talk about at parties, you want me managing that firewall, you need me managing that firewall. We use words like APT, code, zero-day. We use these words as the fiber backbone of a life spent defending something. You use them at a CIO luncheon. I have neither the time nor the inclination to explain myself to a man who surfs the net under the blanket of the very security that I provide, and then questions the manner in which I provide it. I would rather you just said thank you, and went on your way. Otherwise, I suggest you pick up a network analyzer and configure a spanned port. Either way, I don't give a damn what you think you are entitled to.
CISSP: Did you order the Code written?
White-Hat Researcher: I did the job I...
CISSP: *Did you order the Code written!?*
White-Hat Researcher: *You're Goddamn right I did!*

Put together in a rush. I went with a CISSP because ISC2 would consider developing exploits and sharing them with the public against their Code of Ethics, but there is debate on that as well.

dynamik wrote:Why are you under the impression that there are no ethics in security research? From my perspective, it seems like most people try to adhere to responsible disclosure procedures. Maybe you're not hearing about those because they're not major news (i.e. quietly being credited in a patch report). Irresponsible disclosure seems like a surefire way to burn bridges in the industry, and most professionals are looking to further their career, not sink it. Some people may only be after notoriety, but I do not think they are the majority.

I think it's foolish to assume that no one else knows of a vulnerability. Like Ziggy said, if a vulnerability exists, it's a vulnerability regardless of how many people know about it. I know people that discover dozens by just letting fuzzers run in the background. If there are hundreds or thousands of others doing that as well, more than one person will stumble upon the same vulnerability sooner or later.

Maybe you only identify it as a DoS vulnerability while someone else has nearly completed a stable exploit for it. Is it ethical to withhold information until a vulnerability is being widely exploited? What if the vulnerability is being exploited in targeted attacks and isn't shown as "active in the wild?"

If the vendor can't/won't patch it in a timely manner, it's still beneficial to notify AV, IPS, and similar vendors that can compensate by beefing up other security controls. Likewise, administrators may be able to take steps to protect themselves as well (i.e. disabling a non-essential service that has a critical remotely-exploitable vulnerability).

I wasn't suggesting all security researchers are unethical. And it's not about the type of disclosure; it's that a lot of researchers seem to focus their research on breaking security rather than fixing it. For example:

1. A researcher who creates a new technique for malware to evade detection and doesn't provide a single way to detect it.

2. The researchers who discovered a critical flaw in BGP, one they said wasn't being used in the wild and that they had no idea how to fix, yet they told people about it anyway. No one was using it, there were no ways to mitigate it, and now blackhats know about it. I don't think we're better off now.

3. A researcher who gives a presentation on a new stealthy botnet he created. A lecture on new ways to keep a botnet from being detected doesn't help defend computers.

4. Researchers who develop new anti-forensic tools & techniques. Helping attackers get away with crimes is not helping defend computers.

etc.

Say the US signed a treaty not to use land mines. Do you really think it would be a good idea for the US to publish research on how they created a new and improved land mine that will be used against them? Sure, you could say you're being proactive, but overall you're going to be a lot worse off.

Last edited by Eleven on Tue Feb 14, 2012 2:54 pm, edited 1 time in total.

I will agree that providing fixes along with what is broken provides significant value, but raising awareness that something is broken may help someone else devise a solution on the back of the breaker's research, or help organizations avoid unnecessarily insecure implementations of the flawed technology in upcoming projects.

If I don't know something is possible (like anti-forensics), I will make flawed assumptions about what is happening on my network. What if I did not know about the latest cool anti-forensic technique and made a rapid judgment based on faulty information that cost some poor schlub his job, or sent him to jail? I just might be able to compensate for these failures if I have enough information to understand how the attack is carried out, or at least be able to educate management so we don't jump to conclusions.

Maybe that malware evasion technique could be detected in other ways not previously thought of; maybe I could write a Snort signature to detect it. But if, as a security researcher, I don't know about the evasion technique, how would I even know where to start my research, or even understand it was necessary?

Burying your head in the sand does not further our collective knowledge. Understanding of real-world techniques does. This whole thread comes off like a huge troll. I am truly astounded; do we really need to have this discussion in 2012?

tturner wrote:I will agree that providing fixes along with what is broken provides significant value, but raising awareness that something is broken may help someone else devise a solution on the back of the breaker's research, or help organizations avoid unnecessarily insecure implementations of the flawed technology in upcoming projects.

If I don't know something is possible (like anti-forensics), I will make flawed assumptions about what is happening on my network. What if I did not know about the latest cool anti-forensic technique and made a rapid judgment based on faulty information that cost some poor schlub his job, or sent him to jail? I just might be able to compensate for these failures if I have enough information to understand how the attack is carried out, or at least be able to educate management so we don't jump to conclusions.

Maybe that malware evasion technique could be detected in other ways not previously thought of; maybe I could write a Snort signature to detect it. But if, as a security researcher, I don't know about the evasion technique, how would I even know where to start my research, or even understand it was necessary?

Burying your head in the sand does not further our collective knowledge. Understanding of real-world techniques does. This whole thread comes off like a huge troll. I am truly astounded; do we really need to have this discussion in 2012?

</rant>

As I said, a vulnerability has to be known to be a problem. There is a difference between not knowing "the latest cool antiforensic technique" that is currently known and being used in the wild, and a security researcher creating new, previously unknown anti-forensic techniques that help attackers get away with crimes.

Security is a journey, not a destination. So when a security researcher creates a new anti-forensic technique in an effort to force the forensic community to fix a vulnerability that wasn't being exploited in the first place, and doesn't even give solutions to the problem they've discovered, the researcher is just adding weight to our backs as we go on the never-ending journey of trying to reach security.

I think my land mine analogy applies to this situation pretty well.

Last edited by Eleven on Tue Feb 14, 2012 4:45 pm, edited 1 time in total.

Do you seriously think that whitehat security researchers are leading the curve here? There may be pockets of genius that generate additional risk by teaching blackhats techniques they did not know about, but by and large we are playing catch-up. The bad guys have enormous resources to deploy here, far more than we can hope to match. Just because we don't know about that cool new anti-forensic technique does not mean it is not being utilized already; it just means we are blind to its usage. It's not like the bad guys share all their tips and tricks in some secret club. They are monetizing these attacks, and just like any corporate IP, it's often a trade secret. Sometimes you see them bundled in exploit packs and sold to other criminal groups, but the really juicy stuff is kept closely guarded from what I've seen. What you are suggesting is highly dangerous and would be a huge step backwards in the progress on information sharing that we have spent years fighting for.

If I discover an attack technique, I don't know if it's really new or not. I can look at previously published research, but that still doesn't tell the whole story. Black hats may have already discovered the technique and could be using it without sharing it publicly. Organized crime or government agencies may already know about the technique, and they certainly are not going to share it. If I publish my results, perhaps after working with vendors, I can help people move forward and start eliminating the problem. If I keep silent, I'm not helping, and I won't know whether the technique is being used or not.

Even if I knew (through omniscience) that nobody in the world was aware of my technique, it could still be a bad idea to sit on my information. The problem is that software is vulnerable to my attack technique whether I share the details or not. By the time someone else discovers and publishes it, there may be more vulnerable applications in the world than when I discovered it. This is especially true if my technique affects an emerging technology or one that is likely to be leveraged by other software.

For the researcher personally, if he can't publish there is no point in doing research. Sure, he could focus on pure defense, but he's mostly ignoring the attack side or taking a passive approach to it. This leads to a skill imbalance between the white hats and the black hats. If the black hats are the only ones who know how to break systems, or who can discover new techniques, then the white hats are falling behind.

tturner wrote:Do you seriously think that whitehat security researchers are leading the curve here? There may be pockets of genius that generate additional risk by teaching blackhats techniques they did not know about, but by and large we are playing catch-up. The bad guys have enormous resources to deploy here, far more than we can hope to match. Just because we don't know about that cool new anti-forensic technique does not mean it is not being utilized already; it just means we are blind to its usage. It's not like the bad guys share all their tips and tricks in some secret club. They are monetizing these attacks, and just like any corporate IP, it's often a trade secret. Sometimes you see them bundled in exploit packs and sold to other criminal groups, but the really juicy stuff is kept closely guarded from what I've seen. What you are suggesting is highly dangerous and would be a huge step backwards in the progress on information sharing that we have spent years fighting for.

No, I don't think security researchers are leading the curve, I think they sometimes do research that benefits blackhats more than whitehats.

I'm aware that there are tools and techniques out there blackhats are using we don't know about. That's why part of security research should be threat intelligence.

Information sharing is great. The problem is if the information like anti-forensics consists of problems without any solutions then I don't think the "research" has helped us. It's helped the bad guys.

I think the premise of your argument is flawed. How is security supposed to be improved if it isn't shown to have flaws? If good/neutral parties aren't doing the research, that means that the only ones doing it will be the attackers (and they are; their livelihoods depend on it). This would consequently put us in a perpetual reactive state. I don't see how you believe that is a favorable alternative. You're criticizing researchers for not providing better alternatives, but you're not doing that either.

I also don't think your landmine analogy is apt. It would be more appropriate to compare this to someone discovering how to make more powerful and effective explosives, and then you complaining that it's a terrible discovery because it could be used in landmines against us.

Just because a technology can be abused doesn't mean it's a bad technology. Tor, TrueCrypt, steganography, etc. can certainly be used to cover up crimes. However, they can also be used by individuals persecuted under oppressive governments. Should everyone be deprived of the benefits just because some will abuse the technology?

dynamik wrote: This would consequently put us in a perpetual reactive state.

Key phrase is reactive. We are already at a disadvantage, since many corporations do not want to invest in information security until a problem is discovered. By that time the issue could have been present for months without being detected. Typically in a targeted attack, the tools used are brand-new exploits and malware. They are customized for that victim's network and will very rarely appear elsewhere. They may utilize a similar vulnerability, but the signature is so different that AV products will not detect it, and IPS signatures may not see the traffic since it is designed to look like normal chatter.

We need the researchers to find these things in the hope we can start becoming more proactive. There have been some 0-days released that had been around for years. So the question would be: how long has the vendor known about it, and how long have the blackhats known about it and used it? Should the researchers also figure out how to prevent these attacks? Sure, but honestly, that should be for the software vendor to determine, since the researcher may not have access to the source code needed to fix the issue. Many pen testers and vulnerability assessors will inform clients how to better protect against the issues they find during a test. Some fixes involve reconfiguring devices and software, and some involve patching. Some exploits cannot be protected against directly, but you can prevent an attacker from getting to the point where they can use the exploit.

As for the tools, there has been talk among many government bodies about making the use of "hacking tools" illegal, in the hope that it would deter hackers from performing their attacks. All that would do is take the tools out of the hands of the good guys; the bad guys will continue to use and develop them. Remember, we have to abide by the laws of our countries; that is what separates us from the blackhats.

Superman would never kill Lex Luthor, but Lex would attempt numerous times to kill Superman (up until all the What-Ifs and such; I miss comics sometimes). That is what separated the hero from the villain. He could end Lex anytime he wanted, but then he would be no better than Lex. The same goes for blackhats vs. whitehats: we could easily start developing and deploying malware against the blackhats of our own free will, but if we are not working under the orders of the military or law enforcement, then we become vigilantes and our creations could end up working against us. We are trying to keep the malware and exploits down, not increase them.