Mr.
Palmer is the President and Chief Executive Officer of Blue Lane
Technologies. Previously, Jeff served as President of GetThere, the
leading online corporate travel procurement solution, based in Menlo
Park, CA. While at the company, GetThere grew to become the largest
factor in the corporate online travel market, holding 70% market share
among the top 100 US corporations, #1 share in the overall market with
more than 3000 customers worldwide, and transaction volumes exceeding
one million per month. GetThere went public on the NASDAQ in 1999 and
was acquired by Sabre Holdings (NYSE: TSG) for $750 million in 2000.

Earlier in his career, Mr. Palmer served in a variety of general
management and marketing officer roles at Tristrata Security, Memco
Software, Pilot Software and BBN, establishing experience in enterprise
networking, security, and applications software.

Mr. Palmer holds a Bachelor of Science degree from the Massachusetts
Institute of Technology and an MBA from Harvard Business School.

Allwyn Sequeira is Senior Vice President of Product Operations at Blue
Lane Technologies, responsible for managing the overall product life
cycle, from concept through research, development and test, to delivery
and support. He was previously the Senior Vice President of Technology
and Operations at netVmg, an intelligent route control company acquired
by InterNap in 2003, where he was responsible for the architecture,
development and deployment of the industry-leading flow control
platform. Prior to netVmg, he was founder, Chief Technology Officer and
Executive Vice President of Products and Operations at First Virtual
Corporation (FVC), a multi-service networking company that had a
successful IPO in 1998. Prior to FVC, he was Director of the Network
Management Business Unit at Ungermann-Bass, the first independent local
area network company. Mr. Sequeira has previously served as a Director
on the boards of FVC and netVmg.

Mr. Sequeira started his career as a software developer at HP in the
Information Networks Division, working on the development of TCP/IP
protocols. During the early 1980's, he worked on the CSNET project, an
early realization of the Internet concept. Mr. Sequeira is a recognized
expert in data networking, with twenty-five years of experience in the
industry, and has been a featured speaker at industry leading forums
like Networld+Interop, Next Generation Networks, ISP Con and RSA
Conference.

Mr. Sequeira holds a Bachelor of Technology degree in Computer
Science from the Indian Institute of Technology, Bombay, and a Master
of Science in Computer Science from the University of Wisconsin,
Madison.

Allwyn, despite all this good schoolin', forgot to send me a picture, so he gets what he deserves ;) (Ed: Yes, those of you quick enough were smart enough to detect that the previous picture was of Brad Pitt and not Allwyn. I apologize for the unnecessary froth-factor.)

Questions:

1) Blue Lane has two distinct product lines, VirtualShield and PatchPoint. The former is a software-based solution that protects VMware Infrastructure 3 virtual servers as an ESX VM plug-in, while the latter offers a network appliance-based solution for physical servers. How are these products different from either virtual switch IPSes like Virtual Iron or in-line network-based IPSes?

IPS technologies have been charged with the incredible mission of trying to protect everything from anything. Overall they've done well, considering how much the perimeter of the network has changed and how sophisticated hackers have become. Much of their core technology, however, was relevant and useful when hackers could be easily identified by their signatures. As many have proclaimed, those days are coming to an end.

A defense department official recently quipped, "If you offer the same protection for your toothbrushes and your diamonds you are bound to lose fewer toothbrushes and more diamonds." We think that data center security similarly demands specialized solutions. The concept of an enterprise network has become so ambiguous when it comes to endpoints, devices, supply-chain partners and the like that we think it's time to think more realistically in terms of trusted, yet highly available, zones within the data center.

It seems clear at this point that different parts of the network need very different security capabilities. Servers, for example, need highly accurate solutions that do not block or impede good traffic and can correct bad traffic, especially when it comes to closing network-facing vulnerability windows. They need to maintain availability with minimal latency, for starters; and that has been a sort of Achilles' heel for signature-based approaches. Of course, signatures also bring considerable management burdens over and above their security capabilities.

No one is advocating turning off the IPS, but rather approaching servers with more specialized capabilities. We started focusing on servers years ago and established very sophisticated application and protocol intelligence, which has allowed us to correct traffic inline without the noise, suspense and delay that general purpose network security appliance users have come to expect.

IPS solutions depend on deep packet inspection, typically at the perimeter, based on regexp pattern matching for exploits. Emerging challenges with this approach have made alert and block modes absolutely necessary, as most IPS solutions aren't accurate enough to be trusted in full-library block mode.

Blue Lane uses a vastly different approach. We call it deep flow inspection/correction for known server vulnerabilities, based on stateful decoding up to layer 7. We can alert, block and correct, but most of our deployments are in correct mode, with our full capabilities enabled. From an operational standpoint we have substantially different impacts.
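The contrast between exploit-signature blocking and vulnerability-centric correction can be sketched in a few lines. This is a deliberately toy illustration, not Blue Lane's actual technology: the NOP-sled signature, the field name, and the length bound are all hypothetical, chosen only to show why correcting a decoded protocol field closes the vulnerability window even when the exploit bytes change.

```python
import re

# Hypothetical exploit signature: an exact byte pattern (a naive NOP sled).
EXPLOIT_SIGNATURE = re.compile(rb"\x90{16}")

def ips_signature_check(packet: bytes) -> str:
    """Alert/block on an exact byte pattern; any variant slips through."""
    return "block" if EXPLOIT_SIGNATURE.search(packet) else "allow"

# Hypothetical safe bound for a vulnerable, decoded protocol field.
MAX_FIELD_LEN = 64

def vulnerability_correct(field: bytes) -> bytes:
    """Correct the decoded field to a safe form inline (truncate instead
    of blocking), so the overflow is impossible regardless of encoding."""
    return field[:MAX_FIELD_LEN]

# A trivially mutated exploit (no literal NOP sled) evades the signature...
mutated = b"\x91" * 200
assert ips_signature_check(mutated) == "allow"
# ...but the corrected flow can no longer overflow the vulnerable field.
assert len(vulnerability_correct(mutated)) <= MAX_FIELD_LEN
```

The point of the sketch: the signature check keys on how one exploit looks, while the correction keys on what the vulnerability permits, which is why the latter is indifferent to exploit variants.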

A typical IPS may have 10K signatures while experts recommend turning on just a few hundred. That kind of marketing shell game (find out what really works) means that there will be plenty of false alarms, false positives and negatives and plenty of tuning. With polymorphic attacks signature libraries can increase exponentially while not delivering meaningful improvements in protection.

Blue Lane supports about 1000 inline security patches across dozens of very specific server vulnerabilities, applications and operating systems. We generate very few false alarms and minimal latency. We don't require ANY tuning. Our customers run our solution in automated, correct mode.

The traditional static signature IPS category has evolved into an ASIC war between some very capable players for the reasons we just discussed. Exploding variations of exploits and vectors mean that exploit-centric approaches will require more processing power.

Virtualization is pulling the data center into an entirely different direction, driven by commodity processors. So of course our VirtualShield solution was a much cleaner setup with a hypervisor; we can plug into the hypervisor layer and run on top of existing hardware, again with minimal latency and footprint.

You don't have to be a Metasploit genius to evade IPS signatures. Our stateful layer 7 decoding is much more resilient.
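A minimal sketch of why byte-literal signatures are so easy to evade, and why decoding the traffic first helps. The path-traversal string and the percent-encoding trick here are generic textbook examples, not a description of any vendor's rule set: a signature matching raw bytes misses a trivially encoded variant, while normalizing the request at layer 7 before inspecting restores the match.

```python
from urllib.parse import unquote

# Hypothetical byte-literal signature for a path-traversal attempt.
SIGNATURE = "/etc/passwd"

def naive_match(request: str) -> bool:
    """Match the raw request bytes, as a simple signature engine would."""
    return SIGNATURE in request

def decoded_match(request: str) -> bool:
    """Decode (normalize) the request first, then inspect: a crude stand-in
    for stateful layer-7 decoding."""
    return SIGNATURE in unquote(request)

# Percent-encoding the slashes evades the raw-byte signature...
evasive = "GET /cgi-bin/view?file=%2fetc%2fpasswd"
assert not naive_match(evasive)
# ...but decode-then-inspect still catches it.
assert decoded_match(evasive)
```

The same idea generalizes: every encoding, fragmentation, or case trick multiplies the signatures an exploit-centric engine needs, while a decoder pays the normalization cost once.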

2) With zero-days on the rise, pay-for-play vulnerability research and now Zero-Bay (WabiSabiLabi) vulnerability auctions and the like, do you see an uptake in customer demand for vulnerability shielding solutions?

Exploit-signature technologies are meaningless in the face of evanescent, polymorphic threats, resulting in 0-day exploits. Slight modifications to signatures can bypass IPSes, even against known vulnerabilities. Blue Lane technology provides 0-day protection for any variant of an exploit against known vulnerabilities. No technology can provide ultimate protection against 0-day exploits based on 0-day vulnerabilities. However, this requires a different class of hacker.

3) As large companies start to put their virtualization strategies in play, how do you see customers addressing securing their virtualized infrastructure? Do they try to adapt existing layered security methodologies and where do these fall down in a virtualized world?

I explored this topic in depth at the Next Generation Data Center conference last week. Also, your readers might be interested in listening to a recent podcast: The Myths and Realities of Virtualization Security: An Interview.

To summarize, there are a few things that change with virtualization that folks need to be aware of. It represents a new architecture. The hypervisor layer brings the un-tethering and clustering of VMs, and centralized control. It introduces a new virtual network layer. And there are entirely new server states not anticipated by traditional static security approaches (like instant create, destroy, clone, suspend, snapshot and revert to snapshot).

Then you'll see unprecedented levels of mobility, new virtual appliances, and the black-boxing of complex stacks, including embedded databases. Organizations will have to work out who is responsible for securing this very fluid environment. We'll also see unprecedented scalability, with Infiniband cores attaching LAN/SAN out to hundreds of ESX hypervisors and thousands of VMs.

Organizations will need the capability to shield these complex, fluid environments, because trying to keep track of individual VMs, states, patch levels and locations will make tuning an IPS for polymorphic attacks look like child's play in comparison. Effective solutions will need to be highly accurate and low latency, deployed in correct mode. Gone will be the days of man-to-man blocking and tuning. Here to stay are the days of zone defense.

4) VMware just purchased Determina and intends to integrate their memory firewall IPS product as an ESX VM plug-in. Given your early partnership with VMware, are you surprised by this move? Doesn't this directly compete with the VirtualShield offering?

I wouldn't read too much into this. Determina hit the wall on sales, primarily because its original memory-wall technology was too intrusive and fell short of handling new vulnerabilities/exploits.

This necessitated the LiveShield product, which required ongoing updates, destroying the value proposition of not having to touch servers once installed. So this is a technology/people acquisition, not a product line/customer-base acquisition.

VMware was smart to get a very bright set of folks, with deep memory/paging/OS, and a core technology that would do well to be integrated into the hypervisor for the purpose of hypervisor hardening, and interVM isolation. I don't see VMware entering the security content business soon (A/V, vulnerabilities, etc.). I see Blue Lane's VirtualShield technology integrated into the virtual networking layer (vSwitch), as a perfect complement to anything that will come out of the Determina acquisition.

5) Citrix just acquired XenSource. Do you have plans to offer VirtualShield for Xen?

A smart move on Citrix's part to get back into the game. Temporary market caps don't matter. Virtualization matters. If Citrix can make this a two or three horse race, it will keep the VMware, Citrix, Microsoft triumvirate on their toes, delivering better products, and net good for the customer.

Regarding Blue Lane and Citrix/XenSource, we will continue to pay attention to what customers are buying as they virtualize their data centers. For now, this is a one-horse show :-)

Good, bad or indifferent, one would be blind not to recognize that these services are changing the landscape of vulnerability research and pushing the limits which define "responsible disclosure."

It was only a matter of time until we saw the mainstream commercial emergence of the open vulnerability auction, which is just another play on the already contentious marketing efforts blurring the lines between responsible disclosure for purely "altruistic" reasons versus commercial gain.

This auction marketplace for vulnerabilities is marketed as a Swiss "...Laboratory & Marketplace Platform for Information Technology Security" which "...helps customers defend their databases, IT infrastructure, network, computers, applications, Internet offerings and access."

Despite a name which sounds like Mushmouth from Fat Albert created it (it's Japanese in origin, according to the website) I am intrigued by this concept and whether or not it will take off.

I am, however, a little unclear on how customers are able to purchase a vulnerability and then become more secure in defending their assets.

A vulnerability without an exploit, some might suggest, is not a vulnerability at all -- or at least it poses little temporal risk. This is a fundamental debate over the definition of a Zero-Day vulnerability.

Further, a vulnerability that has a corresponding exploit but no countermeasure (patch, signature, etc.) is potentially just as useless to customers who have no way of protecting themselves.

If you can't manufacture a countermeasure, even if you hoard the vulnerability and/or exploit, how is that protection? I suggest it's just delaying the inevitable.

I wonder how long it will be until we see the corresponding auctioning off of the exploit and/or countermeasure. Perhaps by the same party that purchased the vulnerability in the first place?

Today, in the closed-loop subscription services offered by vendors who buy vulnerabilities, the subscribing customer gets the benefit of protection against a threat they may not even know they have. But for those who can't or won't pony up the money for this sort of subscription (which is usually tied to owning a corresponding piece of hardware to enforce it), there exists a window between when the vulnerability is published and when this knowledge is made available universally.

Depending upon this delta, these services may be doing more harm than good to the greater populace.

In fact, Dave G. over at Matasano argues quite rightly that by publishing even the basic details of a vulnerability that "researchers" will be able to more efficiently locate the chunks of code wherein the vulnerability exists and release this information publicly -- code that was previously not known to even have a vulnerability.

Each of these example vulnerability service offerings describes how the vulnerabilities are kept away from the "bad guys" by qualifying buyers' intentions based upon their ability to pay for access to the malicious code (we all know that criminals are poor, right?). Here's what the Malware Distribution Project describes as the gatekeeper function:

Why Pay?

Easy; it keeps most, if not all of the malicious intent, outside the
gates. While we understand that it may be frustrating to some people
with the right intentions not allowed access to MD:Pro, you have to
remember that there are a lot of people out there who want to get
access to malware for malicious purposes. You can't be responsible on
one hand, and give open access to everybody on the other, knowing that
there will be people with expressly malicious intentions in that group.

ZDI suggests that by not reselling the vulnerabilities but rather protecting their customers and ultimately releasing the code to other vendors, they are giving back:

The Zero Day Initiative (ZDI) is unique in how the acquired
vulnerability information is used. 3Com does not re-sell the
vulnerability details or any exploit code. Instead, upon notifying the
affected product vendor, 3Com provides its customers with zero day
protection through its intrusion prevention technology. Furthermore,
with the altruistic aim of helping to secure a broader user base, 3Com
later provides this vulnerability information confidentially to
security vendors (including competitors) who have a vulnerability
protection or mitigation product.

As if you haven't caught on yet, it's all about the Benjamins.

We've seen the arguments ensue regarding third party patching. I think that this segment will heat up because in many cases it's going to be the fastest route to protecting oneself from these rapidly emerging vulnerabilities you didn't know you had.