February 26, 2009

I've got a problem with the escalation of VMware's marketing abuse of the terms "open," "interoperable," and "standards." I'm a fan of VMware, but this is getting silly.

When a vendor like VMware crafts an architecture, creates a technology platform, defines an API, and signs providers up to offer it as a service -- doing so with the full knowledge that it REQUIRES their platform to really function -- and THEN calls it "open" and "interoperable" because an API exists, that is intellectually dishonest. Calling it a "standard" to imply it is available regardless of platform is about as transparent as saran wrap.

We are talking about philosophically and diametrically-opposed strategies between virtualization platform players here, not minor deltas along the bumpy roadmap highway. What's at stake is fundamentally the success or failure of these companies. Trying to convince the world that VMware, Microsoft, Citrix, etc. are going to huddle for a group hug is, well, insulting.

This recent article in the Register espousing VMware's strategy really highlighted some of these issues as it progressed. Here's the first bit which I agree with:

There is, they fervently say, no other enterprise server and data centre virtualisation play in town. Businesses wanting to virtualise their servers inside a virtualising data centre infrastructure have to dance according to VMware's tune. Microsoft's Hyper-V music isn't ready, they say, and open source virtualisation is lagging and doesn't have enterprise credibility.

Setting aside the hyperbole, I'd agree with most of that. We could easily start a religious debate here, but let's not for now. It gets smelly where the article starts talking about vCloud which, given VMware's protectionist stance based on safe-harbor tactics, amounts to nothing more (still) than a vision. None of the providers will talk about it because they are under NDA. We don't really know what vCloud means yet:

Singing the vcloud API standard song is very astute. It reassures all people already on board and climbing on board the VMware bandwagon that VMware is open and not looking to lock them in. Even if Microsoft doesn't join in this standardisation effort with a whole heart, it doesn't matter so long as VMware gets enough critical mass.

How do you describe having to use VMware's platform and API as VMware "...not looking to lock them in?" Of course they are!

To fully leverage the power of the InterCloud in this model, it really amounts to either an ALL VMware solution or settling for basic connectors for coarse-grained networked capability.

Unless you have feature-parity or true standardization at the hypervisor and management layers, it's really about interconnectivity not interoperability. Let's be honest about this.

By having external cloud suppliers and internal cloud users believe that cloud federation through VMware's vCloud infrastructure is realistic then the two types of cloud user will bolster and reassure each other. They want it to happen and, if it does, then Hyper-V is locked out unless it plays by the VMware-driven and VMware partner-supported cloud standardisation rules, in which case Microsoft's cloud customers are open to competitive attack. It's unlikely to happen.

"Federation" in this context really only applies to lessening/evaporating the difference between public and private clouds, not clouds running on different platforms. That's, um, "lock-in."

Standards are great, especially when they're yours. Now we're starting to play games. VMware should basically just kick their competitors in the nuts and say this to us all:

"If you standardize on VMware, you get to leverage the knowledge, skills, and investment you've already made -- regardless of whether you're talking public vs. private. We will make our platforms, API's and capabilities as available as possible. If the other vendors want to play, great. If not, your choice as a customer will determine if that was a good decision for them or not."

Instead of dancing around trying to muscle Microsoft into playing nice (which they won't) or insulting our intelligence by handwaving that you're really interested in free love versus world domination, why don't you just call a spade a virtualized spade.

And by the way, if it weren't for Microsoft, we wouldn't have this virtualization landscape to begin with...not because of the technology contributions to virtualization, but rather because the inefficiencies of single app/OS/hardware affinity using Microsoft OS's DROVE the entire virtualization market in the first place!

Microsoft is no joke. They will maneuver to outpace VMware. Hyper-V and Azure will be a significant threat to VMware in the long term, and this old Microsoft joke will come back to haunt VMware's abuse of the words above:

Q: How many Microsoft engineers does it take to change a lightbulb?

A: None, they just declare darkness a standard.

My Kindle 2 showed up yesterday. I un-boxed it, turned it on, and within 3 minutes had downloaded my first book and was reading away (Thomas Barnett's "Great Powers," if you must know).

So this morning after I checked my email on my other indispensable tool/toy, my iPhone, I realized something was missing from the Kindle: a password.

So you might think, "Hoff, why would you need a password for a device that lets you read books?"

Well, while it's true that the majority of users will simply read "off-the-shelf" books/blogs/magazines they download from Amazon.com's storefront on their Kindles, there are a couple of other interesting scenarios that ran through my mind:

To purchase a book using the Kindle, the device is linked to Amazon's One-Click purchase capability. This means that once I choose to purchase a book, I simply click "Buy" and it's delivered to the device, automagically charging my credit card. If I lost my device, someone who found it could literally download hundreds of books to the Kindle on my nickel until I am able to do something about it. This would be short-lived, but really annoying.

It is possible using an Amazon web service to convert documents into the Kindle Format and download them over WhisperNet to your device. Given how convenient this is for reading, imagine what would happen if some crafty person decided to convert and download a sensitive document to the Kindle and then lose the device. Imagine if that document contained PII or other confidential/sensitive information? I wager we'll see a breach notification being issued based on someone losing a Kindle.
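To be clear, neither scenario reflects anything Amazon or the Kindle actually does; purely as an illustration of the second one, here's a minimal sketch of the kind of pre-flight PII scan a cautious user (or a DLP tool) might run on a document before converting and shipping it to a device. The pattern names and regular expressions are my own simplified assumptions, not an exhaustive detector.

```python
import re

# Illustrative patterns only -- real DLP tooling uses far more robust detection.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def find_pii(text: str) -> dict[str, int]:
    """Return a count of matches per PII category found in the text."""
    hits = {name: len(pat.findall(text)) for name, pat in PII_PATTERNS.items()}
    # Report only the categories that actually matched.
    return {name: n for name, n in hits.items() if n}
```

If `find_pii` comes back non-empty, the sane move is to not convert the document at all, rather than hope the device never leaves your bag.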

Yes, I know it's a piece of "consumer" equipment, but look a little further down the line: college students using it for textbooks and all sorts of other communications, business people using it for reading corporate materials, etc...

I am interested in exploring the following elements in the long term:

An option for password-protected access to the device itself.

A password-controlled, content-rating-based parental control system for certain materials. My kids already grabbed my Kindle and (see #1 above) downloaded 3 kids' books to it. I may not want them to read certain content.

Remote self-destruct

Encryption of content (at rest, in motion)

Security of Whispernet itself

WiFi (and its attendant issues)
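None of the items above existed on the Kindle at the time, so purely as a sketch of the first one: a device-unlock passcode check could store only a salted, stretched hash so the passcode never sits on flash in the clear. Everything here (function names, iteration count) is my own assumption, not anything Amazon ships.

```python
import hashlib
import hmac
import os

def make_passcode_record(passcode: str) -> tuple[bytes, bytes]:
    # Derive a salted hash so the passcode itself is never stored.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", passcode.encode(), salt, 100_000)
    return salt, digest

def verify_passcode(attempt: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", attempt.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

The same salted-hash record would also work for item #2: a separate parental passcode gating purchases or rated content.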

I'm sure as I dwell on this, there will be other issues that crop up, but the security wonk in me was in full gear this morning.

You have any other security shortcomings or concerns you've thought of re: the Kindle?

February 25, 2009

The World Privacy Forum released their "Cloud Privacy Report" written by Robert Gellman two days ago. It's an interesting read that describes the many facets of data privacy concerns in Cloud environments:

This report discusses the issue of cloud computing and outlines its implications for the privacy of personal information as well as its implications for the confidentiality of business and governmental information. The report finds that for some information and for some business users, sharing may be illegal, may be limited in some ways, or may affect the status or protections of the information shared. The report discusses how even when no laws or obligations block the ability of a user to disclose information to a cloud provider, disclosure may still not be free of consequences. The report finds that information stored by a business or an individual with a third party may have fewer or weaker privacy or other protections than information in the possession of the creator of the information. The report, in its analysis and discussion of relevant laws, finds that both government agencies and private litigants may be able to obtain information from a third party more easily than from the creator of the information. A cloud provider’s terms of service, privacy policy, and location may significantly affect a user’s privacy and confidentiality interests.

I plan to spend some time reading through the report in more depth, but I enjoyed my cursory review thus far, especially some of the coverage related to issues such as FCRA, bankruptcy, Cloud provider ownership, disclosure, etc. Many of these issues are near and dear to my heart.

February 24, 2009

I've written about the really confusing notional definitions that seem to be hung up on where the computing actually happens when you say "Cloud": in your datacenter or someone else's. It's frustrating to see how people mush together "public, private, internal, external, on-premise, off-premise" to all mean the same thing.

They don't, or at least they shouldn't, at least not within the true context of Cloud Computing.

In the long run, despite all the attempts to clarify what we mean by defining "Cloud Computing" more specifically as it relates to compute location, we're going to continue to call it "Cloud." It's a sad admission I'm trying to come to grips with. So I'll jump on this bandwagon and take another approach.

Cloud Computing will simply become ubiquitous in its many forms, and we are all going to end up with a hybrid model of Cloud adoption -- a veritable mash-up of Cloud services spanning the entire gamut of offerings. We already do today.

Here are a few non-exhaustive examples of what a reasonably-sized enterprise can expect from the move to a hybrid Cloud environment:

If you're using one or more SaaS vendors who own the entire stack, you'll be using their publicly-exposed Cloud offerings. They manage the whole kit-and-kaboodle, information and all.

SaaS and PaaS vendors will provide ways of integrating their offerings (some do today) with your "private" enterprise data stores and directory services for better integration and business intelligence.

We'll see hosting/colocation providers simply evolve to add dynamic scalability and utility billing and really push the Cloud mantra.

IaaS vendors will provide (ala GoGrid) ways of consolidating and reducing infrastructure footprints in your enterprise datacenters by way of securely interconnecting your private enterprise infrastructure with managed infrastructure in their datacenters. This model simply calls for the offloading of the heavy tin. Management options abound: you manage it, they manage it, you both do...

Other IaaS players will continue to offer a compelling suite of soup-to-nuts services (ala Amazon) that, depending upon your needs and requirements, mean you have very little (or no) infrastructure to speak of. You may or may not be constrained in what you can or need to do as you trade off flexibility for conformity here.

Virtualization platform providers will no longer make a distinction in terms of roadmap and product positioning between internal/external or public/private. What is enterprise virtualization today simply becomes "Cloud." The same services, split along virtualization platform party lines, will become available regardless of location.

This means that vendors who today offer proprietary images and infrastructure will start to drive or be driven to integrate more open standards across their offerings in order to allow for portability, interoperability and inter-Cloud scalability...and to make sure you remain a customer.

Even though the Cloud is supposed to abstract infrastructure from your concern as a customer, brand-associated moving parts will count; customers will look for vetted, pure-play integration between the big players (networking, virtualization, storage) in order to move information and applications into and out of Cloud offerings seamlessly.

The notion of storage is going to be turned on its head; the commodity of bit buckets isn't what storage means in the Cloud. All the chewy goodness will start to bubble to the surface as value-adds come to light: DeDup, backup, metadata, search, convergence with networking, security...

More client-side computing will move to the cloud (remember, it doesn't matter whether it's internal or external) with thin-client connectivity, while powerful smaller-footprint mobile platforms (smartphones/netbooks) with native virtualization layers will also see accelerated uptake.

Ultimately, what powers your Cloud providers WILL matter. The virtualization, networking, application delivery, security and storage platforms that companies adopt internally as they consolidate and then automate will weigh heavily when those same companies evaluate what powers their Cloud providers' infrastructure.

If a customer can take all the technology expertise and the organizational and operational practices they have honed while virtualizing their internal infrastructure (virtualization platform, compute, storage, networking, security) and seamlessly apply them as a next step in the move to the Cloud(s), it's a win.

The two biggest elements of a successful cloud: integration and management. Just like always.

I can't wait.

/Hoff

*Yes, we're concerned that if "stuff" is outside of our direct control, we'll not be able to "secure" it, but that isn't exactly a new concept, nor is it specific to Cloud -- it's just the latest horse we're beating because we haven't made many gains in securing the things that matter most in the ways most effective for doing so.

What would make you trust "the Cloud"? Scrap that... stupid question...

What would make you trust SaaS providers?

To which I responded:

Generally, my CEO or CFO. :(

I don't "trust" third party vendors with my data. I never will. I simply exercise the maximal amount of due diligence that I am afforded given prevailing time, money, resources and transparency and assess risk from there.

Even if the data is not critical/sensitive, I don't "trust" that it's not going to be mishandled. Not in today's world. (Ed: How I deal with that mishandling is the secret sauce...)

I then got thinking about the line that Ronald Reagan is often credited with wherein he described managing relations with the former Soviet Union:

Trust but verify.

Security professionals use that phrase a lot. They shouldn't. It's oxymoronic.

The very definition of "trust" is:

trust |trəst| noun 1 firm belief in the reliability, truth, ability, or strength of someone or something: relations have to be built on trust | they have been able to win the trust of the others. • acceptance of the truth of a statement without evidence or investigation: I used only primary sources, taking nothing on trust. • the state of being responsible for someone or something: a man in a position of trust. • poetic/literary a person or duty for which one has responsibility: rulership is a trust from God. • poetic/literary a hope or expectation: all the great trusts of womanhood.

See the second bullet above "....without evidence or investigation"? I don't "trust" people over which I have no effective control. With third parties handling your data, you have no effective "control." You have the capability to audit, assess and recover, but control? Nope.

Does that mean I think you should not put your information into the hands of a third party? Of course not. It's inevitable. You already have. However, admitting defeat and working from there may make Jack a dull boy, but it also means he's not unprepared when the bad stuff happens. And it will.

Here's the problem with these generalizations, even when some of the issues these people describe are actually reasonably good points:

Almost all of these references to "better security through Cloudistry" are drawn against examples of Software as a Service (SaaS) offerings. SaaS is not THE Cloud to the exclusion of everything else. Keep defining SaaS as THE Cloud and you're being intellectually dishonest (and ignorant.)

But since people continue to attest to SaaS==Cloud, let me point out something relevant.

There are two classes of SaaS vendors: those that own the entire stack, including the platform and underlying infrastructure, and those that don't.

Those that have control/ownership over the entire stack naturally have the opportunity for much tighter control over the "security" of their offerings. Why? Because they run their business, and the datacenters and applications housed in them, with the same level of diligence that an enterprise would.

They have context. They have visibility. They have control. They have ownership of the entire stack.

The HUGE difference is that in many cases, they only have to deal with supporting a limited number of applications. This reflects positively on those who say Cloud SaaS providers are "more secure," mostly because those providers have less to secure.

Meanwhile, those SaaS providers that simply run their appstack atop someone else's platform and infrastructure are, in turn, at the mercy of their providers. The information and applications are abstracted from the underlying platforms and infrastructure to the point that there is no unified telemetry or context between the two. Further, add in the multi-tenancy issue and we're now talking about trust boundaries that get very fuzzy and hard to define: who is responsible for securing what?

Just. Like. An. Enterprise. :(

Check out the Cloud model below which shows the demarcation between the various layers of the SPI model of which SaaS is but ONE:

The further up the offering stack you go, the more control you have over your information and the security thereof. Oh, and just one other thing. The notion that Cloud offerings diminish attack surfaces is in many cases a good thing for sophisticated attackers as much as it may act as a deterrent. Why? Because now they have a more clearly defined set of attack surfaces -- usually at the application layer -- that makes their job easier.

Next time one of these word monkeys makes a case for how much more secure The Cloud is and references a SaaS vendor like SalesForce.com (a single application) in comparison to an enterprise running (and securing) hundreds of applications, remind them about this and this, both Cloud providers. I wrote about this last year in an article humorously titled "Cloud Providers Are Better At Securing Your Data Than You Are."

Like I said on Twitter this morning "I *love* the Cloud. I just don't trust it. Sort of like why I don't give my wife the keys to my motorcycles."

February 19, 2009

This InformationWeek article took artistic license to lofty new levels in a single sentence as it described the demise of Cloud Computing PaaS vendor Coghead and the subsequent IP/Engineering purchase by SAP:

Bad news for cloud computing: Coghead -- a venture-backed, online application development platform -- is closing, leaving customers with a problem to solve.

It's indeed potentially bad news for Coghead's customers who as early adopters took a risk by choosing to invest in a platform startup in an emerging technology sector. It's hardly indicative of an established trend that somehow predicts "bad news for Cloud Computing" as a whole.

It's a friendly reminder that "whens you rolls da dice, you takes your chances." Prudent and pragmatic risk assessment and relevant business decisions still have to be made when you decide to place your bets on a startup. Just because you move to the Cloud doesn't mean you stop employing pragmatic common sense. I hope these customers have a Plan B.

This is the problem again with lumping all of the *aaS'es into a bucket called Cloud; are we to assume Amazon's AWS (IaaS) and SalesForce.com (SaaS) are going to shutter next week? No, of course not. Will there be others who close their doors and firesale? Most assuredly yes, just like there are in most tech markets.

Here's what Coghead's CEO (in the same article, mind you) explained as the reason for the closure:

Though McNamara said business was continuing to grow rapidly, the recession ultimately did Coghead in, and Coghead began looking for buyers a few months ago. "Faced with the most difficult economy in memory and a challenging fundraising climate, we determined that the SAP deal was the best way forward for the company," McNamara wrote in a letter to customers that went out late Thursday.

That's correct kids, even the almighty Cloud, the second coming of computing, is not immune to the pressures of running a business in a tough economy, especially the platform business...

First it was hype around the birth of Cloud and now it's raining epitaphs. I call dibs on Amazon's SAN arrays!

I decided to add my $0.02 because it occurred to me that despite several issues I have with the paper, two things really haven't been appropriately discussed:

The audience for the paper

Expectations of the reader

The goals of the paper were fairly well spelled out and within context of what was written, the authors achieved many of them.

Given that it was described as a "view" of Cloud Computing and not the definitive work on the subject, I think perhaps the baby has been unfairly thrown out with the bath water even when balanced with the "danger" that the general public or press may treat it as gospel.

I think the reason there has been so much frothy reaction to this paper from the "Cloud community" is that, because the paper comes from the Electrical Engineering/Computer Science department of UC Berkeley, many readers expect a certain level of technical depth and a more holistic (dare I say empirical) model for analysis, and their expectations are therefore set a certain way.

Most of the reviews that might be perceived as negative are coming from folks who are reasonably technical, of which I am one.

To that point and that of item #1 above, I don't feel that "we" are the intended audience for this paper and thus, to point #2 above, our expectations -- despite the goals of the paper -- were not met.

That being said, I do have issues with the paper: the authors' definition of cloud computing is unnecessarily obtuse, their refusal to discuss the differences between the de facto SPI model and its variants is annoying and short-sighted, and their dismissal of private clouds as irrelevant is quite disturbing. The notion that Cloud Computing must be "external" to an enterprise and use the Internet as a transport is simply delusional.

Eschewing de facto models of reference because the authors could not agree amongst themselves on the differences between them -- despite consensus in industry outside of academia and even models like the one I've been working on -- comes across as myopic and insulated.

Ultimately I think the biggest miss of the paper was the fact that they did not successfully answer "What is Cloud Computing and how is it different from previous paradigm shifts such as Software as a Service (SaaS)?" In fact, I came away from the paper with the feeling that Cloud Computing is SaaS...

However, I found the coverage of the business drivers, economic issues and the top 10 obstacles to be very good and that people unfamiliar with Cloud Computing would come away with a better understanding -- not necessarily complete -- of the topic.

It was an interesting read that is complementary to much of the other work going on right now in the field. I think we should treat it as such and move on.

February 18, 2009

I was referenced in a CSO article recently titled "Four Questions On Google App Security." I wasn't interviewed for the story directly; Bill Brenner simply referenced our prior interviews and my skepticism about virtualization security and Cloud security as a discussion point.

Google's response was interesting and a little tricky given how they immediately set about driving a wedge between virtualization and Cloud. I think I understand why, but if the article featured someone like Amazon, I'm not convinced it would go the same way...

As I understand it, Google doesn't really leverage much in the way of virtualization (from the classical compute/hypervisor perspective) for their "cloud" offerings as compared to Amazon. That may be due in large part to the differences in models and classification -- Amazon AWS is an IaaS play while GoogleApps is a SaaS offering.

You can see why I made the abstraction layer in the cloud taxonomy/ontology model "optional."

This post dovetails nicely with Lori MacVittie's article today titled "Dynamic Infrastructure: The Cloud Within the Cloud" wherein she highlights how the obfuscation of infrastructure isn't always a good thing. Given my role, what's in that cloudy bubble *does* matter.

So here's my incomplete thought -- a question, really:

How many of you assume that virtualization is an integral part of cloud computing? From your perspective do you assume one includes the other? Should you care?

February 17, 2009

I'm heading out in a few minutes for an all day talk, but I choked on my oatmeal when I read this:

In a CBR article titled "We Can Guarantee Cloud Security," Kristof Kloeckner, IBM's Cloud Computing CTO, was quoted at IBM's Pulse 2009 conference as he tried to "...ease worries over security in the cloud":

Despite all the hype surrounding cloud computing, the issue of security is one debate that will not go away. It is regularly flagged as one of the potential stumbling blocks to widespread cloud adoption.

He said: “We’ve developed some interesting technologies that allow the separation of applications and data on the same infrastructure. We guarantee the security through Tivoli Security and Identity Management and Authentication software, and we also ensure the separation of workloads through the separation of the virtual machines and also the separation of client data in a shared database.”

Speaking to CBR after the press conference, Kloeckner went into more detail about IBM’s cloud security offering.

“Security is not essentially any different from securing any kind of open environment; you have to ensure that you know who accesses it and control their rights. We have security software that allows you to manage identities from an organisational model, from whoever is entitled to use a particular service. We can actually ensure that best practices are followed,” Kloeckner said.

Kloeckner added that most people do not realise just how vulnerable they really are. He said: “Most people, unless forced by regulations, usually treat security as a necessary evil. They say it’s very high on their list, but if you really scratch the surface, it’s not obvious to me that best practices are followed.”

I wonder if this guarantee is backed up with anything else short of a "sorry" if something bad happens?

This will make for some very interesting discussion when I return today.

Neil sets the stage by suggesting that "established" security vendors who offer solutions for non-virtualized environments simply "...don't get it" when it comes to realizing the shortcomings of their existing solutions in virtualized contexts and that they are "fighting" the encroachment of virtualization on their appliance sales:

Many are clinging to business models based on their overpriced hardware-based solutions and not offering virtualized versions of their solutions. They are afraid of the inevitable disruption (and potential cannibalization) that virtualization will create. However, you and I have real virtualization security needs today and smaller innovative startups have rushed in to fill the gap. And, yes, there are pricing discontinuities. A firewall appliance that costs $25,000 in a physical form can cost $2500 or less in a virtual form from startups like Altor Networks or Reflex Systems.

I'm very interested in which "established" vendors are supposedly clinging to their overpriced hardware-based solutions and avoiding virtualization besides niche players in niche markets that are hardware-bound.

As far as I can tell, the top five vendors by revenue in the security space (those that sell hardware, not just software) are all actively engaged both in supporting these environments within the limitations of today's virtualization platforms and in investing in the development of new solutions that work properly in virtual environments, given the unique requirements thereof.

Neil is really comparing apples to muffler brackets. He points out in his blog that physical appliances can offer multi-gigabit performance whereas software-based VA's cannot, and yet we're surprised that pricing differentials in orders of magnitude exist? You get what you pay for.

As I pointed out in my Four Horsemen presentation (and is alluded to in the remainder of Neil's post below) EVERY SINGLE VENDOR is currently hamstrung by the same level of integration and architectural limitations involved with the current state of virtual appliance performance in the security space, including those he mentions such as Altor and Reflex. They are all in a holding pattern. I've written about that numerous times.

In fact, as I mentioned in my post titled "Visualization Through Virtualization," the majority of these new-fangled, virtualization-specific "security" tools are actually (now) more focused on visibility, management and change monitoring/control than they are on pure network-level security, because they cannot compete from a performance and scalability perspective with hardware-based solutions.

Here's where I do agree with Neil, based upon what I mention above:

Feature-wise, the security protection services delivered are similar. But, there is a key difference — throughput. What the legacy security vendors forget is that there is still a role for dedicated hardware. There is no way you are going to get full multi-gigabit line speed deep-packet inspection and protocol decode for intrusion prevention from a virtual appliance. A next-generation data center will need both physical and virtualized security controls — ideally, from a vendor that can provide both. I’ll argue that the move to virtualize security controls will grow the overall use of security controls.

So this actually explains the disparity in both approach and pricing that he alluded to above. How does this represent vendors "fighting" virtualization? I see it as hanging on for as long as possible to preserve and milk their investment in the physical appliances Neil says we'll still need while they perform the R&D on their virtualized versions. They can't deploy the new solutions until the platform to support them exists!

The move to virtualize security controls reduces barriers to adoption. Rather than sprinkle a few physical appliances here and there based on network topology, we can now place controls when and where they are needed, including physical appliances as appropriate. In fact, the legacy vendors have a distinct advantage over virtualization security startups since you prefer a security solution that spans both your physical and virtual environments with consistent management.

Exactly. So again, how is this "fighting" virtualization?

Here's where we ignore reality again:

Over the past six months, I’ve seen signs of life from the legacy physical security vendors. However, some of the legacy physical security vendors have simply taken the code from their physical appliance and moved it into a virtual machine. This is like wrapping a green-screen terminal application with a web front end — it looks better, but the guts haven’t changed. In a data center where workloads move dynamically between physical servers and between data centers, it makes no sense to link security policy to static attributes such as TCP/IP addresses, MAC addresses or servers.

First of all, what we're really talking about in the enterprise space is VMware, since given its market dominance, this is where the sweet spot is for security vendors. This will change over time, but for now, it's VMware.

That being the case, from the moment VMsafe was announced/hinted at two years ago, 20+ security vendors -- big and small -- have been diligently working within the constructs of what VMware makes available to re-engineer their products to take advantage of the APIs coming in VMware's upcoming release. This is no small feat. Distributed virtual switching and the two-tier driver architecture with DVfilters mean re-engineering both your products and your approach.

Until VMware's next platform is released, every security vendor -- big or small -- is hamstrung into doing exactly what Neil describes: creating a software instantiation of their hardware products that is integration-limited for the reasons I've already stated. What should vendors do? Fire-sale their inventories and wait it out?

I ask again: how is this "fighting" virtualization?

The reason there hasn't been a lot of movement is because the entire industry is in a holding pattern. Pretending otherwise is absolutely ridiculous. The obvious exception is Cisco which has invested in and developed substantial solutions such as the Nexus 1000v and VN-Link (which is again awaiting the availability of VMware's next release.)

Security policy in a virtualized environment must be tied to logical identities - like identities of VM workloads, identities of application flows and identities of users. When VMs move, policies need to move. This requires more than a mere port of an existing solution, it requires a new mindset.

Yep. And most of them are adapting their products as best they can. Many companies will follow the natural path of consolidation and wait to buy a startup in this space and integrate it...much like VMware did with BlueLane, for example. Others will look to underlying enablers such as Cisco's VN-Link/Nexus 1000v and choose to integrate at the virtual networking layer there and/or in coordination with VMsafe.
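To make the "policy follows identity, not IP" idea concrete, here's a minimal sketch. This is purely illustrative -- it is not any vendor's actual API, and the class and UUID names are invented for the example -- but it shows the core shift Neil describes: key policy to a stable logical identity (a VM UUID) instead of mutable attributes like IP or MAC, so a live migration can't strand the policy.

```python
# Illustrative only -- not any vendor's real API. The point: policy keyed
# to a logical identity (VM UUID) survives moves; policy keyed to an IP
# or MAC address does not.

class IdentityPolicyStore:
    def __init__(self):
        self._policies = {}  # vm_uuid -> policy dict

    def attach(self, vm_uuid, policy):
        self._policies[vm_uuid] = policy

    def lookup(self, vm_uuid):
        # Same answer regardless of which host, IP, or MAC the VM
        # currently has -- a vMotion doesn't invalidate the policy.
        return self._policies.get(vm_uuid)

store = IdentityPolicyStore()
store.attach("vm-1234", {"allow": ["tcp/443"], "deny": ["tcp/23"]})

policy_before = store.lookup("vm-1234")
# ...VM migrates: new host, new MAC, possibly a new IP...
policy_after = store.lookup("vm-1234")
assert policy_before == policy_after  # nothing in the key depends on location
```

An IP-keyed firewall rulebase fails exactly this test the moment DHCP hands the migrated VM a new address -- which is the "new mindset" Neil is asking for.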

The legacy vendors need to wake up. If they don’t offer robust virtualization security capabilities (and, yes, potentially cannibalize the sales of some of their hardware), another vendor will. With virtualization projects on the top of the list of IT initiatives for 2009, we can’t continue to limp along without protection. It’s time to vote with our wallets and make support of virtual environments a mandatory part of our security product evaluation and selection.

Absolutely! And every vendor -- big and small -- that I've spoken to is absolutely keen on this concept and is actively engaged in developing solutions for these environments with these unique requirements in mind. Keep in mind that VMsafe is about more than just network visibility via the VMM; it also includes disk, memory and CPU...most network-based appliances have never had this sort of access before (since they are NETWORK appliances) and so OF COURSE products will have to be re-tooled.

Overall, I'm very confused by Neil's post. It seems contradictory, at odds with what I've personally been briefed on by vendors in the space, and it overlooks the huge left turns made over the last 18 months by vendors who have been patiently waiting for VMsafe and the other introspection capabilities of the underlying platforms.

Yes, yes. We've talked about this before here. Cisco is introducing a blade chassis that includes compute capabilities (hereafter referred to as a 'blade server.') It also includes networking, storage and virtualization all wrapped up in a tidy bundle.

So while it may look like a blade server (quack!) and walk like a blade server (quack! quack!), that doesn't mean it's going to be positioned, talked about or sold like a blade server (quack! quack! quack!)

What's my point? What Cisco is building is just another building block of virtualized INFRASTRUCTURE. Necessary infrastructure to ensure control and relevance as their customers' networks morph.

My point is that what Cisco is building is the natural by-product of converged technologies with an approach that deserves attention. It *is* unified computing. It's a solution that includes integrated capabilities that otherwise customers would be responsible for piecing together themselves...and that's one of the biggest problems we have with disruptive innovation today: integration.

While the analysts worry about margin erosion and cannibalizing the ecosystem (which is inevitable as a result of both innovation and consolidation,) this is a great move for Cisco, especially when you recognize that if they didn't do this, the internalization of network and storage layers within the virtualization platforms would otherwise cause them to lose relevance beyond dumb plumbing in virtualized and cloud environments.

Also, let us not forget that one of the beauties of having this "end-to-end" solution from a security perspective is the ability to leverage policy across not only the network, but the compute and storage realms also. You can whine (and I have) about the quality of the security functionality offered by Cisco, but the coverage you're going to get with centralized policy that has affinity across the datacenter (and beyond) is going to be hard to beat.

(There, I said it...OMG, I'm becoming a fanboy!)

And as far as competency as a "server" vendor, c'mon. Firstly, you can't swing a dead cat without hitting a commoditized PC architecture that Joe's Crab Shack could market as a solution -- and besides, that's what ODMs are for. I'm sure we'll see just as much "buy and ally" alongside the build as part of this process.

What's the difference these days between a blade chassis with Intel processors and integrated networking, and a switch? Not much.

So, what Cisco may lose in margin on the "server" sale, they will more than make up in the value people will pay for converged compute, network, storage, virtualization, management, VN-Link, the Nexus 1000v, security and the integrated one-stop shopping you'll get. And if folks want to keep buying their HPs and IBMs, they have that choice, too.

It seems that my incomplete thoughts are more popular with folks than the ones I take the time to think all the way through and conclude, so here's the next one...

Here it is:

There is a lot of effort being spent now on attempts to craft standards and definitions in order to provide interfaces which allow discrete Cloud elements and providers to interoperate. Should we not first focus our efforts on ensuring portability between Clouds of our atomic instances (however you wish to define them) and the metastructure* that enables them?

/Hoff

*Within this context I mean 'metastructure' to cover not only the infrastructure but all the semantic configuration information and dynamic telemetry needed to support it.

February 11, 2009

I've had some fantastic conversations with folks over the last couple of weeks as we collaborated from the perspective of how a network and security professional might map/model/classify various elements of Cloud Computing.

I just spent several hours with folks at ShmooCon (a security conference) winding through the model with my peers getting excellent feedback.

Prior to that, I've had many people say that the collaboration has yielded a much simpler view on what the Cloud means to them and how to align solutions sets they already have and find gaps with those they don't.

My goal was to share my thinking in a way which helps folks with a similar bent get a grasp on what this means to them. I'm happy with the results.

And then....one day at Cloud Camp...

However, it seems I chose an unfortunate way of describing what I was doing in calling it a taxonomy/ontology, despite what I still feel is a clear definition of these words as they apply to the work.

I say unfortunate because I came across a post by Steve Oberlin, Cassat's Chief Scientist, on his "Cloudology" blog titled "Cloud Burst" that resonates with me as the most acerbic, condescending and pompous contribution to nothingness I have read in a long time.

Steve took 9 paragraphs and 7,814 characters to basically say that he doesn't like people using the words taxonomy or ontology to describe efforts to discuss and model Cloud Computing and that we're all idiots and have provided nothing of use.

The most egregiously offensive comment was one of his last points:

I do think some blame (a mild chastisement) is owed to anyone participating in the cloud taxonomy conversation that is not exercising appropriately-high levels of skepticism and insisting on well-defined and valid standards in their frameworks. Taxonomies are thought-shaping tools and bad tools make for bad thinking. One commenter on one of the many blogs echoing/amplifying the taxonomy conversation remarked that some of the diagrams were mere “marketecture” and others warned against special interests warping the framework to suit their own ends. We should all be such critical thinkers.

What exactly in any of my efforts (since I'm not speaking for anyone else) -- collaborating and opening up the discussion for unfettered review and critique -- constitutes anything other than a high level of skepticism? The reason I built the model in the first place was that I didn't feel the others accurately conveyed what was relevant and important from my perspective. I was, gasp!, skeptical.

We definitely don't want to have discussions that might "shape thought." That would be dangerous. Shall we start burning books too?

From the Department of I've Had My Digits Trampled...

So what I extracted from Oberlin's whine is that we are all to be chided because somehow only he possesses the yardstick against which critical thought can be measured? I loved this bit as he reviewed my contribution:

I might find more constructive criticism to offer, but the dearth of description and discussion of what it really means (beyond the blog’s comments, which were apparently truncated by TypePad) make the diagram something of a Rorschach test. Anyone discussing it may be revealing more about themselves than what the concepts suggested by the diagram might actually mean.

Interestingly, over 60 other people have stooped low enough to add their criticism and input without me "directing" their interpretation so as not to be constraining, but again, somehow this is a bad thing.

So after sentencing to death all those poor electrons that go into rendering his rant about how the rest of us are pissing into the wind, what did Oberlin do to actually help clarify Cloud Computing? What wisdom did he impart to set us all straight? How did he contribute to the community effort -- no matter how misdirected we may be -- to make sense of all this madness?

Let me be much more concise than the 7,814 characters Oberlin needed and sum it up in 8:

NOTHING.

So it is with an appropriate level of reciprocity that I thank him for it accordingly.

/Hoff

P.S. Not to be outdone, William Vambenepe has decided to bestow upon Oberlin a level of credibility not due to his credentials or his conclusions, but because (and I quote) "...[he] just love[s] sites that don't feel the need to use decorative pictures. His doesn't have a single image file which means that even if he didn't have superb credentials (which he does) he'd get my respect by default."

Yup, we bottom feeders who have to resort to images really are only in it for the decoration. Nice, jackass.

Update: The reason for the strikethrough above -- and my public apology here -- is that William contacted me and clarified that he was not referring to me and my pretty drawings (my words), although in context it appeared that he was. I apologize, William, and instead of simply deleting it, I am admitting my error, apologizing and hanging it out to dry for all to see. William is not a jackass. As is readily apparent, I am, however. ;)

February 09, 2009

This is the first of my "incomplete thought" entries; thoughts too small for a really meaty blog post, but too big for Twitter. OK wiseguy. I know *most* of my thoughts are incomplete, but don't quash my artistic license, mkay?

Here it is:

How many of the cloud providers (IaaS, PaaS) support IPv6 natively, or support tunneling without breaking things like NAT and firewalls? As part of all this Infrastructure 2.0 chewy goodness, from a networking (and security) perspective, it's pretty important.
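One quick, if crude, way to start answering the "natively" half of that question yourself: check whether a provider's public endpoints even publish IPv6 (AAAA) addresses. A minimal sketch using the Python standard library (the function name and example host are mine, not any provider's):

```python
import socket

def ipv6_endpoints(host, port=443):
    """Return the IPv6 addresses a host advertises, if any.

    A quick spot-check for whether an endpoint is reachable natively
    over IPv6 (i.e., publishes AAAA records) rather than only over
    IPv4 -- a necessary (though not sufficient) condition for native
    IPv6 support.
    """
    try:
        infos = socket.getaddrinfo(host, port, socket.AF_INET6,
                                   socket.SOCK_STREAM)
    except socket.gaierror:
        return []  # no AAAA records resolved -- IPv4 only from here
    return sorted({info[4][0] for info in infos})

# Example with a placeholder hostname -- substitute a real provider
# endpoint to run the check:
# print(ipv6_endpoints("api.example-cloud.com"))
```

Of course an AAAA record tells you nothing about whether the provider's firewalls, load balancers and NAT gear actually pass v6 traffic cleanly end to end -- which is precisely the incomplete thought above.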

February 03, 2009

A word of unsolicited advice to those of us trying to help "sort out" Cloud Computing -- myself included:

The more times we lead off a description of Cloud Computing with "confusing," "over-hyped" and "a buzzword," the more people are going to start to believe us. The press is going to start to believe us. Our customers are going to start to believe us. Pretty soon we won't be able to escape the gravity of our own message.

Granted, we mean well in our cautious and guarded admonishment, but it's starting to wear as thin as those who promote Cloud Computing as the second coming (when we all know full well that's Fibre Channel over Token Ring.)

We don't all have to chant the same mantra and we don't have to preach rainbows and unicorns, but it's important to be accurate and balanced.

I, too, am waiting for the day Cloud Computing will wash my car, bring me a beer and make me a ham sandwich. Until that day, instead of standing around trying to look smart by telling everybody that Cloud Computing is nothing more than hot air, how about making a difference by not playing a game of bad-news telephone and adding something constructive instead?

There's value in Cloud Computing, so how about we move past the "confusing, over-hyped and buzzword" stage and get to work making it straightforward, realistic and meaningful instead.