Archive

Last night we saw coverage by Jo Maitland (not Carl Brooks; sorry, Jo) of an announcement from Rackspace that they were transitioning their IaaS Cloud offerings away from the FOSS Xen platform and moving to the commercially-supported Citrix XenServer instead:

Jaws dropped during the keynote sessions [at Citrix Synergy] when Lew Moorman, chief strategy officer and president of cloud services at Rackspace said his company was moving off Xen and over to XenServer, for better support. Rackspace is the second largest cloud provider after Amazon Web Services. AWS still runs on Xen.

People really shouldn’t be that surprised. What we’re playing witness to is the evolution of the next phase of provider platform selection in Cloud environments.

Many IaaS providers (read: the early-point market leaders) are re-evaluating their choices of primary virtualization platforms, and some are actually adding support for multiple offerings in order to cast the widest net and meet specific requirements of their more evolved and demanding customers. Take Terremark, known for their VMware vCloud-based service, which is reportedly now offering services based on Citrix:

Hosting provider Terremark announced a cloud-based compliance service using Citrix technology. “Now we can provide our cloud computing customers even greater levels of compliance at a lower cost,” said Marvin Wheeler, chief strategy officer at Terremark, in a statement.

Demand for services will drive hypervisor-specific platform choices on the part of providers, with networking and security really driving many of those opportunities. IaaS providers who offer bare-metal boot infrastructure that allows the flexibility of multiple operating environments (read: hypervisors) will start to make inroads. This isn’t a mass-market provider’s game, but it’s also not a niche if you consider the enterprise as a target market.

Specifically, the constraints associated with networking and security (via the hypervisor) limit the very flexibility and agility associated with what IaaS/PaaS clouds are designed to provide. What many developers, security and enterprise architects want is the ability to replicate more flexible enterprise virtualized networking (such as multiple interfaces/IP’s) and security capabilities (such as APIs) in Public Cloud environments.

Support of specific virtualization platforms can enable these capabilities, whether they are open or proprietary (think Open vSwitch versus Cisco Nexus 1000v, for instance.) In fact, Citrix just announced a partnership with McAfee to address integrated security between the ecosystem and native hypervisor capabilities. See Simon Crosby’s announcement here, titled “Taming the Four Horsemen of the Virtualization Security Apocalypse” (it’s got a nice title, too 😉)

To that point, here are some comments I made on Twitter that describe these points at a high level:

We’re told repetitively by Software as a Service (SaaS)* vendors that infrastructure is irrelevant, that CapEx spending is for fools and that Cloud Computing has fundamentally changed the way we will, forever, consume computing resources.

Why is it then that many of the largest SaaS providers on the planet (including firms like Salesforce.com, Twitter, Facebook, etc.) continue to build their software and choose to run it in their own datacenters on their own infrastructure? In fact, many of them are on a tear involving multi-hundred million dollar (read: infrastructure) private datacenter build-outs.

I mean, SaaS is all about the software and service delivery, right? IaaS/PaaS is the perfect vehicle for the delivery of scalable software, right? So why do you continue to try to convince *us* to move our software to you, and yet *you* don’t/won’t/can’t move your software to someone else like AWS?

Hypocricloud: SaaS firms telling us we’re backwards for investing in infrastructure when they don’t eat the dog food they’re dispensing (AKA we’ll build private clouds and operate them, but tell you they’re a bad idea, in order to provide public cloud offerings to you…)

Quid pro quo, agent Starling.

/Hoff

* I originally addressed this to Salesforce.com via Twitter in response to Peter Coffee’s blog here but repurposed the title to apply to SaaS vendors in general.

This is bound to be an unpopular viewpoint. I’ve struggled with how to write it because I want to inspire discussion not a religious battle. It has been hard to keep it an incomplete thought. I’m not sure I have succeeded 😉

I’d like you to understand that I come at this from the perspective of someone who talks to providers of service (Cloud and otherwise) and large enterprises every day. Take that with a grain of whatever you enjoy ingesting. I have also read some really interesting viewpoints contrary to mine, many of which I find fascinating, even if they don’t square with my current interpretation of reality.

Here’s the deal…

While our attention has turned to the wonders of Cloud Computing — specifically the elastic, abstracted and agile delivery of applications and the content they traffic in — an interesting thing occurs to me related to the relevancy of networking in a cloudy world:

All this talk of how Cloud Computing commoditizes “infrastructure” and challenges the need for big-iron solutions really speaks to compute, and perhaps even storage, but doesn’t hold true for networking.

The evolution of these elements runs on different curves.

Networking ultimately is responsible for carting bits in and out of compute/storage stacks. This need continues to reliably intensify (beyond linear) as compute scale and densities increase. You’re not going to be able to satisfy that need by trying to play packet ping-pong and implement networking in software only on the same devices your apps and content execute on.

As (public) Cloud providers focus on scale/elasticity as their primary disruptive capability in the compute realm, there is an underlying assumption that the networking that powers it is magically just as scalable, and that you can replicate everything you do in big-iron networking and security hardware and replace it one-for-one with software in the compute stacks.

The problem is that it isn’t and you can’t.
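A rough back-of-the-envelope sketch illustrates why fabric demand outruns node count as compute densities grow. The per-pair flow figure below is a purely hypothetical assumption for illustration, not a measurement:

```python
# Illustrative sketch: east-west traffic in a flat cluster grows roughly
# with the number of communicating pairs, i.e. faster than node count.
# The 0.01 Gbps per-pair flow is an invented figure for illustration only.

def pairs(n: int) -> int:
    """Number of distinct node pairs in an n-node cluster."""
    return n * (n - 1) // 2

def demand_gbps(nodes: int, gbps_per_pair: float = 0.01) -> float:
    """Aggregate fabric demand if every pair exchanges a small flow."""
    return pairs(nodes) * gbps_per_pair

for n in (100, 1_000, 10_000):
    print(f"{n:>6} nodes -> ~{demand_gbps(n):,.0f} Gbps aggregate demand")
```

Growing the cluster 10x grows the pairwise demand roughly 100x, which is the sense in which the need intensifies “beyond linear.”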

Cloud providers are already hamstrung by how they can offer rich networking and security options in their platforms given architectural decisions they made at launch – usually the pieces of architecture that provide for I/O and networking (such as the hypervisor in IaaS offerings.) There is very real pain and strain occurring in these networks. In Cloud IaaS solutions, the very underpinnings of the network will be the differentiation between competitors. It already is today.

With the enormous I/O requirements of virtualized infrastructure and the massive bandwidth demands that rich applications, video and mobility are starting to place on connectivity, Cloud providers, ISPs, telcos, last-mile operators, and enterprises are pleading for multi-terabit switching fabrics in their datacenters to deal with load *today.*

I was reminded of this today, once again, by the announcement of a 322 Terabit per second switch. Some people shrugged. Generally these are people who outwardly do not market that they are concerned with moving enormous amounts of data and abstract away much of the connectivity that is masked by what a credit card and web browser provide. Those that didn’t shrug are those providers who target a different kind of consumer of service.

Abstraction has become a distraction.

Raw networking horsepower, especially for those who need to move huge amounts of data between all those hyper-connected cores running hundreds of thousands of VMs or processes, is still a huge need.

Before you simply think I’m being a shill because I work for a networking vendor (and the one that just announced that big switch referenced above,) please check out my relevant writings on this viewpoint, which I have held for years: we need *both* hardware- and software-based networking to scale efficiently, and the latter simply won’t replace the former.

Virtualization and Cloud exacerbate the network-centric issues we’ve had for years.

I look forward to the pointers to the sustainable, supportable and scalable 322 Tb/s software-based networking solutions I can download and implement today as a virtual appliance.

It’s very interesting to see that now that infrastructure-as-a-service (IaaS) players like Amazon Web Services are clawing their way “up the stack” and adding more platform-as-a-service (PaaS) capabilities, Microsoft is going “down stack” and providing IaaS capabilities by way of adding RDP and VM capabilities to Azure.

Microsoft is expected to add support for Remote Desktops and virtual machines (VMs) to Windows Azure by the end of March, and the company also says that prices for Azure, now a baseline $0.12 per hour, will be subject to change every so often.

Prashant Ketkar, marketing director for Azure, said that the service would be adding Remote Desktop capabilities as soon as possible, as well as the ability to load and run virtual machine images directly on the platform. Ketkar did not give a date for the new features, but said they were the two most requested items.
…

This move begins a definite trend away from the original concept for Azure in design and execution. It was originally thought of as a programming platform only: developers would write code directly into Azure, creating applications without even being aware of the underlying operating system or virtual instances. It will now become much closer in spirit to Amazon Web Services, where users control their machines directly. Microsoft still expects Azure customers to code for the platform and not always want hands on control, but it is bowing to pressure to cede control to users at deeper and deeper levels.

One major reason for the shift is that there are vast arrays of legacy Windows applications users expect to be able to run on a Windows platform, and Microsoft doesn’t want to lose potential customers because they can’t run applications they’ve already invested in on Azure. While some users will want to start fresh, most see cloud as a way to extend what they have, not discard it.

This sets the path for those enterprise customers running Hyper-V internally to take those VMs and run them on (or in conjunction with) Azure.

Besides the obvious competition with AWS in the public cloud space, there’s also a private cloud element. As it stands now, one of the primary differentiators for VMware from the private-to-public cloud migration/portability/interoperability perspective is the concept that if you run vSphere in your enterprise, you can take the same VMs without modification and move them to a service provider who runs vCloud (based on vSphere.)

Sitting in an impressive room at the Google campus in Mountain View last month, I asked the collective group of brainpower a slightly rhetorical question:

How much longer do you feel pure-play Infrastructure-As-A-Service will be a relevant service model within the spectrum of cloud services?

I couched the question with previous “incomplete thoughts*” relating to the move “up-stack” by IaaS providers — providing value-added, at-cost services to both differentiate and soften the market for what I call the “PaaSification” of the consumer. I also highlighted the move “down-stack” by SaaS vendors building out platforms to support a broader ecosystem and value proposition.

In the long term, I think ultimately the trichotomy of the SPI model will dissolve thanks to commoditization and the need for providers to differentiate — even at mass scale. We’ll ultimately just talk about service delivery and the platform(s) used to deliver them. Infrastructure will enable these services, of course, but that’s not where the money will come from.

Just look at the approach of providers such as Amazon, Terremark and Savvis and how they are already clawing their way up the PaaS stack, adding more features and functions that either equalize public cloud capabilities with those of the enterprise or even differentiate from them. Look at Microsoft’s Azure. How about Heroku, Engine Yard, Joyent? How about VMware and SpringSource? All platform plays. Develop, click, deploy.

As I mention in my Cloudifornication presentation, I think that from a security perspective, PaaS offers the potential of eliminating entire classes of vulnerabilities in the application development lifecycle by enforcing sanitary programmatic practices across the derivative works built upon it. I also look forward to APIs and standards that allow for consistency across providers. I think PaaS has the greatest potential to deliver this.

There are clearly trade-offs here, but as we start to move toward the two key differentiators (at least for public clouds) — management and security — I think the value of PaaS will really start to shine.

Probably just another bout of obviousness, but if I were placing bets, this is where I’d sink my nickels.

Yesterday I had a lively discussion with Lori MacVittie about the notion of what she described as “edge” service placement of network-based WebApp firewalls in Cloud deployments. I was curious about the notion of where the “edge” is in Cloud, but assuming it’s at the provider’s connection to the Internet as was suggested by Lori, this brought up the arguments in the post above: how does one roll out compensating controls in Cloud?

The level of difficulty and need to integrate controls (or any “infrastructure” enhancement) definitely depends upon the Cloud delivery model (SaaS, PaaS, and IaaS) chosen and the business problem trying to be solved; SaaS offers the least amount of extensibility from the perspective of deploying controls (you don’t generally have any access to do so) whilst IaaS allows a lot of freedom at the guest level. PaaS is somewhere in the middle. None of the models are especially friendly to integrating network-based controls not otherwise supplied by the provider due to what should be pretty obvious reasons — the network is abstracted.
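The extensibility spectrum described above can be sketched as a simple lookup. The labels are my paraphrase of the text, not any formal taxonomy:

```python
# Sketch of the control-deployment spectrum per Cloud delivery model.
# Descriptions paraphrase the discussion above; not a formal taxonomy.
EXTENSIBILITY = {
    "SaaS": "least extensible: generally no customer access to deploy controls",
    "PaaS": "middle ground: limited, application-layer hooks only",
    "IaaS": "most freedom, but only at the guest level; the network is abstracted",
}

def guest_level_controls_possible(model: str) -> bool:
    """Only IaaS hands the customer a guest in which to place controls."""
    return model == "IaaS"

for model, scope in EXTENSIBILITY.items():
    print(f"{model}: {scope}")
```

In every model, network-based controls remain the provider’s domain, which is the crux of the argument that follows.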

So here’s the rub: if MSSPs/ISPs/ASPs-cum-Cloud operators want to woo mature enterprise customers to use their services, they are leaving money on the table and not fulfilling customer needs by failing to roll out complementary security capabilities which lessen the compliance and security burdens of their prospective customers.

While many provide commoditized solutions such as anti-spam and anti-virus capabilities, more complex (but profoundly important) security services such as DLP (data loss/leakage prevention,) WAF, Intrusion Detection and Prevention (IDP,) XML Security, Application Delivery Controllers, VPN’s, etc. should also be considered for roadmaps by these suppliers.

Think about it, if the chief concern in Cloud environments is security around multi-tenancy and isolation, giving customers more comfort besides “trust us” has to be a good thing. If I knew where and by whom my data is being accessed or used, I would feel more comfortable.

Yes, it’s difficult to do properly and in many cases means the Cloud provider has to make a substantial investment in delivery platforms and management/support integration to get there. This is why niche players who target specific verticals (especially those heavily regulated) will ultimately have the upper hand in some of these scenarios – it’s not socialist security where “good enough” is spread around evenly. Services like these need to be configurable (SELF-SERVICE!) by the consumer.

An example? How about Google: where’s DLP integrated into the messaging/apps platforms? Amazon AWS: where’s IDP integrated into the VMM for introspection?

I wrote a couple of interesting posts about this (they may show up in the automated related-posts list below).

My customers in the Fortune 500 complain constantly that the biggest providers they are being pressured to consider for Cloud services aren’t listening to these requests — or aren’t in a position to respond.

That’s bad for everyone.

So how about it? Are services like DLP, IDP, WAF integrated into your Cloud providers’ offerings something you’d like to see rather than having to add additional providers as brokers and add complexity and cost back into Cloud?

Update 2/1/10: The A6 effort is in full swing. You can find out more about it at the Google Groups here.

A few weeks ago I penned a blog post discussing an idea I presented at a recent Public Sector Cloud gathering that later inherited the name “Audit, Assertion, Assessment, and Assurance API (A6).”

The case for A6 is straightforward:

…take the capabilities of something like SCAP and embed a standardized and open API layer into each IaaS, PaaS and SaaS offering [Ed: At the API layer of each deployment model] to provide not only a standardized way of scanning for network vulnerabilities, but also configuration management, asset management, patch remediation, compliance, etc.

This way you win two ways: automated audit and security-management capability for the customer/consumer, and a streamlined, cost-effective, and responsive way of automating the validation of said controls in relation to compliance, SLA and legal requirements for service providers.
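As a concrete (and entirely hypothetical) sketch of what an A6-style assertion exchange might look like: the endpoint path, field names and values below are my invention for illustration, since no specification existed at the time of writing:

```python
import json

# Hypothetical A6-style assertion document. All field names and the
# endpoint path are invented for illustration; they are not part of
# any spec or of the draft interface referenced in the text.
def build_assertion(provider: str, control: str, status: str) -> str:
    """Serialize a single audit assertion a provider might expose."""
    doc = {
        "provider": provider,
        "control": control,                        # e.g. a patch-remediation check
        "status": status,                          # "pass" / "fail"
        "endpoint": f"/a6/assertions/{control}",   # invented path
    }
    return json.dumps(doc)

def is_compliant(doc_json: str) -> bool:
    """Consumer-side check: did the asserted control pass?"""
    return json.loads(doc_json)["status"] == "pass"

assertion = build_assertion("example-iaas", "patch-remediation", "pass")
print(is_compliant(assertion))  # prints True
```

A consumer could poll such an endpoint per control and fold the results into its own compliance reporting, which is the “automated validation” half of the win described above.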

Much discussion ensued on Twitter and via email/blogs explaining A6 in better detail and with more specificity.

The idea has since grown legs and I’ve started to have some serious discussions with “people” (*wink wink*) who are very interested in making this a reality, especially in light of business and technical use cases bubbling to the surface of late.

To that end, Ben (@ironfog) has taken the conceptual mumblings and begun work on a RESTful interface for A6. You can find the draft documentation here. You can find his blog and awesome work on making A6 a reality here. Thank you so much, Ben.

NOTE: The documentation/definitions below are conceptual and stale. I’ve left them here because they are important and relevant but are likely not representative of the final work product.

I’m thinking of pulling together a more formalized working group for A6 and pushing hard with some of those “people” above to get better definition around its operational realities, as well as to understand the best way to create an open and extensible standard going forward.

We talk a lot about the utility of Public Clouds to enable the cost-effective and scalable implementation of “server” functionality. Whether it’s the SaaS, PaaS, or IaaS model, the concept is pretty well understood: use someone else’s infrastructure to host your applications and information.

As it relates to the desktop/client side of Cloud, we normally think about hosting the desktop/client capabilities as a function of Private Cloud capabilities; behind the firewall. Whether we’re talking about terminal-service-like capabilities or VDI, it seems to me people continue to think of this as a predominantly “internal” opportunity.

I don’t think people are talking enough about the client side of Cloud and desktop as a service (DaaS) and what this means:

If the physical access methods continue to get skinnier (smart phones, thin clients, client hypervisors, virtual machines, etc.) is there an opportunity for providers of Infrastructure as a Service to host desktop instances outside a corporate firewall? If I can take advantage of all of the evolving technology in the space and couple it with the same sorts of policy advancements, networking and VPN functionality to connect me to IaaS server resources running in Private or Public Clouds, isn’t that a huge opportunity for further cost savings, distributed availability and potentially better security?

There are companies such as Desktone looking to do this very thing as a way to offset the costs of VDI and further the efforts of consolidation. It makes a lot of sense for lots of reasons and, despite my lack of hands-on exposure to the technology, it sure looks like we have the technical capability to do this today. Dana Gardner wrote about this back in 2007, and the points are as valid now as they were then — albeit with a much bigger uptake in Cloud:

The stars and planets finally appear to be aligning in a way that makes utility-oriented delivery of a full slate of client-side computing and resources an alternative worth serious consideration. As more organizations are set up as service bureaus — due to such IT industry developments as ITIL and shared services — the advent of off-the-wire everything seems more likely in many more places.

I could totally see how Amazon could offer the same sorts of workstation utility as they do for server instances.