
VMware today announced VMware Integrated OpenStack (VIO) 5.0. We are truly excited about our latest OpenStack distribution as VMware is one of the first companies to support and provide enhanced stability on top of the newest OpenStack Queens Release. Available in both Carrier and Data Center Editions, VIO 5.0 enables customers to take advantage of advancements in Queens to support mission-critical workloads, and adds support for the latest versions of VMware products including vSphere, vSAN, and NSX.

For our Telco/NFV customers, VIO 5.0 is about delivering scale and availability for hybrid applications across VM- and container-based workloads using a single VIM (Virtual Infrastructure Manager). VIO 5.0 also helps NFV operators fast-track a path toward edge computing with VIO-in-a-box, secure multi-tenant isolation, and accelerated network performance using the enhanced NSX-T VDS (N-VDS). For VIO Data Center customers, advanced security, a simplified user experience, and advanced networking with DNSaaS have long been at the top of the wish list, and we are excited to bring those features to VIO 5.0.

Heterogeneous Cluster using Node Groups: You can now have different types of worker nodes in the same cluster. Extending the cluster node profiles feature introduced in VIO 4.1, a cluster can now have multiple node groups, each mapping to a single node profile. Instead of building isolated special-purpose Kubernetes clusters, a cloud admin can introduce new node groups to accommodate heterogeneous applications such as machine learning, artificial intelligence, and video encoding. If resource usage exceeds a node group's limit, VIO 5.0 supports cluster scaling at the node group level. With node groups, cloud admins can address cluster capacity based on application requirements, allowing the most efficient use of available resources.
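The node-group idea can be sketched as follows. The group names, profiles, and limits below are hypothetical illustrations of the concept, not VIO's actual node-group API:

```python
# Illustrative sketch: node groups backed by different node profiles, each
# with its own scaling limit. Group/profile names are made up for the example.

NODE_GROUPS = {
    "general": {"profile": "m1.medium", "gpu": False, "max_nodes": 10, "nodes": 4},
    "ml":      {"profile": "g1.large",  "gpu": True,  "max_nodes": 4,  "nodes": 2},
}

def place_workload(needs_gpu: bool) -> str:
    """Pick the node group whose profile matches the workload's needs."""
    for name, group in NODE_GROUPS.items():
        if group["gpu"] == needs_gpu:
            return name
    raise LookupError("no matching node group")

def scale_group(name: str, delta: int) -> int:
    """Scale a single node group, respecting its per-group limit."""
    group = NODE_GROUPS[name]
    group["nodes"] = min(group["nodes"] + delta, group["max_nodes"])
    return group["nodes"]
```

A GPU-hungry workload lands in the "ml" group, and scaling that group never exceeds its own limit, leaving the "general" group untouched.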

Enhanced Cluster Manageability: The vkube heal and vkube ssh commands let you SSH directly into any node of a given cluster and recover failed cluster nodes based on etcd state, or from a cluster backup in the case of complete failure.

Advanced Networking:

N-VDS: Also known as NSX-T VDS in enhanced data-path mode. Enhanced, because N-VDS runs in DPDK mode, allowing containers and VMs to achieve significantly lower network latencies and breakthrough network performance. With performance similar to SR-IOV, while maintaining the operational simplicity of virtualized NICs, NFV customers can have their cake and eat it too.

NSX-V Search Domain: A new configuration setting in the NSX-V plugin enables the admin to configure a global search domain. Tenants use this search domain if no other search domain is set on the subnet.
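The fallback behavior amounts to a simple precedence rule, sketched below; the parameter names are illustrative, not the actual NSX-V plugin configuration keys:

```python
# Minimal sketch of search-domain resolution: a subnet's own search domain
# wins; otherwise the admin's global search domain applies.
from typing import Optional

def effective_search_domain(subnet_domain: Optional[str], global_domain: str) -> str:
    return subnet_domain if subnet_domain else global_domain
```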

NSX-T Availability Zone (AZ): An availability zone makes network resources highly available by grouping network nodes that run services such as DHCP, L3, and NAT. Users can associate applications with an availability zone for high availability. In previous releases, Neutron availability zones were supported only with NSX-V; VIO 5.0 extends this support to NSX-T as well.

Security and Metering:

Keystone Federation: Federated identity provides a way to securely access cloud resources such as servers, volumes, and databases across multiple endpoints in multiple authorized clouds using a single set of credentials. VIO 5.0 supports Keystone-to-Keystone (K2K) federation by designating a central Keystone instance as an Identity Provider (IdP), interfacing with LDAP or an upstream SAML2 IdP. Remote Keystone endpoints are configured as Service Providers (SPs), propagating authentication requests to the central Keystone. As part of the Keystone federation enhancements, we also support third-party IdPs in addition to the existing support for vIDM.

Gnocchi: Gnocchi is a TDBaaS (Time Series Database as a Service) project that was initially created under the Ceilometer umbrella. Rather than storing raw data points, it aggregates them before storing them. Because Gnocchi computes all the aggregations at ingestion, data retrieval is exceptionally fast. Gnocchi resolves performance bottlenecks in Ceilometer’s legacy architecture by providing an extremely robust foundation for the metric storage required for billing and monitoring. The legacy Ceilometer API service has been deprecated upstream and is no longer available in Queens. Instead, the Ceilometer API and functionality have been broken out into the Aodh, Panko, and Gnocchi services, all of which are fully supported in VIO 5.0.
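The aggregate-at-ingestion idea can be illustrated with a toy time-series store: each incoming sample updates a per-window aggregate instead of being stored raw, so reads never touch raw data. This is only a sketch of the concept; real Gnocchi archive policies support many granularities and aggregation methods:

```python
# Toy sketch of aggregation at ingestion (the core Gnocchi idea): keep only
# per-window sums and counts, so a mean query is a single dictionary lookup.
from collections import defaultdict

class TinyTSDB:
    def __init__(self, granularity: int = 300):
        self.granularity = granularity                 # window size in seconds
        self.windows = defaultdict(lambda: [0.0, 0])   # window_start -> [sum, count]

    def ingest(self, timestamp: int, value: float) -> None:
        window = timestamp - timestamp % self.granularity
        agg = self.windows[window]
        agg[0] += value
        agg[1] += 1

    def mean(self, window_start: int) -> float:
        total, count = self.windows[window_start]
        return total / count
```

Two samples landing in the same 300-second window collapse into one aggregate at write time, which is why retrieval stays fast regardless of how many raw points were ingested.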

Default Drop Policy: Enable this feature to ensure that traffic to a port that has no security groups and has port security enabled is always dropped.
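The decision rule described above can be sketched as a small predicate; the field names are assumptions for illustration, not the Neutron schema:

```python
# Sketch of the default-drop decision: traffic to a port with port security
# enabled but no security groups is dropped when the policy is switched on.
def should_drop(port_security_enabled: bool,
                security_groups: list,
                default_drop_policy: bool) -> bool:
    return (default_drop_policy
            and port_security_enabled
            and not security_groups)
```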

End to End Encryption: The cloud admin now has the option to enable API encryption for internal API calls in addition to the existing encryption on public OpenStack endpoints. When enabled, all internal OpenStack API calls will be sent over HTTPS using strong TLS 1.2 encryption. Encryption on internal endpoints helps avoid man-in-the-middle attacks if the management network is compromised.
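What "strong TLS 1.2" means for a client can be shown with Python's standard ssl module; this is not how VIO configures its services, it simply demonstrates a context that refuses anything older than TLS 1.2:

```python
# Illustrative only: a client-side SSL context with a TLS 1.2 floor and
# certificate verification on, analogous to the policy described above.
import ssl

context = ssl.create_default_context()
context.minimum_version = ssl.TLSVersion.TLSv1_2

# The default context also verifies server certificates, which is what
# defeats man-in-the-middle attempts on a compromised network segment.
assert context.verify_mode == ssl.CERT_REQUIRED
```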

Performance and Manageability:

VIO-in-a-box: Also known as the “Tiny” deployment. Instead of separate physical clusters for management and compute, the VMware Integrated OpenStack control and data planes can now be consolidated on a single physical server. This drastically reduces the footprint of a deployment and is ideal for edge computing scenarios where power and space are a concern. VIO-in-a-box can be preconfigured manually or fully automated with the OMS API.

Hardware Acceleration: GPUs are synonymous with artificial intelligence and machine learning. vGPU support gives OpenStack operators the same benefits for graphics-intensive workloads as traditional enterprise applications: specifically resource consolidation, increased utilization, and simplified automation. The video RAM on the GPU is carved up into portions. Multiple VM instances can be scheduled to access available vGPUs. Cloud admins determine the amount of vGPU each VM can access based on VM flavors. There are various ways to carve vGPU resources. Refer to the NVIDIA GRID vGPU user guide for additional detail on this topic.
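The carving arithmetic is straightforward: video RAM is split into equal framebuffer slices, and each slice backs one vGPU. The sizes below are illustrative; consult the NVIDIA GRID vGPU user guide for the real profile catalog:

```python
# Back-of-the-envelope vGPU carving: how many vGPUs of a given framebuffer
# size fit on one physical GPU. Profile sizes here are examples only.
def vgpus_per_gpu(gpu_vram_gb: int, profile_fb_gb: int) -> int:
    return gpu_vram_gb // profile_fb_gb

# e.g. a 24 GB card carved into 4 GB profiles yields 6 vGPUs per GPU,
# and the flavor a cloud admin assigns determines which profile a VM gets.
```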

OpenStack at Scale: VMware Integrated OpenStack 5.0 features improved scale, having been tested and validated to run 500 hosts and 15,000 VMs in a region. This release will also introduce support for multiple regions at once as well as monitoring and metrics at scale.

Elastic TvDC: A Tenant Virtual Datacenter (TvDC) can now span multiple clusters. Extending the single-cluster TvDCs introduced in VIO 4.0, VIO 5.0 lets cloud admins create several resource pools across multiple clusters, assigning the same name and project-id but a unique provider-id to each. When a tenant launches a new instance, the OpenStack scheduler and placement engine can schedule the VM request to any of the resource pools mapped to the TvDC.
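The placement idea can be sketched as a filter over resource pools; the field names and pool records are illustrative, not the actual placement schema:

```python
# Sketch of elastic TvDC placement: pools in different clusters share the
# same TvDC name and project-id but carry unique provider-ids, and any
# matching pool with capacity is a valid placement candidate.

POOLS = [
    {"tvdc": "gold",   "project_id": "p1", "provider_id": "cluster1-rp1", "free_vcpus": 8},
    {"tvdc": "gold",   "project_id": "p1", "provider_id": "cluster2-rp1", "free_vcpus": 16},
    {"tvdc": "silver", "project_id": "p2", "provider_id": "cluster1-rp2", "free_vcpus": 32},
]

def candidate_pools(project_id: str, tvdc: str, vcpus_needed: int) -> list:
    return [p["provider_id"] for p in POOLS
            if p["project_id"] == project_id
            and p["tvdc"] == tvdc
            and p["free_vcpus"] >= vcpus_needed]
```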

VMware at OpenStack Summit 2018:

VMware is a Premier Sponsor of OpenStack Summit 2018, which runs May 21-24 at the Vancouver Convention Centre in Vancouver, BC, Canada. If you are attending the Summit in person, we invite you to stop by VMware’s booth (located at A16) for feature demonstrations of VMware Integrated OpenStack 5 as well as VMware NSX and VMware vCloud NFV. Hands-on training is also available (RSVP required). The complete schedule of VMware breakout sessions, lightning talks, and training presentations can be found here.

VMware announced general availability (GA) of VMware Integrated OpenStack (VIO) 4.1 on Jan 18th, 2018. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Ocata release and support for the latest versions of VMware products across vSphere, vSAN, and NSX V|T (including NSX-T LBaaSv2). For OpenStack Cloud Admins, the 4.1 release is also about enhanced control: control over API throughput, virtual machine bandwidth (QoS), deployment form factor, and user management across multiple LDAP domains. For Kubernetes Admins, 4.1 is about enhanced tooling: tooling that enables control plane backup and recovery, integration with Helm and Heapster for simplified application deployment and monitoring, and centralized log forwarding. Finally, VIO deployment automation has never been more straightforward, thanks to the newly documented OMS API.

Public OMS API – The management server APIs used to automate deployment and lifecycle management of VMware Integrated OpenStack are now available for general consumption. Users can perform tasks such as provisioning an OpenStack cluster, starting and stopping the cluster, and gathering support bundles using the OMS public API. Users can also leverage the Swagger UI to check and validate API availability and specs.

Neutron QoS – Before VIO 4.1, a Nova image or flavor extra-spec controlled network QoS against the vCenter VDS. With VIO 4.1, a cloud administrator can leverage Neutron QoS to create a QoS profile and map it to ports or a logical switch. Any virtual machine associated with the port or logical switch inherits the predefined bandwidth policy.

Native NSX-T Load Balancer as a Service (LBaaS) – Before VIO 4.1, NSX-T customers had to bring their own Nginx or a third-party load balancer for application load balancing. With VIO 4.1, NSX-T LBaaSv2 can be provisioned through either Horizon or the Neutron LBaaS API. Each load balancer must map to an NSX-T Tier-1 logical router (LR); a missing LR, or an LR without a valid uplink, is not a supported topology.

Multiple-domain LDAP backend – VMware Integrated OpenStack 4.1 supports SQL plus one or more domains as identity sources. Up to 10 domains are supported, and each can use a different authentication backend. Cloud administrators can create, update, and delete domains and grant or revoke domain administrator rights. A domain administrator is a local administrator delegated to manage resources such as users, quotas, and projects for a specific domain. VIO 4.1 supports both AD and OpenDirectory as authentication backends.

4.1 NFV and Kubernetes Features:

VIO-in-a-box – Also known as the Tiny deployment. Instead of separate physical clusters for management and compute, a VIO deployment can now be consolidated on a single physical server. VIO-in-a-box drastically reduces the footprint and is suitable for environments that have neither high-availability requirements nor large workloads. It can be preconfigured manually or fully automated with the OMS API, and shipped as a single-RU appliance to any manned or unmanned data center where space, capacity, or availability of onsite support is a concern.

VM Import – Further expanding on VM import capabilities, you can now import vSphere VMs with multiple disks and NICs. Any VMDK not classified as the VM root disk is imported as a Cinder volume. Existing networks are imported as provider networks with access restricted to the given tenant. The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs is the foundation we are setting for future, more sophisticated availability use cases. Refer here for VM import instructions.

Networking passthrough – Traditionally, a Nova flavor or image extra-spec defined the workflow for hardware passthrough, without direct involvement of Neutron. VIO 4.1 introduces Neutron-based network passthrough device configuration. The Neutron-based approach allows cloud administrators to control and manage network settings such as MAC, IP, and QoS of a passthrough network device. Although both options will continue to be available, the recommendation going forward is to use the Neutron workflow for network devices and Nova extra-specs for all other hardware passthrough devices. Refer to the upstream and VMware documentation for details.

Enhanced Kubernetes support – VIO 4.1 ships with Kubernetes version 1.8.1. In addition to the latest upstream release, integration with the widely adopted deployment and monitoring tools Helm and Heapster is standard out of the box. VIO 4.1 with NSX-T 2.1 also lets you consume Kubernetes network security policies.

VIO Kubernetes support bundle – Opening support tickets couldn’t be simpler with the VIO Kubernetes support bundle. Using a single-line command that specifies the start and end dates, VIO Kubernetes captures logs from all components required to diagnose tenant-impacting issues within the specified time range.

VIO Kubernetes Log Insight integration – Cloud administrators can specify the FQDN of a Log Insight instance as the logging server. The current release supports a single logging server.

In my previous blog I wrote about VMware’s involvement in open source. The proliferation of open source projects in recent years has influenced how people think about technology, and how technology is being adopted in organizations, for a few reasons. First, open source is more accessible – developers can download projects from GitHub to their laptops and quickly start using them. Second, open source delivers cutting-edge capabilities, and companies leverage that to increase the pace of innovation. Third, developers love the idea that they can influence, customize and fix the code of the tools they’re using. Many companies are now adopting an “open source first” strategy with the hope that they will not only speed up innovation but also cut costs, as open source is free.

However, while developers increasingly adopt open source, it often doesn’t come easy to the DevOps and IT teams who carry the heavy burden of bringing applications from developer laptops to production. These teams have to think about stability, performance, security, upgrades, patching, and more. In those cases, enterprises are often happy to pay for an enterprise-grade version of the product in which all of those concerns are already taken care of.

When applications are ready to move to production…

OpenStack is a great example. Many organizations are keen to run their applications on top of an open source platform, also known to be the industry standard. But that doesn’t come without deployment and manageability challenges. That’s where VMware provides more value to customers.

VMware Integrated OpenStack (VIO) makes it easier for IT to deploy and run an OpenStack cloud on top of their existing VMware infrastructure. Combining VIO with the enterprise-grade capabilities of the VMware stack provides customers with the most reliable and production-ready OpenStack solution. There are three key reasons for this statement: a) VMware provides best-of-breed, production-ready, OpenStack-compatible infrastructure; b) VIO is fully tested for both business continuity and compatibility; and c) VMware delivers capabilities for day 2 operations. Let me go into detail on each of the three.

Best-of-breed OpenStack-compatible infrastructure

First, VMware Integrated OpenStack is optimized to run on top of VMware Software Defined Data Center (SDDC), leveraging all the enterprise-grade capabilities of VMware technologies such as high availability, scalability, security and so on.

VMware NSX for Neutron: advanced networking services with massive scale and throughput, and with rich set of capabilities such as private networks, floating IPs, logical routing, load balancing, security groups and micro-segmentation.

VMware vSAN/3rd party storage for Cinder/Glance: VIO works with any vSphere-validated storage (we have the largest hardware compatibility list in the industry). VIO also brings Advanced Storage Policies through VMware vSAN.

Battle hardened and tested

OpenStack can be deployed on many combinations of storage, network, and compute hardware and software, and from multiple vendors. Testing all combinations is a challenge and often times customers who choose the DIY route will have to test their combination of hardware and software for production workloads. VMware Integrated OpenStack, on the other hand, is battle-hardened and tested against all VMware virtualization technologies to ensure the best possible user experience from deployment to management (upgrades, patching, etc.) to usage. In addition, VMware provides the broadest hardware compatibility coverage in the industry today (that has been tested in production environments).

Finally, to add to all of this, another benefit is that our customers have only one vendor and one support number to call in case of a problem. No finger pointing, no need to juggle different support plans. Easy!

VMware Integrated OpenStack (VIO) is an OpenStack distribution supported by VMware, optimized to run on top of VMware’s SDDC infrastructure. In the past few months we have been hard at work, adding additional enterprise grade capabilities into VIO, making it even more robust, scalable and secure, yet keeping it easy to deploy, operate and use.

VMware Integrated OpenStack 4.0 is based on Ocata, and some of the highlights include:

– Containers support – users can run VMs alongside containers on VIO. Out-of-the-box container support enables developers to consume Kubernetes APIs, leveraging all the enterprise grade capabilities of VIO such as multi-tenancy, persistent volumes, high availability (HA), and so on.

– Integration with vRealize Automation – vRealize Automation customers can now embed OpenStack components in blueprints. They can also manage their OpenStack deployments through the Horizon UI as a tab in vRealize Automation. This integration provides additional governance as well as single-sign-on for users.

– Multi vCenter support – customers can manage multiple VMware vCenters with a single VIO deployment, for additional scale and isolation.

– Additional capabilities for better performance and scale, such as live resize of VMs (changing RAM, CPU and disk without shutting down the VM), Firewall as a Service (FWaaS), CPU pinning and more.

Our customers use VMware Integrated OpenStack for a variety of use cases, including:

Developer cloud – providing public cloud-like user experience to developers, as well as more choice of consumption (Web UI, CLI or API), self-service and programmable access to VMware infrastructure. With the new container management support, developers will be able to consume Kubernetes APIs.

Our customers tell us (consistently) that VIO is easy to deploy (“it just worked!”) and manage. Since it’s deployed on top of VMware virtualization technologies, they are able to deploy and manage it by themselves, without hiring new people or professional services. Their development and DevOps teams like VIO because it gives them the agility and user experience they want, with self-service and standard OpenStack APIs.

In most cases, within a short amount of time (a few weeks!), customers trust VIO enough to run their business-critical applications, such as an e-commerce website or an online travel system, in production.

VMware Integrated OpenStack will be available as a standalone product later this quarter. For more information go to our website, check out the product walkthrough and try out the hands-on lab.

If you are attending VMworld, please stop by our booth (#1139) to see demos and speak with OpenStack specialists. We’re looking forward to seeing you!

VMware announced general availability (GA) of VMware Integrated OpenStack 3.1 on Feb 21, 2017. We are truly excited about our latest OpenStack distribution, which gives our customers enhanced stability on top of the Mitaka release and a streamlined user experience with Single Sign-On support via VMware Identity Manager. For OpenStack Cloud Admins, the 3.1 release is also about enhanced integrations that allow Cloud Admins to further take advantage of battle-tested vSphere infrastructure and operations tooling, providing enhanced security, OpenStack API performance monitoring, brownfield workload migration, and seamless upgrade between central and distributed OpenStack management control planes.

NSX Policy Support in Neutron. NSX administrators can define security policies that the OpenStack Cloud Admin shares with cloud users. Depending on the policy set by the OpenStack Cloud Admin, users can either create their own rules, bounded by predefined rules that can’t be overridden, or use only the predefined rules. The NSX provider policy feature allows infrastructure admins to enable enhanced security insertion and assures that all workloads are developed and deployed based on standard IT security policies.

New NFV Features. Further expanding on VIO 3.0's ability to leverage existing workloads in your OpenStack cloud, you can now import vSphere VMs with NSX network backing into VMware Integrated OpenStack. The ability to import vSphere VM workloads into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework. VM import steps can be found here. In addition, full passthrough using VMware DirectPath I/O is supported.

Seamless update from compact mode to HA mode. If you are updating a VMware Integrated OpenStack 3.0 deployment in compact mode to 3.1, you can seamlessly transition to an HA deployment during the update. Upgrade docs can be found here.

Single Sign-On integration with VMware Identity Manager. You can now streamline authentication for your OpenStack deployment by integrating it with VMware Identity Manager. SSO integration steps can be found here.

If you’re ready to try VIO, take it for a spin with the Hands-on Lab, a step-by-step walkthrough of deploying OpenStack in compact management mode in under fifteen minutes.

Deploying OpenStack challenges even the most seasoned, skilled IT organizations, with integrations, configurations, testing, re-testing, stress testing, and more. For many, deploying OpenStack looks like an IT ‘science project’, wherein the light at the end of the tunnel dims with each passing month.

VMware Integrated OpenStack takes a different approach, reducing the redundancy and confusion of deploying OpenStack with the new Compact Management Control Plane. With the compact mode UI, you wait minutes, not months. Enterprises seeking to evaluate OpenStack, or those ready to build OpenStack clouds in the most cost-efficient manner, can now deploy in as little as 15 minutes.

The VMware Integrated OpenStack architecture is optimized to support compact architecture mode, reducing support needs, overall resource costs, and the operational complexity that keeps enterprises from completing their OpenStack adoption.

The most recent update to VMware Integrated OpenStack focuses on the ease of use and the immense benefit to administrators – access and integration to the VMware ecosystem. The seamless integration of the family of VMware products allows the administrators to leverage their current VMware products to enhance their OpenStack, in combination with the ability to manage workloads through developer friendly OpenStack APIs.


OpenStack doesn’t mandate defaults for compute, network and storage, which frees you to select the best technology. For many VMware customers, the best choice will be vSphere to provide OpenStack Nova compute capabilities.

It is commonly asserted that KVM is the only hypervisor to use in an OpenStack deployment. Yet every significant commercial OpenStack distro supports vSphere. The reasons for this broad support are clear.

Costs for commercial KVM are comparable to vSphere. In addition, vSphere has tremendous added benefits: widely available and knowledgeable staff, vastly simplified operations, and proven lifecycle management that can keep up with OpenStack’s rapid release cadence.

Let’s talk first about cost. Traditional, commercial KVM has a yearly recurring support subscription price. Red Hat OpenStack Platform (Standard, 2 sockets) can be found online at $11,611/year, making the three-year cost around $34,833[i]. VMware vSphere with Operations Management Enterprise Plus (multiplied by 2 to match Red Hat’s socket-pair pricing) for 3 years, plus the $200/CPU/year VMware Integrated OpenStack SnS, is $14,863[ii]. Even when a customer uses vCloud Suite Advanced, costs are on par with Red Hat. (Red Hat has often compared prices using VMware’s vCloud Suite Enterprise license to exaggerate cost differences.)
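The arithmetic behind those figures is worth making explicit. The Red Hat side follows directly from the per-year price; the vSphere total is quoted in the text rather than derived here, since its list-price breakdown isn't given:

```python
# Worked version of the three-year comparison above.
redhat_per_year = 11_611          # USD per socket pair per year (quoted)
redhat_3yr = 3 * redhat_per_year  # three-year Red Hat total

vsphere_3yr = 14_863              # USD, quoted total incl. $200/CPU/yr VIO SnS

assert redhat_3yr == 34_833
# Difference over three years, per socket pair:
savings = redhat_3yr - vsphere_3yr  # 19,970
```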

When 451 Research[iii] compared distro costs based on a “basket” of total costs in 2015 they found that commercial distros had a cost that was close to regular virtualization. And if VMware Integrated OpenStack (VIO) is the point of comparison, the costs would likely be even closer. The net-net is that cost turns out not to be a significant differentiator when it comes to commercial KVM compared with vSphere. This brings us to the significant technical and operational benefits vSphere brings to an OpenStack deployment.

In the beginning, it was assumed that OpenStack apps would build in the resiliency that used to be assumed of a vSphere environment, thus allowing vSphere to be removed. As the OpenStack project has matured, capabilities such as VMware vMotion and DRS (Distributed Resource Scheduler) have risen in importance to end users. Regardless of the application, the stability and reliability of the underlying infrastructure matter.

There are two sets of reasons to adopt OpenStack on vSphere.

First, you can use VIO to quickly (minutes or hours instead of days or weeks) build a production-grade, operational OpenStack environment with the IT staff you already have, leveraging the battle-tested infrastructure your staff already knows and relies on. No other distro uses a rigorously tested combination of best-in-class compute (vSphere Ent+ for Nova), network (NSX for Neutron), and storage (VSAN for Cinder).

Second, only VMware, a long-time (since 2012), active (consistently a top 10 code contributor) OpenStack community member provides BOTH the best underlying infrastructure components AND the ongoing automation and operational tools needed to successfully manage OpenStack in production.

In many cases, it all adds up to vSphere being the best choice for production OpenStack.

Senlin is a new OpenStack project that provides a generic clustering service for OpenStack clouds. It’s capable of managing homogeneous objects exposed by other OpenStack components, including Nova, Heat, or Cinder, making it of interest to anyone using, or thinking of using, VMware Integrated OpenStack.

Voelker opens by reviewing the generic requirements for OpenStack clustering, which include simple manageability, expandability on demand, load-balancing, customizability to real-life use cases, and extensibility.

OpenStack already offers limited cluster management capabilities through Heat’s orchestration service, he notes. But Heat’s mission is to orchestrate composite cloud apps using a declarative template format through an OpenStack-native API. While functions like auto-scaling, high availability, and load balancing are complementary to that mission, having those functions all in a single service isn’t ideal.

“We thought maybe we should think about cluster management as a first class service that everything else could tie into,” Voelker recalls, which is where Senlin comes in.

Teng then describes Senlin’s origin, which started as an effort to build within Heat, but soon moved to offload Heat’s autoscaling capabilities into a separate project that expanded OpenStack autoscaling offerings more comprehensively, becoming OpenStack’s first dedicated clustering service.

Senlin is designed to be scalable, load-balanced, highly-available, and manageable, Teng explains, before outlining its server architecture and detailing the operations it supports. “Senlin can manage almost any object,” he says. “It can be another server, a Heat stack, a single volume or floating IP protocol, we don’t care. We wanted to just build a foundational service allowing you to manage any type of resource.”
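That "manage any type of resource" idea can be sketched as a profile-driven cluster reconciled toward a desired size. This mirrors the concept only; Senlin's real API, profiles, and policies are far richer:

```python
# Generic-clustering sketch in the spirit of Senlin: a cluster holds
# homogeneous objects created from a profile (a factory for any object
# type) and is reconciled toward a desired capacity.

class Cluster:
    def __init__(self, profile, desired_capacity: int):
        self.profile = profile        # callable that builds one member
        self.desired = desired_capacity
        self.members = []

    def reconcile(self) -> int:
        """Create or remove members until actual size matches desired."""
        while len(self.members) < self.desired:
            self.members.append(self.profile())
        while len(self.members) > self.desired:
            self.members.pop()
        return len(self.members)

    def scale(self, delta: int) -> int:
        """Scale out (positive delta) or in (negative), never below zero."""
        self.desired = max(0, self.desired + delta)
        return self.reconcile()
```

Because the profile is just a factory, the same cluster machinery works whether a member is a server, a Heat stack, or a volume, which is the point Teng makes above.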

To end the session, Li offers a demo of how Senlin creates a resilient, auto-scaling cluster with both high availability and load balancing in as little as five minutes.

On 9/30/2016, VMware announced VMware Integrated OpenStack 3.0 at VMworld in Las Vegas. We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the latest Mitaka release, an optimized management control plane architecture, and the ability to leverage existing workloads in your OpenStack cloud.

OpenStack Mitaka Support
VMware Integrated OpenStack 3.0 customers can leverage the great features and enhancements in the latest OpenStack release. Mitaka improves manageability, scalability, and the overall user experience. To learn more about the Mitaka release, visit the OpenStack.org site at https://www.openstack.org/software/mitaka/

Easily Import Existing Workloads
The ability to directly import vSphere VMs into OpenStack and run critical Day 2 operations against them via OpenStack APIs enables you to quickly move existing development projects or production workloads to the OpenStack framework.

Compact Management Control Plane
Building on enhancements from previous releases, organizations looking to evaluate OpenStack or to build OpenStack clouds for branch locations quickly and cost effectively can easily deploy in as little as 15 minutes. The VMware Integrated OpenStack 3.0 architecture has been optimized to support a compact architecture mode that dramatically reduces the infrastructure footprint saving resource costs and overall operational complexity.

If you are at VMworld 2016 in Las Vegas, we invite you to attend the following sessions to hear how our customers are using VMware Integrated OpenStack and to learn more about this great release.

Their session opens with HedgeServ Global Director of Platform Engineering Isa Berisha describing how the company uses OpenStack and why it picked VIO to run HedgeServ’s OpenStack deployment.

Adopting OpenStack was the company’s best option for meeting the rapidly escalating demand it was facing, Berisha explains. But his team was stymied by the limitations of the professional and managed-services OpenStack deployments available to them, which were rife with issues of quality, expense, speed of execution, and vendor instability. Eventually they decided to try VIO, despite it being new at that point and little known in the industry.

“We downloaded it, deployed it, and something crazy happened: it just worked,” recalls Berisha. “If you’ve tried to do your own OpenStack deployments, you can understand how that feels.”

HedgeServ now uses VIO to run up to 1,000 large Windows instances (averaging 64GB) managed by a team of just two. vSphere and ESXi’s speed, and their ability to handle hosts with up to 10TB of RAM and hundreds of physical cores, have been key elements of HedgeServ’s success so far, explains senior engineer McAteer. But VIO’s storage and management capabilities have also been essential, as have its stability and the fact that it’s backed by VMware support and easily upgraded and updated live.

Simplifying operations around OpenStack and making it easier to maintain OpenStack in a production deployment has been a major focus for VMware’s VIO development team, adds VMware product manager Sundararaman in the second half of the presentation.

“There are a lot of workflows that we’ve added to make it really easy to operate OpenStack,” he says.

As an example, Sundararaman demos VIO’s OpenStack upgrade process, showing how it’s achieved entirely via the VIO control panel in a vSphere web client plugin. VIO uses the blue-green upgrade paradigm to stand up a completely new control plane based on the new distribution, he explains, and then migrates the data and configurations from the old to the new control plane, while leaving the deployed workloads untouched.
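The blue-green pattern Sundararaman describes can be sketched in a few lines; this is purely an illustration of the pattern, not VIO's upgrade implementation:

```python
# Conceptual blue-green control-plane upgrade: stand up the new ("green")
# plane beside the old ("blue"), migrate data, then flip traffic over.
# Deployed workloads are never touched by the switch.

def blue_green_upgrade(blue: dict, new_version: str) -> dict:
    green = {"version": new_version, "data": dict(blue["data"]), "active": False}
    # cut over only once the data migration has completed
    assert green["data"] == blue["data"]
    blue["active"], green["active"] = False, True
    return green

old_plane = {"version": "1.0", "data": {"tenants": 42}, "active": True}
new_plane = blue_green_upgrade(old_plane, "2.0")
```

Because the old plane stays intact until the flip, a failed migration simply leaves the blue plane active, which is what makes the upgrade path McAteer praises so forgiving.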

“We ran and upgraded VIO from 1.0 to 2.0 without talking to Santhosh,” adds McAteer. “That’s a big deal in the OpenStack world but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy.”