Monthly Archives: August 2016

On 9/30/2016, VMware announced VMware Integrated OpenStack 3.0 at VMworld in Las Vegas. We are truly excited about our latest OpenStack distribution, which gives our customers the new features and enhancements included in the latest Mitaka release, an optimized management control plane architecture, and the ability to import existing workloads into your OpenStack cloud.

OpenStack Mitaka Support
VMware Integrated OpenStack 3.0 customers can take advantage of the features and enhancements in the latest OpenStack release. Mitaka improves manageability and scalability and delivers a better user experience. To learn more about the Mitaka release, visit the OpenStack.org site at https://www.openstack.org/software/mitaka/

Easily Import Existing Workloads
You can now import vSphere VMs directly into OpenStack and run critical Day 2 operations against them via OpenStack APIs, making it easy to quickly move existing development projects or production workloads onto the OpenStack framework.

Compact Management Control Plane
Building on enhancements from previous releases, organizations looking to evaluate OpenStack, or to build OpenStack clouds for branch locations quickly and cost-effectively, can deploy in as little as 15 minutes. The VMware Integrated OpenStack 3.0 architecture has been optimized to support a compact architecture mode that dramatically reduces the infrastructure footprint, cutting resource costs and overall operational complexity.

If you are at VMworld 2016 in Las Vegas, we invite you to attend the following sessions to hear how our customers are using VMware Integrated OpenStack and to learn more about this great upcoming release.

The session opens with HedgeServ Global Director of Platform Engineering Isa Berisha describing how the company uses OpenStack and why it picked VIO to run HedgeServ’s OpenStack deployment.

Adopting OpenStack was the company’s best option for meeting the rapidly escalating demand it was facing, Berisha explains. But his team was stymied by the limitations of the professional and managed-services OpenStack deployments available to them, which were rife with problems of quality, expense, speed of execution, and vendor instability. Eventually they decided to try VIO, despite it being new at that point and little known in the industry.

“We downloaded it, deployed it, and something crazy happened: it just worked,” recalls Berisha. “If you’ve tried to do your own OpenStack deployments, you can understand how that feels.”

HedgeServ now uses VIO to run up to 1,000 large Windows instances (averaging 64GB) managed by a team of just two. vSphere and ESXi’s speed, and their ability to handle hosts with up to 10TB of RAM and hundreds of physical cores, have been key elements of HedgeServ’s success so far, explains senior engineer McAteer. But VIO’s storage and management capabilities have also been essential, as have its stability and the fact that it’s backed by VMware support and can be easily upgraded and updated live.

Simplifying operations around OpenStack and making it easier to maintain OpenStack in a production deployment has been a major focus for VMware’s VIO development team, adds VMware product manager Santhosh Sundararaman in the second half of the presentation.

“There are a lot of workflows that we’ve added to make it really easy to operate OpenStack,” he says.

As an example, Sundararaman demos VIO’s OpenStack upgrade process, showing how it’s achieved entirely via the VIO control panel in a vSphere web client plugin. VIO uses the blue-green upgrade paradigm to stand up a completely new control plane based on the new distribution, he explains, and then migrates the data and configurations from the old to the new control plane, while leaving the deployed workloads untouched.
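The blue-green flow described above can be sketched generically. The following is an illustrative Python outline of the pattern only, not VIO's actual implementation; all function and variable names here are hypothetical:

```python
# Illustrative sketch of the blue-green upgrade pattern: build the new
# ("green") control plane alongside the running ("blue") one, copy state
# across, then cut the API endpoint over. Workloads run untouched throughout.

def blue_green_upgrade(blue, build_green, migrate, switch_traffic):
    """Upgrade by standing up a new control plane and cutting over.

    blue           -- the running control plane (left intact for rollback)
    build_green    -- callable that deploys the new control plane
    migrate        -- callable(old, new) that copies data and configuration
    switch_traffic -- callable(new) that repoints the API endpoint at `new`
    """
    green = build_green()          # stand up the new control plane
    migrate(blue, green)           # copy data and configuration across
    switch_traffic(green)          # cut-over; `blue` remains for rollback
    return green
```

The key property the pattern gives you is the one McAteer praises below: at no point is the old control plane modified in place, so a failed upgrade simply means never switching traffic.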

“We ran and upgraded VIO from 1.0 to 2.0 without talking to Santhosh,” adds McAteer. “That’s a big deal in the OpenStack world but it just, again, worked. The blue-green nature of the upgrade path really makes that very easy.”

Some developers may avoid utilizing an OpenStack cloud, despite the advantages they stand to gain, because they have already established automation workflows using popular open source tools with public cloud providers.

But in a joint presentation to the 2016 OpenStack Summit, VMware Senior Technical Marketing Manager Trevor Roberts Jr. and VMware NSX Engineering Architect Scott Lowe explain how developers can take advantage of OpenStack clouds using the same open source tools that they already know.

Their talk runs through a configuration sequence, including image management, dev/test, and production deployment, showing how standard open source tools that developers already use for non-OpenStack deployments can run in exactly the same way with an OpenStack cloud. In this case, they discuss using Packer for image building, Vagrant and Docker Machine for software development and testing, and Terraform for production-grade deployments.
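To make the image-building step concrete, a Packer template using its OpenStack builder might look like the sketch below. Every ID and name is a placeholder, and authentication is assumed to come from the usual `OS_AUTH_URL`/`OS_USERNAME`/`OS_PASSWORD` environment variables:

```json
{
  "builders": [
    {
      "type": "openstack",
      "source_image": "REPLACE-WITH-BASE-IMAGE-UUID",
      "flavor": "m1.small",
      "ssh_username": "ubuntu",
      "image_name": "ubuntu-lamp",
      "networks": ["REPLACE-WITH-NETWORK-UUID"]
    }
  ],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "sudo apt-get update",
        "sudo apt-get install -y apache2 mysql-client"
      ]
    }
  ]
}
```

Running `packer build` against a template like this boots a temporary instance, runs the provisioners, and snapshots the result into Glance as a reusable image.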

“This is a way of using an existing tool that you’re already comfortable and familiar with to start consuming your OpenStack cloud,” says Roberts, before demoing an OpenStack customized image build with Packer and then using that image to create and deploy an OpenStack-provisioned instance with Vagrant.
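For the Vagrant step, the community vagrant-openstack-provider plugin lets a Vagrantfile target an OpenStack cloud instead of a local hypervisor. A sketch along those lines, with placeholder credentials and names (the image name echoes a hypothetical Packer-built image):

```ruby
# Requires: vagrant plugin install vagrant-openstack-provider
Vagrant.configure('2') do |config|
  config.vm.box       = 'openstack'   # dummy box; the provider does the work
  config.ssh.username = 'ubuntu'

  config.vm.provider :openstack do |os|
    os.openstack_auth_url = 'https://keystone.example.com:5000/v2.0'
    os.tenant_name        = 'demo'
    os.username           = 'demo'
    os.password           = 'secret'
    os.flavor             = 'm1.small'
    os.image              = 'ubuntu-lamp'   # e.g. the image built with Packer
    os.floating_ip_pool   = 'ext-net'
  end
end
```

With that in place, `vagrant up` boots the instance in the OpenStack cloud with the same workflow developers already use locally.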

Lowe next profiles Docker Machine, which provisions instances of the Docker Engine for testing, and shows how you can use Docker Machine to spin up instances of Docker inside an OpenStack cloud.
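Driving Docker Machine's OpenStack driver looks roughly like the following; the endpoint, credentials, and resource names are all placeholders:

```shell
# Provision a VM in the OpenStack cloud and install Docker Engine on it
docker-machine create -d openstack \
  --openstack-auth-url https://keystone.example.com:5000/v2.0 \
  --openstack-tenant-name demo \
  --openstack-username demo \
  --openstack-password secret \
  --openstack-flavor-name m1.small \
  --openstack-image-name ubuntu-14.04 \
  --openstack-net-name app-net \
  --openstack-floatingip-pool ext-net \
  --openstack-ssh-user ubuntu \
  docker-dev

# Point the local Docker client at the new remote engine and test it
eval "$(docker-machine env docker-dev)"
docker run hello-world
```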

Lastly, Lowe demos Terraform, which takes an infrastructure-as-code approach to production deployments across multiple platforms (similar to Heat for OpenStack). He creates an entire OpenStack infrastructure with a single command, including a new network and router, and launches multiple new instances, each with its own floating IP address, ready to pull down containers as required.
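A Terraform configuration along those lines, using the OpenStack provider as it looked around this era, might resemble the sketch below; every credential, name, and UUID is a placeholder, not taken from the demo:

```hcl
provider "openstack" {
  auth_url    = "https://keystone.example.com:5000/v2.0"
  tenant_name = "demo"
  user_name   = "demo"
  password    = "secret"
}

# A new tenant network and subnet for the instances
resource "openstack_networking_network_v2" "app_net" {
  name           = "app-net"
  admin_state_up = true
}

resource "openstack_networking_subnet_v2" "app_subnet" {
  network_id = "${openstack_networking_network_v2.app_net.id}"
  cidr       = "10.0.0.0/24"
  ip_version = 4
}

# Router uplinked to the external network so floating IPs are reachable
resource "openstack_networking_router_v2" "app_router" {
  name             = "app-router"
  external_gateway = "REPLACE-WITH-EXTERNAL-NET-UUID"
}

resource "openstack_networking_router_interface_v2" "app_iface" {
  router_id = "${openstack_networking_router_v2.app_router.id}"
  subnet_id = "${openstack_networking_subnet_v2.app_subnet.id}"
}

# Three instances, each paired with its own floating IP
resource "openstack_compute_floatingip_v2" "fip" {
  count = 3
  pool  = "ext-net"
}

resource "openstack_compute_instance_v2" "app" {
  count       = 3
  name        = "app-${count.index}"
  image_name  = "ubuntu-docker"
  flavor_name = "m1.small"
  floating_ip = "${element(openstack_compute_floatingip_v2.fip.*.address, count.index)}"

  network {
    uuid = "${openstack_networking_network_v2.app_net.id}"
  }
}
```

A single `terraform apply` then stands up the whole stack, and `terraform destroy` tears it down just as cleanly.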

“It’s very simple to use these development tools with OpenStack,” adds Roberts. “It’s just a matter of making sure your developers feel comfortable with the environment and letting them know there are plugins out there – you don’t have to keep using other infrastructures.”

As Roberts notes in the presentation, VMware itself is “full in on OpenStack – we contribute to projects that aren’t even in our own distribution just to help out the community.” Meanwhile, VMware’s own OpenStack solution – VMware Integrated OpenStack (VIO) – offers a DefCore-compliant OpenStack distribution specifically configured to use open source drivers to manage VMware infrastructure technologies, further easing adoption for developers already familiar with vSphere, vRealize, and NSX.

OVN (pronounced “oven”) is a rapidly growing, open source solution being developed by the Open vSwitch (OVS) community that provides network virtualization for OVS. While OVN isn’t designed to work with VMware Integrated OpenStack, it’s another OpenStack project to which VMware has been devoting time and effort, and definitely worth knowing about.

For a good sense of how OVN is progressing, check out this talk by four OVS community members at the 2016 OpenStack Summit. They explain how OVN works and why it’s worth trying.

VMware OVS developer Ben Pfaff kicks things off with an overview of network virtualization, emphasizing the value of being able to abstract a physical network and of making network provisioning self-service.

Fellow VMware engineer and core OVS and OVN developer Justin Pettit next outlines OVN’s capabilities and stresses its compatibility with the platforms that OVS already works with. When it comes to OpenStack, he reports, “the best integration that we have right now is with OpenStack Neutron but we plan to have it work with other CMSes … and you can do everything that you would want through the command line or through database calls that you can do through Neutron.”
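As a taste of the command-line interface Pettit mentions, `ovn-nbctl` manipulates OVN's northbound database to define logical topology directly. A minimal sketch, with made-up switch names and addresses:

```shell
# Create a logical switch and attach two logical ports to it
ovn-nbctl ls-add sw0
ovn-nbctl lsp-add sw0 sw0-port1
ovn-nbctl lsp-set-addresses sw0-port1 "00:00:00:00:00:01 192.168.0.11"
ovn-nbctl lsp-add sw0 sw0-port2
ovn-nbctl lsp-set-addresses sw0-port2 "00:00:00:00:00:02 192.168.0.12"

# Inspect the resulting logical topology
ovn-nbctl show
```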

Like OVS, OVN is open source and vendor-neutral, and has quickly gained support from a diverse group of vendors including VMware, IBM, Red Hat, and eBay, among others. The goal is to match OVS production quality and keep OVN’s design simple but scalable to thousands of hypervisors. “We hope it becomes the preferred method for most people who want to use OVS or networking in general,” Pettit says.

If successful, OVN will expand OVS, help improve Neutron’s functionality, and significantly reduce the development burden on Neutron for OVS integration. Add an improved architecture built around ‘logical flows’ and configuration coordinated through databases, and it’s set to outperform existing OVS networking plugins, Pfaff argues.

The same goes for security, adds Ryan Moats of IBM – OVN now uses a connection tracker, letting OVS manage stateful connections itself and speeding security group throughput significantly. Its L3 security group design also does all L3 processing in OVS, further improving performance.

The fourth speaker, Han Zhou of eBay, outlines how the group overcame a series of bottlenecks to scale the OVN control plane to 2,000 hypervisors, 20,000 VIF ports, and 200 logical switches operating at once.

The team then highlights ongoing scale improvements and profiles the OVN Neutron plugin. “We will run this in our public cloud,” says IBM’s Moats before outlining OVN deployment and what to look for in the upcoming OVN release. Finally, all four speakers invite their audience to contribute to OVN, and try it out for themselves.

VMware Integrated OpenStack is also available for testing in VMware’s Hands-on Lab. Or download it for free with a current license for vSphere Enterprise Plus, vSphere with Operations Management, or NSX with vSphere Standard.

In “OpenStack for VMware Administrators,” Roberts offers a valuable overview of what makes VMware technologies ideal for running OpenStack workloads and explains how VIO supplies everything you need to install, upgrade, and operate an OpenStack cloud on top of the VMware technologies you already own.

He opens with a review of VMware’s longtime commitment to OpenStack development and a reminder that the cloud platform must run on some kind of virtual infrastructure that supplies the underlying hypervisor, networking, and storage. VMware’s infrastructure technologies, he notes, are just as valid an option for OpenStack clouds as any other infrastructure platform.

VIO is a distribution of OpenStack that enables the open source drivers for VMware infrastructure by default, reliably connecting your cloud with signature VMware products like vSphere for compute and storage, NSX for networking, and vRealize management solutions for operations.

Most crucially, Roberts argues, VIO lets you pair your current VMware solutions with your new OpenStack cloud, saving the time and cost of adopting additional infrastructure technologies and increasing the scalability, availability, and reliability of your existing applications.

VMware supports customers whether they prefer a tightly-integrated approach (using VIO) or a more loosely-integrated framework (using another OpenStack solution, for example). “We want you to be successful with OpenStack on vSphere regardless of the distribution that you use,” Roberts says.

But VIO offers many distinct advantages for administrators already running VMware technology, he explains, including speed, reliability, ease of use, a single point of contact for support, and regular upgrades.

Roberts then hands over to Ken Rugg, CEO of database-as-a-service (DBaaS) platform company Tesora, to show how VIO is extended through partnerships with specialist organizations. Tesora enables and simplifies access to up to 15 popular databases from an OpenStack cloud via an enterprise-hardened version of OpenStack Trove.

The presentation ends with a brief demonstration of how to create a Trove database instance with Tesora – showing how the combination of Tesora + VIO makes it simple to provision a database for your application without deploying, configuring, and managing the database binaries yourself.
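The same Trove workflow can also be driven from the CLI rather than the dashboard; a hedged sketch, in which the instance name, flavor ID, size, and credentials are all illustrative:

```shell
# Create a 5GB MySQL instance named "appdb-instance" using flavor ID 2,
# with an initial database and user (all names illustrative)
trove create appdb-instance 2 --size 5 \
  --datastore mysql --datastore_version 5.6 \
  --databases appdb --users appuser:s3cret

# Watch it build, then fetch connection details
trove list
trove show appdb-instance
```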

The Amadeus IT Group is a multi-national IT service provider to the global travel industry with over 3 billion euros in revenue. Two years ago it embarked on a transformation project to modernize its infrastructure.

In their talk, Chaitanya and Knopper outline some of the business drivers for the project, which included readying Amadeus’ infrastructure to deploy next-generation, container-based cloud-native applications and building an entirely new, highly reliable hotel guest reservation system using Red Hat OpenShift PaaS.

Those drivers established a set of business requirements – speeding service delivery, instigating end-to-end automation, and ensuring 99.999% service uptime – along with technical requirements that included a fault-resilient application architecture based on OpenShift and Kubernetes and fast, automatic provisioning using OpenStack Heat.
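Heat provisioning of the kind those requirements call for is expressed as a YAML template. The following is an illustrative sketch only – resource names, image, and flavor are assumptions, not Amadeus’ actual templates:

```yaml
heat_template_version: 2015-10-15

description: Illustrative stack that boots a small group of identical servers

parameters:
  image:
    type: string
    default: ubuntu-14.04
  flavor:
    type: string
    default: m1.medium

resources:
  node_group:
    type: OS::Heat::ResourceGroup
    properties:
      count: 3
      resource_def:
        type: OS::Nova::Server
        properties:
          name: app-node-%index%
          image: { get_param: image }
          flavor: { get_param: flavor }
```

Passing a template like this to `heat stack-create` (or `openstack stack create`) provisions the whole group in one automated, repeatable step.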

Knopper details the variety of options (public cloud, alternative service providers, etc.) that Amadeus considered for meeting their requirements. But their best option, he explains, was to build a product architecture featuring an underlying VMware infrastructure running OpenStack workloads via VIO and NSX.

VMware’s technical reliability and the support it offered were crucial factors, says Knopper, as was Amadeus’ ability to leverage its existing experience with vSphere to get the project moving quickly.

The results have been impressive. Where it used to take weeks to bring up an application, Knopper notes, “with the solution we have at hand, this has been reduced down to around 50 minutes.” The new approach delivers the fault tolerance required and lets Amadeus deliver more frequent updates to their end users.

The talk winds up with suggestions for best practices for building private OpenStack clouds with VIO based on Amadeus’ experience, and an outline of their plans for continued technical improvement in partnership with VMware.

“What’s really important for success with OpenStack is having a clear driver for what you are trying to do, and then translating that into clear requirements,” emphasizes Chaitanya in conclusion. “Then if you have a very clear execution plan and break it into phases, your chances of success are high.”