As OpenStack’s 17th release comes to a cloud near you, Superuser answers some common questions about Queens and the ecosystem. Bonus answer, in case you were wondering: the name comes from Queens Park, a suburb of Sydney, Australia. (More on the release naming process here.)

What are the main trends in user adoption?

People want their clouds to do more than they used to, expanding to new workloads including machine learning and edge computing.

Integration is key. There’s growing demand from large enterprises to run mission-critical and backup applications in a cloud environment, so the Queens release has a lot of high-availability features: there’s a new project called Masakari and multi-attach in Cinder block storage. Multi-attach allows more than one virtual machine to attach to the same volume, so if one VM goes down you can rely on another VM accessing the same storage.
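As a rough sketch, enabling multi-attach from the OpenStack CLI looks like the following. The names `multiattach-type`, `shared-vol`, `vm-a` and `vm-b` are illustrative, and the backing storage driver must support the feature:

```shell
# Create a volume type whose extra spec marks its volumes as multi-attach capable
openstack volume type create multiattach-type \
  --property multiattach="<is> True"

# Create a volume of that type, then attach it to two different servers
openstack volume create --type multiattach-type --size 10 shared-vol
openstack server add volume vm-a shared-vol
openstack server add volume vm-b shared-vol
```

Note that the guest workload (for example a clustered filesystem) still has to coordinate concurrent writes; multi-attach only makes the shared attachment possible.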

What’s the current situation with GPU capabilities in OpenStack and where’s it headed?

People running GPUs today generally do it two ways:

PCI passthrough (the downside: it requires post-provisioning manual setup, so it can be time consuming)

Bare metal through Ironic, running workloads directly on GPU-equipped hardware

Early users of the underlying hardware include eBay and Commonwealth Bank. eBay is using Ironic to provision the hardware, layering TensorFlow and Kubernetes on top of the GPUs. Once these components are delivered as fully supported technologies inside OpenStack, it’s a pretty good indication the use case will go mainstream across a lot of different environments and organizations — and now it’s much easier to get started.

New for the Queens release is support for virtual GPUs. Nova now lets cloud administrators define flavors to request vGPU resources and set vGPU resolutions. The feature currently supports Intel GVT-g and NVIDIA GRID vGPU. See the Nova documentation for additional details.
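Per the Nova documentation, the rough flow is that an operator first enables a vGPU type on the compute node, then defines a flavor that requests a `VGPU` resource. A hedged sketch (the flavor name and vGPU type here are illustrative; the exact type string depends on your GPU and driver):

```shell
# On the compute node, enable a vGPU type in nova.conf:
# [devices]
# enabled_vgpu_types = nvidia-35

# Define a flavor that requests one virtual GPU
openstack flavor create --ram 4096 --disk 40 --vcpus 2 vgpu.small
openstack flavor set vgpu.small --property resources:VGPU=1
```

Instances booted with that flavor are then scheduled to hosts that can supply a vGPU.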

What enhancements in Queens are the most important for edge computing?

The three most relevant things for edge are OpenStack-Helm, LOCI and acceleration support, which will be a requirement in a lot of edge environments.

OpenStack-Helm containerizes OpenStack services and runs them in Kubernetes pods. (You can see how this works with a live upgrade demoed by AT&T at the Sydney Summit).

LOCI is a set of lightweight container images; the project takes a thinner approach to the container, offering smaller, much more portable images with a lower footprint. Operators can then add configuration from the outside and use Kubernetes to manage the overall service functionality.
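Building a LOCI image is essentially a single `docker build` against the project repository. A minimal sketch (the image tag is illustrative):

```shell
# Build a lightweight Keystone image directly from the LOCI repository
docker build https://git.openstack.org/openstack/loci.git \
  --build-arg PROJECT=keystone \
  --tag loci-keystone:latest
```

The resulting image contains the service code but no configuration, which is what lets operators inject configuration externally and keep the image itself small and portable.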

Together they make it easier to deploy a lot of OpenStack environments and to upgrade in an automated way as well as operate in a zero-touch way over the long term and with a smaller footprint than a traditional datacenter OpenStack deployment. Those attributes will all be very important in edge deployments.

There are a lot of existing projects for OpenStack and containers; where does OpenStack-Helm fit in?

OpenStack-Helm provides a collection of Helm charts and tools for managing the lifecycle of OpenStack on top of Kubernetes and running OpenStack projects as independent services. It’s a promising project for users who want to put OpenStack services at the edge, or who want to containerize OpenStack services for easier upgrade paths.
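A hedged sketch of deploying a single service with OpenStack-Helm, following the project's conventions at the time (Helm v2 syntax; chart paths and the `openstack` namespace are the project's defaults and may vary by release):

```shell
# From a checkout of the openstack-helm repository:
# build the chart and its dependencies, then deploy Keystone
# as an independent service in its own namespace
make keystone
helm install ./keystone --namespace=openstack --name=keystone
```

Each OpenStack service gets its own chart, so services can be installed, upgraded and rolled back independently through Kubernetes.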

Which end users also contributed code to Queens?

A total of 158 companies and organizations contributed to this release. Some end users who also contributed code include:

AT&T

Verizon

Walmart

Universidade Federal de Campina Grande

Boston University

China Mobile

Deutsche Telekom

MIT

Johns Hopkins University Applied Physics Laboratory

HFBank

Workday

University of Melbourne

What’s the main direction for the next release, Rocky?

This week, the development teams are in Dublin at the PTG to start working on the Rocky release. Although the 18th release of OpenStack is not due until the end of August, there are already a few planned features to get excited about:

Work on fast forward upgrades will continue; this is a method that lets users jump more than two releases ahead, with streamlined install paths that speed them through the intermediary releases to get where they want to be.

Minimum-bandwidth and bandwidth-based scheduling is a networking feature that may be coming; it will be of particular interest to NFV and cloud service providers for things like ensuring a minimum level of performance for streaming services.

One of the community wide goals for Rocky is to enable mutable configuration across services, which will let operators change configuration settings without restarting a service.
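In practice the expected operator workflow, sketched under the assumption that services reload mutable options on SIGHUP (the mechanism oslo.config provides), would look something like this:

```shell
# Edit a mutable option in the service's configuration file
# (for example a debug or logging setting in nova.conf), then
# signal the running service to reload instead of restarting it
sudo kill -HUP "$(pgrep -f nova-compute | head -n 1)"
```

Only options explicitly marked mutable are picked up on reload; anything else still requires a full service restart.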

Get involved!

Learn more about how organizations are using OpenStack at the upcoming Summit, meet the community and start contributing, or visit the Marketplace to find an OpenStack service provider.