
On the Monday of the Project Teams Gathering (PTG) in Dublin, a now somewhat familiar group of developers and operators got together to discuss upgrades – specifically fast forward upgrades, although discussion over the day drifted into rolling upgrades and how to minimize downtime in supporting components as well. This discussion has been a regular feature over the last 18 months at PTGs, Forums and Ops Meetups.

Fast Forward Upgrades?

So what is a fast forward upgrade? A fast forward upgrade takes an OpenStack deployment through multiple OpenStack releases without the requirement to run agents/daemons at each upgrade step; it does not allow you to skip an OpenStack release – the process simply avoids running each release as you pass through it. This enables operators using older OpenStack releases to catch up with the latest OpenStack release in as short a time as possible, accepting the compromise that the cloud control plane is down during the upgrade process.

This is in contrast to a rolling upgrade, where access to the control plane of the cloud is maintained during the upgrade process by upgrading units of a specific service individually, and by leveraging database migration approaches such as expand/migrate/contract (EMC) to provide as seamless an upgrade process as possible for an OpenStack cloud. In common with fast forward upgrades, releases cannot be skipped.
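As a concrete illustration of the EMC pattern, Neutron splits its schema migrations into expand and contract branches; a minimal sketch of a rolling upgrade using Neutron's upstream tooling (not any specific deployment project's process) looks like this:

# additive schema changes – safe to apply while the old release is still running
neutron-db-manage upgrade --expand

# ...upgrade and restart neutron-server units one at a time...

# remove the old schema once all units are running the new release
neutron-db-manage upgrade --contract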

Both upgrade approaches specifically aim to not disrupt the data plane of the cloud – instances, networking and storage – however this may be unavoidable if components such as Open vSwitch and the Linux kernel need to be upgraded as part of the upgrade process.

Deployment Project Updates

The TripleO team have been working towards fast forward upgrades during the Queens cycle and have a ‘pretty well defined model’ for what they’re aiming for with their upgrade process. They still have some challenges around ordering to minimize downtime, specifically around Linux and OVS upgrades.

The OpenStack Ansible team gave an update – they have a concept of ‘leap upgrades’, which is similar to fast forward upgrades; this work appears to lag behind the main upgrade path for OSA, which is a rolling upgrade approach aiming to be 100% online.

The OpenStack Charms team continue to have a primary upgrade focus on rolling upgrades, minimizing downtime as much as possible for both the control and data plane of the cloud. The primary focus for this team right now is supporting upgrades of the underlying Ubuntu OS between LTS releases, with the release of 18.04 imminent in April 2018, so no immediate work is planned on adopting fast forward upgrades.

The Kolla team also have a primary focus on rolling upgrades, for which support starts at OpenStack Queens or later. There was some general discussion around automated configuration generation using Oslo to ease migration between OpenStack releases.

No one was present to represent the OpenStack Helm team.

Keeping Networking Alive

Challenges around keeping the Neutron data plane alive during an upgrade were discussed – this included:

Use of the ‘neutron-ha-tool’ from AT&T to manage routers across network nodes during an OpenStack cloud upgrade – there was also a bit of bikeshedding on approaches to Neutron router HA in larger clouds. Plans are afoot to make this part of the Neutron code base.

Ceph Upgrades

We had a specific slot to discuss upgrading Ceph as part of an OpenStack cloud upgrade; some deployment projects upgrade Ceph first (Charms), some last (TripleO), but there was general agreement that Ceph upgrades are pretty much always a rolling upgrade – i.e. no disruption to the storage services being provided. Generally there seems to be less pain in this area, so it was not a long session.
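As a general sketch (not specific to any of the deployment projects discussed), a rolling Ceph upgrade usually pins data placement while daemons restart:

# stop CRUSH from rebalancing while OSDs bounce during the upgrade
ceph osd set noout

# upgrade and restart monitors first, then OSDs one host at a time, then:
ceph osd unset noout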

Operator Feedback

A number of operators shared experiences of walking their OpenStack deployments through fast forward upgrades including some of the gotchas and trip hazards encountered.

Oath provided a lot of feedback on their experience of fast forward upgrading their cloud from Juno to Ocata, which included some increased complexity due to the move to using cells internally for Ocata. Ensuring compatibility between OpenStack and supporting projects was one challenge encountered – for example, snapshots worked fine with Juno and Libvirt 1.5.3, however on upgrade live snapshots were broken until Libvirt was upgraded to 2.9.0. Not all test combinations are covered in the gate!

Upgrade SIG

Upgrade discussion has become a regular fixture at PTGs, Forums, Summits and Meetups over the last few years; getting it right is tricky, and the general feeling in the session was that this is something we should talk about more between events.

The formation of an Upgrade SIG was proposed and supported by key participants in the session. The objective of the SIG is to improve the overall upgrade process for OpenStack Clouds, covering both offline ‘fast-forward’ and online ‘rolling’ upgrades by providing a forum for cross-project collaboration between operators and developers to document and codify best practice for upgrading OpenStack.

The SIG will initially be led by Lujin Luo (Fujitsu), Lee Yarwood (Red Hat) and myself (Canonical) – we’ll be sorting out the schedule for bi-weekly IRC meetings in the next week or so – OpenStack operators and developers from across all projects are invited to participate in the SIG and help move OpenStack life cycle management forward!

This summary is intended to be a regular communication of activities and plans happening in and around Ubuntu OpenStack, covering but not limited to the distribution and deployment of OpenStack on Ubuntu.

If there is something that you would like to see covered in future summaries, or you have general feedback on content please feel free to reach out to me (jamespage on Freenode IRC) or any of the OpenStack Engineering team at Canonical!

Development Release

Since the last dev summary, OpenStack Queens Cloud Archive pockets have been set up and have received package updates for the first and second development milestones – you can install them on Ubuntu 16.04 LTS using:

sudo add-apt-repository cloud-archive:queens[-proposed]

OpenStack Queens will also form part of the Ubuntu 18.04 LTS release in April 2018, so alternatively you can try out OpenStack Queens using Ubuntu Bionic directly.

You can always test with up-to-date packages built from project branches using the Ubuntu OpenStack testing PPAs:

sudo add-apt-repository ppa:openstack-ubuntu-testing/queens

Nova LXD

No significant feature work to report on since the last dev summary.

The OpenStack Ansible team have contributed an additional functional gate for nova-lxd – it’s currently non-voting, but does provide some additional testing feedback for nova-lxd developers during the code review process. If it proves stable and useful, we’ll make this a voting check/gate.

OpenStack Charms

Ceph charm migration

Since the last development summary, the Charms team released the 17.11 set of stable charms; this includes a migration path for users of the deprecated ceph charm to using ceph-mon and ceph-osd. For full details on this process check out the charm deployment guide.

Queens Development

As part of the 17.11 charm release a number of charms switched to execution of charm hooks under Python 3 – this includes the nova-compute, neutron-{api,gateway,openvswitch}, ceph-{mon,osd} and heat charms; once these have had some battle testing, we’ll focus on migrating the rest of the charm set to Python 3 as well.

Charm changes to support the second Queens milestone (mainly in ceilometer and keystone) and Ubuntu Bionic are landing into charm development to support ongoing testing during the development cycle. OpenStack Charm deployments for Queens and later will default to using the Keystone v3 API (v2 has been removed as of Queens). Telemetry users must deploy Ceilometer with Gnocchi and Aodh as the Ceilometer API has now been removed from charm based deployments and from the Ceilometer codebase. You can install the current tip of charm development using the openstack-charmers-next prefix for charm store URLs – for example:

juju deploy cs:~openstack-charmers-next/neutron-api

ZeroMQ support has been dropped from the charms; with no known users, no functional testing in the gate, and deprecation warnings already issued in release notes, it was time to drop the associated code from the code base. PostgreSQL and deploy-from-source support are also expected to be removed from the charms this cycle.

You can read the full list of specs currently scheduled for Queens here.

Releases

The last stable charm release went out at the end of November including the first stable release of the Gnocchi charm – you can read the full details in the release notes. The next stable charm release will take place in February alongside OpenStack Queens, with a release shortly after the Ubuntu 18.04 LTS release in May to sweep up any pending LTS support and fixes needed.

IRC (and meetings)

As always, you can participate in the OpenStack charm development and discussion by joining the #openstack-charms channel on Freenode IRC; we also have a weekly development meeting in #openstack-meeting-4 at either 1000 UTC (odd weeks) or 1700 UTC (even weeks) – see http://eavesdrop.openstack.org/#OpenStack_Charms for more details. The next IRC meeting will be on the 8th of January at 1700 UTC.

Next week at the OpenStack Summit in Sydney we have a few sessions scheduled for the OpenStack Charms project.

If you’re new to OpenStack deployment using Juju and the OpenStack Charms then the general project update on Tuesday at 3.20 pm would be a good introduction. The session is only 20 minutes long so it won’t take up too much of your day – Ryan and I will be doing a short 101 and providing some detail on new features for Pike and plans for Queens!

If you would like to get involved with OpenStack Charm development then pop along to the project on-boarding session at 3.10 pm on Monday – this session will be much more hands on and we’ll drive content based on what participants need, rather than having a fixed agenda.

If you’re an OpenStack Charm user and would like the opportunity to provide direct feedback to the development team then please come and tell us what you like and don’t like in the operators feedback session in the Forum on Tuesday at 9.50 am.

Looking forward to another great summit and seeing the other side of the planet for the first time – see you all in Sydney next week!


Development Release

OpenStack Pike released in August and is installable on Ubuntu 16.04 LTS using the Ubuntu Cloud Archive:

sudo add-apt-repository cloud-archive:pike

OpenStack Pike also forms part of the Ubuntu 17.10 release later this month; final charm testing is underway in preparation for full Artful support for the charm release in November.

We’ll be opening the Ubuntu Cloud Archive for OpenStack Queens in the next two weeks; the first uploads will be the first Queens milestones, which will coincide nicely with the opening of the next Ubuntu development release (which will become Ubuntu 18.04 LTS).

OpenStack Snaps

The main focus in the last few weeks has been on testing of the gnocchi snap, which is currently installable from the edge channel:

sudo snap install --edge gnocchi

The gnocchi snap provides the gnocchi-api (nginx/uwsgi deployed) and gnocchi-metricd services; due to some incompatibilities between gnocchi/cradox/python-rados the snap is currently based on the 3.1.11 release. Hopefully we should work through the issues with the 4.0.x release in the next week or so, as well as having multiple tracks set up for this snap so you can consume a version known to be compatible with a specific OpenStack release.

Nova LXD

The team is currently planning work for the Queens development cycle; pylxd has received a couple of new features – specifically support for storage pools as provided in newer LXD versions, and streaming of image uploads to LXD which greatly reduces the memory footprint of client applications during uploads.

OpenStack Charms

Queens Planning

Out of the recent Queens PTG, we have a number of feature specs landed in the charms specification repository. There are a few more in the review queue; if you’re interested in plans for the Queens release of the charms next year, this is a great place to get a preview and provide the team feedback on the features that are planned for development.

Deployment Guide

The first version of the new Charm Deployment Guide has now been published to the OpenStack Docs website; we have a small piece of followup work to complete to ensure it’s published alongside other deployment project guides, but hopefully that should wrap up in the next few days. Please give the guide a spin and log any bugs that you might find!

Bugs

Over the last few weeks there has been an increased level of focus on the current bug triage queue for the charms; from a peak of 600 open bugs two weeks ago, with around 100 pending triage, we’ve closed out 70 bugs and the triage queue is down to a much more manageable level. The recently introduced bug triage rota has helped with this effort and should ensure we keep on top of incoming bugs in the future.

Releases

In the run-up to the August charm release, a number of test scenarios which previously required manual execution were automated as part of the release testing activity; this automation work reduces the effort to produce the release, and means that the majority of test scenarios can be run on a regular basis. As a result, we’re going to move back to a three month release cycle; the next charm release will be towards the end of November after the OpenStack Summit in Sydney.


Last week, a number of the OpenStack Charms team and I had the pleasure of attending the OpenStack Project Teams Gathering in Denver, Colorado.

The first two days of the PTG were dedicated to cross project discussions, with the last three days focused on project specific discussion and work in dedicated rooms.

Here’s a summary of the charm related discussion over the week.

Cross Project Discussions

Skip Level Upgrades

This topic was discussed at the start of the week, in the context of supporting upgrades across multiple OpenStack releases for operators. What was immediately evident was that this was really a discussion around ‘fast-forward’ upgrades, rather than actually skipping any specific OpenStack series as part of a cloud upgrade. Deployments would still need to step through each OpenStack release series in turn, so the discussion centred around how to make this much easier for operators and deployment tools to consume than it has been to date.

There was general agreement on the principle that all steps required to update a service between series should be supported whilst the service is offline – i.e. all database migrations can be completed without the services actually running; this would allow multiple upgrade steps to be completed without having to start services up on interim steps. Note that a lot of projects already support this approach, but it’s never been agreed as a general policy as part of the ‘supports-upgrade‘ tag, which was one of the actions resulting from this discussion.
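Taking Keystone as a hypothetical example, a fast forward step through two releases would then look something like this, with the service only started again once the final release’s migrations have completed:

# stop the API (keystone typically runs under apache/mod_wsgi)
sudo systemctl stop apache2

# upgrade packages to release N+1, then run migrations offline
sudo keystone-manage db_sync

# upgrade packages to release N+2, then run migrations again
sudo keystone-manage db_sync

# start services once, on the target release only
sudo systemctl start apache2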

In the context of the OpenStack Charms, we already follow something along these lines for minimising the amount of service disruption in the control plane during OpenStack upgrades; with implementation of this approach across all projects, we can avoid having to start up services on each series step as we do today, further optimising the upgrade process delivered by the charms for services that don’t support rolling upgrades.

Policy in Code

Most services in OpenStack rely on a policy.{json,yaml} file to define the policy for role based access into API endpoints – for example, what operations require admin level permissions for the cloud. Moving all policy default definitions to code rather than in a configuration file is a goal for the Queens development cycle.

This approach will make adapting policies as part of an OpenStack Charm based deployment much easier, as we only have to manage the delta on top of the defaults, rather than having to manage the entire policy file for each OpenStack release. Notably Nova and Keystone have already moved to this approach during previous development cycles.
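The oslo.policy tooling can regenerate the in-code defaults for reference at each release, so a deployment only needs to carry its site-specific overrides:

# dump the current policy defaults for a service (keystone as an example)
oslopolicy-sample-generator --namespace keystone --output-file policy.yaml.sample

# the deployed policy file then only contains the overridden rules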

Deployment (SIG)

During the first two days, some cross deployment tool discussions were held on a variety of topics; of specific interest for the OpenStack Charms was the discussion around health/status middleware for projects, so that the general health of a service can be assessed via its API – this would cover in-depth checks such as access to database and messaging resources, as well as access to other services that the checked service might depend on – for example, can Nova access Keystone’s API for authentication of tokens etc. There was general agreement that this was a good idea, and it will be proposed as a community goal for the OpenStack project.
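For reference, oslo.middleware already ships a basic healthcheck middleware which some services can expose today; the proposal extends this idea to deeper dependency checks. A hypothetical check against a Nova API endpoint might look like:

curl http://nova-api.example.com:8774/healthcheck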

OpenStack Charms Devroom

Keystone: v3 API as default

The OpenStack Charms have optionally supported Keystone v3 for some time; the Keystone v2 API is officially deprecated, so we discussed the approach for switching the default API deployed by the charms going forward. In summary:

New deployments should default to the v3 API and associated policy definitions

Existing deployments that get upgraded to newer charm releases should not switch automatically to v3, limiting the impact on services built around v2-based deployments already in production.

The charms already support switching from v2 to v3, so v2 deployments can upgrade as and when they are ready to do so.

At some point in time, we’ll have to automatically switch v2 deployments to v3 on OpenStack series upgrade, but that does not have to happen yet.
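For existing deployments that are ready to make the move, the switch should be a single configuration change – this sketch assumes the keystone charm’s preferred-api-version option:

juju config keystone preferred-api-version=3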

Keystone: Fernet Token support

The charms currently only support UUID based tokens (since PKI was dropped from Keystone); the preferred format is now Fernet, so we should implement this in the charms – we should be able to leverage the existing PKI key management code to an extent to support Fernet tokens.
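Keystone itself already provides the key management commands that the charms would need to orchestrate across units:

# create the initial key repository on the lead unit
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone

# rotate keys periodically; new keys must then be synced out to all units
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone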

Stable Branch Life-cycles

Currently the OpenStack Charms team actively maintains two branches – the current development focus in the master branch, and the most recent stable branch – which right now is stable/17.08. At the point of the next release, the stable/17.08 branch is no longer maintained, being superseded by the new stable/XX.XX branch. This is reflected in the promulgated charms in the Juju charm store as well. Older versions of charms remain consumable (albeit there appears to be some trimming of older revisions which needs investigating). If a bug is discovered in a charm version from an inactive stable branch, the only course of action is to upgrade to the latest stable version for fixes, which may also include new features and behavioural changes.

There are some technical challenges with regard to consumption of multiple stable branches from the charm store – we discussed using a different team namespace for an ‘old-stable’ style consumption model, which is not that elegant, but would work. Maintaining more branches means more resource effort for cherry-picks and reviews, which is not feasible with the current amount of time the development team has for these activities, so no change for the time being!

Service Restart Coordination at Scale

tl;dr no one wants enabling debug logging to take out their rabbits

When running the OpenStack Charms at scale, parallel restarts of daemons for services with large numbers of units (we specifically discussed hundreds of compute units) can generate a high load on underlying control plane infrastructure as daemons drop and re-connect to message and database services, potentially resulting in service outages. We discussed a few approaches to mitigate this specific problem, but ended up focusing on how we could implement a feature which batches up restarts of services into chunks based on a user provided configuration option.
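No such feature exists in the charms yet, but the shape of the idea can be sketched from the operator side with plain Juju commands (the batch size and application name here are illustrative):

# restart nova-compute units ten at a time rather than all at once
juju status nova-compute --format=json \
  | jq -r '.applications["nova-compute"].units | keys[]' \
  | xargs -n10 | while read -r batch; do
      for unit in $batch; do
        juju run --unit "$unit" 'systemctl restart nova-compute' &
      done
      wait
    done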

We also had some good conversation around how unit level overrides for some configuration options would be useful – supporting the use case where a user wants to enable debug logging for a single unit of a service (maybe it’s causing problems) without having to restart services across all units to support this. This is not directly supported by Juju today – but we’ll make the request!

Cross Model Relations – Use Cases

We brainstormed some ideas about how we might make use of the new cross-model relation features being developed for future Juju versions; some general ideas:

Multiple Region Cloud Deployments

Keystone + MySQL and Dashboard in one model (supporting all regions)

Each region (including region-specific control plane services) deployed into a different model and controller, potentially using different MAAS deployments in different DCs.

Keystone Federation Support

Use of Keystone deployments in different models/controllers to build out federated deployments, with one lead Keystone acting as the identity provider to other peon Keystones in different regions or potentially completely different OpenStack Clouds.

We’ll look to use the existing relations for some of these ideas, so as the implementation of this feature in Juju becomes more mature we can be well positioned to support its use in OpenStack deployments.
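As a very rough sketch of the multi-region idea using the cross-model relation commands emerging in Juju (offer names and endpoints here are illustrative, not tested):

# in the central model: offer keystone's identity endpoint
juju offer keystone:identity-service

# in a region model (potentially on another controller): consume and relate
juju consume admin/central.keystone
juju add-relation nova-cloud-controller keystone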

Deployment Duration

We had some discussion about the length of time taken to deploy a fully HA OpenStack Cloud onto hardware using the OpenStack Charms and how we might improve this by optimising hook executions.

There was general agreement that scope exists in the charms to improve general hook execution time – specifically in charms such as RabbitMQ and Percona XtraDB Cluster which create and distribute credentials to consuming applications.

We also need to ensure that we’re tracking any improvements with good baseline metrics on charm hook execution times on reference hardware deployments, so that any proposed changes to charms can be assessed in terms of positive or negative impact on individual unit hook execution time and overall deployment duration – expect some work in CI over the next development cycle to support this.

As a follow up to the PTG, the team is looking at whether we can use the presence of a VIP configuration option to signal to the charm to postpone any presentation of access relation data to the point after which HA configuration has been completed and the service can be accessed across multiple units using the VIP. This would potentially reduce the number (and associated cost) of interim hook executions due to pre-HA relation data being presented to consuming applications.

Mini Sprints

On the Thursday of the PTG, we held a few mini-sprints to get some early work done on features for the Queens cycle; specifically we hacked on:

The OpenStack Charms team is pleased to announce that the 17.08 release of the OpenStack Charms is now available from jujucharms.com!

In addition to 204 bug fixes across the charms and support for OpenStack Pike, this release includes a new charm for Gnocchi, support for Neutron internal DNS, Percona Cluster performance tuning and much more.

For full details of all the new goodness in this release please refer to the release notes.

The Ubuntu OpenStack team at Canonical is pleased to announce the general availability of OpenStack Pike for Ubuntu 16.04 LTS via the Ubuntu Cloud Archive. Details of the Pike release can be found in the OpenStack release notes for Pike.

Ubuntu 16.04 LTS

You can enable the Ubuntu Cloud Archive pocket for OpenStack Pike on Ubuntu 16.04 LTS installations by running the following commands:
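sudo add-apt-repository cloud-archive:pike
sudo apt update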

Firstly, apologies for the lack of an update two weeks ago; your author’s attempt to schedule publication whilst he was on holiday failed miserably – so this edition is a bit of a wrap-up of the last 3-4 weeks of activities.


Expect new point releases for Ceph (10.2.9) and Open vSwitch (2.5.3) for the next SRU cycle in September.

Development Release

OpenStack Pike release packages should be available in the Ubuntu Cloud Archive and in Ubuntu Artful this week, along with Ceph Luminous 12.2.0, Open vSwitch 2.8.0~ and updates to the latest libvirt and qemu versions; you can test with them today in the proposed testing area:

sudo add-apt-repository cloud-archive:pike-proposed

We’ll make a more detailed release announcement once final testing has been completed and package updates are available in the -updates pocket.

Remember that it’s also possible to consume OpenStack packages built from the tip of the upstream git repository master branches using:

sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Packages are automatically built for Artful and Xenial.

OpenStack Snaps

We’ve been polishing the snapstack testing tool, working on decoupling it from the snap-test scripts, in preparation for deprecating snap-test. We’ve also improved its performance and reliability when running in the OpenStack Gerrit gate by switching over to using smaller, lighter cirros images, and using tarball URLs that route through the zuul reverse proxy whenever possible. We’ve also added some features to support better use of snapstack from behind proxies.

Work has also begun on the Gnocchi snap, which will be used in the upcoming Gnocchi charm!

Nova LXD

The Pike release of nova-lxd (16.0.0) was made on the 30th of August; this will be available in Ubuntu Artful and the Pike Cloud Archive for Ubuntu 16.04 LTS.

OpenStack Charms

Pike Release

We’re right on top of the release of OpenStack Pike – the charm release will happen the week after the main OpenStack release on the 7th of September. Feature freeze was on the 24th August so development has shifted away from feature work towards working through the bug backlog. Look for more details in the release notes for the Charm release next week.


We’ve had a few questions about support for OpenStack Newton now that Ubuntu Yakkety has reached end-of-life. OpenStack Newton is supported via the Ubuntu Cloud Archive for Ubuntu 16.04 LTS for a further 9 months (18 months total).

Development Release

The second Ceph Luminous RC (12.1.1) has been uploaded to Artful and has been backported to the Ubuntu Cloud Archive for Pike. Time to start testing out Bluestore as an alternative block device format for Ceph OSDs – which coincidentally you can use via the current development versions of the Ceph charms.
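Assuming the development ceph-osd charm exposes Bluestore as a config option, enabling it for testing should be a one-liner:

juju config ceph-osd bluestore=true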

OpenStack Pike milestone 3 releases this week; expect packages in the Ubuntu Cloud Archive for Pike in the first half of next week.

Remember that it’s also possible to consume packages built from the tip of the upstream git repository master branches using:

sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Packages are automatically built for Artful and Xenial.

OpenStack Snaps

The OpenStack snap set got a new member over the last two weeks – cinder. Tracks have now been set up in the Snap store for Ocata, Pike and Queens, with automatic publishing to the ocata/edge channel right now.
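Tracks map directly onto snap channels, so installing a release-specific build looks like:

sudo snap install --channel=ocata/edge cinder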

snapstack underwent some refactoring over the last two weeks and has officially moved from ‘prototype’ to ‘beta’ status; reviews are up to enable snapstack testing in the gate for selected snaps – expect to see more over the next few weeks.

Nova LXD

Not much to report for Nova LXD this time. Testing is still underway for the charm changes to support use of Cinder/Ceph storage back-ends with the Nova-LXD driver but those changes should still land for the upcoming charm release at the end of August.

OpenStack Charms

Pike Release

We’re only a month away from the release of OpenStack Pike, and the charm release typically goes out on the same day. Development is starting to shift away from feature work towards working through the bug backlog – feature freeze is on the 17th August, and we have a few final features to get development completed on between now and then.


Development Release

The first Ceph Luminous RC (12.1.0) has been uploaded to Artful and will be backported to the Ubuntu Cloud Archive for Pike soon.

OpenStack Pike b3 is due towards the end of July; we’ve done some minor dependency updates to support progression towards that goal. It’s also possible to consume packages built from the tip of the upstream git repository master branches using:

sudo add-apt-repository ppa:openstack-ubuntu-testing/pike

Packages are automatically built for Artful and Xenial.

OpenStack Snaps

Refactoring to support the switch back to strict mode snaps has been completed. Corey posted last week on ‘OpenStack in a Snap’ so we’ll not cover too much in this update; have a read to get the full low down.

Work continues on snapstack (the CI test tooling for OpenStack snap validation and testing), with changes landing this week to support Class-based setup/cleanup for the base cloud and a logical step/plan method for creating tests.

The move of snapstack to a class-based setup/cleanup approach for the base cloud enables flexibility, as the base cloud required to test a snap can easily be updated. By default a snap’s tests are given a standard OpenStack base cloud; this can now easily be manipulated to add or remove services.

The snapstack code has also been updated to use a step/plan method for creating tests. These objects provide a simple and logical process for creating tests. The developer can now define the snap being tested, and its scripts/tests, in a step object. Each base snap and its scripts/tests are also defined in individual step objects. All of these steps are then put together into a plan object, which is executed to kick off the deployment and tests.

For more details on snapstack you can check out the snapstack code here.

Nova LXD

The refactoring of the VIF plugging codebase to provide support for Linuxbridge and Open vSwitch + the native OVS firewall driver has landed for Pike; this corrects a number of issues in the VIF plugging workflow between Neutron and Nova(-LXD) for these specific tenant networking configurations.

The nova-lxd subteam have also done some much needed catch-up on pull requests for pylxd (the underlying Python binding for LXD that nova-lxd uses); pylxd 2.2.4 is now up on pypi and includes fixes for improved forward compatibility with new LXD releases and support for passing network timeout configuration for API calls.

Work is ongoing to add support for LXD storage pools into pylxd.

OpenStack Charms

New Charms

Gnocchi will support deployment with MySQL (for indexing), Ceph (for storage) and Memcached (for coordination between Gnocchi metricd workers). We’re taking the opportunity to review and refresh the telemetry support across all of the charms, ensuring that the charms are using up-to-date configuration options and are fully integrated for telemetry reporting via Ceilometer (with storage in Gnocchi). This includes adding support for the Keystone, Rados Gateway and Swift charms. We’ll also be looking at the Grafana Gnocchi integration and hopefully coming up with some re-usable sets of dashboards for OpenStack resource metric reporting.
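As a rough sketch of what a minimal Gnocchi deployment could look like with the development charms (relation endpoints are inferred, so treat this as illustrative):

juju deploy cs:~openstack-charmers-next/gnocchi
juju add-relation gnocchi mysql
juju add-relation gnocchi ceph-mon
juju add-relation gnocchi memcached
juju add-relation gnocchi keystone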

Deployment Guide

Thanks to help from Graham Morrison in the Canonical docs team, we now have a first cut of the OpenStack Charms Deployment Guide – you can take a preview look in its temporary home until we complete the work to move it up under docs.openstack.org.

This is very much a v1, and the team intends to iterate on the documentation over time, adding coverage for things like high-availability and network space usage both in the charms and in the tools that the charms rely on (MAAS and Juju).
