TripleO - OpenStack on OpenStack

TripleO is a program aimed at installing, upgrading and operating OpenStack clouds using OpenStack's own cloud facilities as the foundations - building on nova, neutron and heat to automate fleet management at datacentre scale (and scaling down to as few as 2 machines).

TripleO is raw but usable today - see our tripleo-incubator for deployment instructions.

Folks working on TripleO are contributing to Nova, Neutron, Heat and Ironic to ensure they have the facilities needed to deploy to bare metal at scale. We also shepherd a small number of projects ourselves:

tripleo-incubator (docs) - our incubator - new code lives here until we decide on the right long-term home for it.

Design

Our overall story is to invest in robust, solid automation so that we can do continuous integration and deployment testing of a cloud at the bare metal layer, then deploy the very same tested images to production clouds using nova baremetal (now ironic) rather than a separate management stack. This leads to shared expertise in deployments both in the cloud and of the cloud. Because we can set up OpenStack in a fully HA environment, we can host the baremetal cloud used to deploy OpenStack in itself, giving a fully self-sustaining HA cluster. On top of that we intend to build out a solid operations story - baseline monitoring autoconfigured as the overcloud (the cloud we deploy on top of the bare-metal "under" cloud) scales up.

TripleO contributor cloud

Team responsibilities

As a team we are responsible for the design and quality of the code we're creating, and for responding to critical bugs and security issues in it, doing reviews, triaging bugs and generally supporting our users.

However, we also have things we haven't released yet that are in the same codebases, and we don't want to run ourselves ragged treating every bug as a regression, unless it's actually in something we've delivered and are maintaining.

Maintained features/projects

Regressions in these things are firedrills which (as a team) we need to hop on and fix ASAP. If you find one, please report it to us as a Critical bug on https://bugs.launchpad.net/tripleo/+filebug. If you're a TripleO contributor and you find one, or see that one has been reported, please add a Firedrill card to the [TripleO kanban] (Kanban is an experiment at the moment, but so far we're finding it pretty useful).

If a particular TripleO endeavour isn't listed here, it's not yet supported. If you want it to be supported, add an item for it to the next TripleO meeting.

diskimage-builder

os-collect-config

os-apply-config

os-refresh-config

tripleo-image-elements

tripleo-heat-templates

The TripleO Cloud MVP2: ATCs should have usercodes, and the cloud resets entirely every hour.

toci-identified devtest story issues *within the TripleO code*. We'll move to supporting everything once we're in the integrated gate.

Stable releases of OpenStack

We have a nascent plan to provide [stable branches] of tripleo-incubator - like other OpenStack projects - which should be going live very soon.

Interoperability

A key goal of ours is to play nice with folks who already have deep investment in operational areas - such as automation via Chef/Puppet/Salt, or monitoring via icinga/assimilator etc. We're ensuring we have clean interfaces that alternative implementations can be plugged into [e.g. you can use Chef/Puppet/Salt to do the in-instance configuration of a golden TripleO disk image].

Blueprints

Review team

Anyone can do reviews, but only the 'tripleo-core' team can approve them to land. We operate with the OpenStack-standard two +2's, except in, well, exceptional circumstances. Where multiple people collaborate on a single patch, one of the +2's must come from someone who isn't an author of the patch.

As a guideline, we follow the standard Review Checklist. We also have an expedited approval process for changes where there is general consensus and only minor implementation changes are needed. Details can be found in the TripleO Review Guidelines.

We don't use wishlist: things we'd like to do and things that we do wrong are both defects. Except for regressions, the priority of the work is not affected by whether we've delivered the thing or not, and using wishlist just serves to flatten the priority of all unimplemented things into one bucket, which is not helpful.

If you have an Ubuntu mirror close to you, you'll probably want to set DIB_DISTRIBUTION_MIRROR to point at it. If you don't, or if you're more bandwidth-constrained than disk-space-constrained, you might find it worth your time to use apt-mirror to create a local mirror.

You'll almost certainly want to add the pip-cache element to DIB_COMMON_ELEMENTS in order to avoid re-downloading python requirements.

To save these settings, create a file called ~/.devtestrc - devtest.sh will source this file automatically. If you're running the devtest_*.sh scripts by hand, remember to source this before you source devtest_variables.sh:
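As a rough illustration, a minimal ~/.devtestrc combining the settings above might look like this (the mirror URL is an example - point it at your own mirror):

 # ~/.devtestrc - sourced automatically by devtest.sh
 # Use a nearby Ubuntu mirror when diskimage-builder builds images
 export DIB_DISTRIBUTION_MIRROR=http://mirror.example.com/ubuntu
 # Cache python requirements between image builds
 export DIB_COMMON_ELEMENTS="$DIB_COMMON_ELEMENTS pip-cache"

If you run the devtest_*.sh scripts by hand, source it yourself before the variables file (the path to devtest_variables.sh depends on where your tripleo-incubator checkout lives):

 source ~/.devtestrc
 source devtest_variables.sh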

Read https://wiki.openstack.org/wiki/Gerrit_Workflow. It's a lot simpler than it looks! You'll need to follow the workflow in order to get the above changes approved. Don't try to remember it all at once - it's simpler to keep referring back to it as you walk through your first review.
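As a minimal sketch of that workflow (assuming you have git-review installed and a local checkout of the project; the branch name below is just an example):

 # one-time setup: install git-review and add the gerrit remote
 pip install git-review
 git review -s

 # make your change on a topic branch
 git checkout -b my-devtest-fix
 # ... edit files ...
 git commit -a        # write a clear commit message

 # push the change to Gerrit for review
 git review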