Continuous Lifecycle Conference 2017


A couple of weeks ago, I went to the second day of Continuous Lifecycle, a conference about all things Continuous Delivery, Continuous Integration and Continuous <insert word here>.

I’ve written up some notes on the talks I went to below - hopefully you’ll find some of them useful or interesting, and sorry if any of them aren’t clear. I thought the conference was good overall and fairly well organised. I’d recommend it to anyone interested in DevOps stuff!

Serverless is supposedly the Next Big Thing™ - containers were the last Next Big Thing™ and machine learning is the next Next Big Thing™.

The serverless idea combines functions as a service (e.g. AWS Lambda) with backend as a service (e.g. DynamoDB) to create full applications.

AWS Lambda is one provider; IBM OpenWhisk, Azure Functions and Google Cloud Functions provide their own alternatives.

Serverless is supposedly cheaper as you only pay per execution rather than paying to keep a server running all the time. The speaker used this model to compare some example prices.
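
As a rough illustration of the pay-per-execution arithmetic (the numbers below are my own assumptions for the sake of the sums, not the speaker’s figures - check the current AWS pricing pages for real ones):

```python
# Rough, illustrative comparison of Lambda vs an always-on server,
# ignoring the free tier. All prices below are assumptions.

requests_per_month = 1_000_000
avg_duration_s = 0.2          # 200 ms per invocation
memory_gb = 0.128             # 128 MB function

price_per_million_requests = 0.20    # USD, assumed
price_per_gb_second = 0.00001667     # USD, assumed

lambda_cost = (requests_per_month / 1_000_000) * price_per_million_requests \
    + requests_per_month * avg_duration_s * memory_gb * price_per_gb_second

ec2_hourly = 0.012                   # USD, assumed small instance
ec2_cost = ec2_hourly * 24 * 30      # always-on for a month

print(f"Lambda: ${lambda_cost:.2f}/month vs always-on: ${ec2_cost:.2f}/month")
```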

He showed an example serverless app with the static assets hosted in an S3 bucket which called a Lambda to get some data.
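
For flavour, here’s roughly what that data-fetching Lambda might look like in Python - this is my own sketch, not the speaker’s code, and the `items` DynamoDB table is hypothetical:

```python
import json
import boto3

# Hypothetical DynamoDB table backing the static S3 site.
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("items")

def handler(event, context):
    # Scan is fine for a tiny demo table; real code would Query on a key.
    result = table.scan(Limit=25)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(result["Items"], default=str),
    }
```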

He talked about how you can have different environments (e.g. prod, test, dev) by using different versions of the same Lambda (e.g. v6 = dev, v5 = test, v4 = prod) with the config being stored in environment variables, S3 or DynamoDB.
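
As a sketch of the config side (the variable names are my own invention), the function just reads whatever settings the environment it was deployed to provides:

```python
import os

# Hypothetical per-environment settings, injected as Lambda environment
# variables (set differently for each dev/test/prod version).
STAGE = os.environ.get("STAGE", "dev")
TABLE_NAME = os.environ.get("TABLE_NAME", f"items-{STAGE}")
API_BASE_URL = os.environ.get("API_BASE_URL", "https://example.com")

def handler(event, context):
    return {"stage": STAGE, "table": TABLE_NAME, "api": API_BASE_URL}
```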

Having all your logic in Lambdas makes it harder to unit test, due to having to mock so many other Lambdas. Integration testing sounds better but would cost money.
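
To illustrate the mocking pain, here’s a hedged sketch of unit testing a handler that invokes a downstream Lambda - the `pricing` function and the field names are made up:

```python
import json
from unittest import mock

import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")

def handler(event, context):
    # Calls a hypothetical downstream 'pricing' Lambda.
    response = lambda_client.invoke(
        FunctionName="pricing",
        Payload=json.dumps({"order_id": event["order_id"]}),
    )
    return json.loads(response["Payload"].read())

def test_handler_mocks_downstream_lambda():
    fake_payload = mock.Mock()
    fake_payload.read.return_value = json.dumps({"price": 42}).encode()

    # Every downstream Lambda the handler touches needs a mock like this.
    with mock.patch.object(lambda_client, "invoke",
                           return_value={"Payload": fake_payload}) as invoke:
        result = handler({"order_id": "123"}, None)

    invoke.assert_called_once()
    assert result["price"] == 42
```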

He then showed an example of a CD pipeline using AWS Lambda and S3.

Each build step is a single Lambda with S3 being used to pass zips of the code between the different steps (and also trigger subsequent steps).
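
A rough sketch of what one of those build-step Lambdas might look like (the bucket names are hypothetical and the actual build work is elided):

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket for handing off to the next pipeline step.
NEXT_STEP_BUCKET = "pipeline-step-2-input"

def handler(event, context):
    # Triggered by an S3 'ObjectCreated' notification for the
    # previous step's output zip.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]

    s3.download_file(bucket, key, "/tmp/build.zip")
    # ... unzip and run this step's build/checks here ...

    # Uploading to the next bucket triggers the next step's Lambda.
    s3.upload_file("/tmp/build.zip", NEXT_STEP_BUCKET, key)
```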

My thoughts: The different-versions-for-different-environments idea seemed mad, and it also goes against the AWS best practice of having separate accounts for separate environments. I’ve used Lambdas before for thumbnail generation on image upload, which was neat, but using them for your CD pipeline sounds crazy.

openQA is used by openSUSE Tumbleweed (which releases every 3-4 days, whenever it passes its ~400 test cases) as well as by openSUSE Leap and Red Hat.

My thoughts: I thought this talk would be interesting as I’ve been working on a lot of QA stuff at LBG, but it was quite specific to OS testing. Their test runner is quite clever, as they test against lots of different architectures and setups, but the tests themselves are roughly on par with Wraith (image diffing).
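
For reference, Wraith-style image diffing is essentially a pixel comparison - a minimal sketch with Pillow (the file names are hypothetical):

```python
from PIL import Image, ImageChops

# Hypothetical before/after screenshots of the same screen.
baseline = Image.open("baseline.png").convert("RGB")
candidate = Image.open("candidate.png").convert("RGB")

diff = ImageChops.difference(baseline, candidate)

# getbbox() returns None when the images are pixel-identical,
# otherwise the bounding box of the region that changed.
if diff.getbbox() is None:
    print("Screens match")
else:
    diff.save("diff.png")
    print("Screens differ - see diff.png")
```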

Just like a regular CI pipeline, except your build step produces a Docker image (analogous to a JAR file) which is then hosted in a private Docker registry (analogous to Nexus).
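
Sketching that build step with the Docker SDK for Python (the registry address, image name and build number are my own assumptions):

```python
import docker

client = docker.from_env()

# Hypothetical private registry and image coordinates.
registry = "registry.example.com:5000"
build_number = 42  # e.g. from the CI server
tag = f"{registry}/myapp:1.0.{build_number}"

# Build the image from the checked-out workspace (Dockerfile at its
# root), then push it to the private registry - the Docker analogue
# of 'mvn deploy' publishing a JAR to Nexus.
image, _build_logs = client.images.build(path=".", tag=tag)
for line in client.images.push(tag, stream=True, decode=True):
    print(line)
```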

Removes the need for Chef, Puppet or Ansible to set up your servers - all you need is Docker.

You can also build in Docker on Jenkins (using a Docker image with all the build tools bundled) rather than installing the build tools on your build slaves.

Then extract the built artefact into a new Docker image for deployment (sketched below).
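
Here’s a hedged sketch of that build-then-extract flow, driving the Docker CLI from Python - the image names, Dockerfiles and paths are assumptions, and Docker’s newer multi-stage builds do much the same thing natively:

```python
import subprocess

def run(*cmd):
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Build inside a container with the full toolchain, copy the artefact
# out, then bake it into a slim runtime-only image.
run("docker", "build", "-t", "myapp-build", "-f", "Dockerfile.build", ".")
run("docker", "create", "--name", "extract", "myapp-build")
run("docker", "cp", "extract:/app/target/myapp.jar", "./myapp.jar")
run("docker", "rm", "extract")
run("docker", "build", "-t", "myapp:latest", "-f", "Dockerfile.runtime", ".")
```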

You can use Kubernetes as a container runner to host the resulting image for acceptance tests.
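
Something like this, I’d imagine - a sketch that spins the image up as a pod with kubectl, waits for it to be ready, and tears it down after the tests (the pod name and image are hypothetical):

```python
import subprocess

def kubectl(*args):
    subprocess.run(("kubectl",) + args, check=True)

# Hypothetical pod name and image from the earlier build step.
kubectl("run", "myapp-test",
        "--image=registry.example.com:5000/myapp:1.0.42",
        "--restart=Never")
kubectl("wait", "--for=condition=Ready", "pod/myapp-test",
        "--timeout=120s")
try:
    # ... point the acceptance test suite at the pod here ...
    pass
finally:
    kubectl("delete", "pod", "myapp-test")
```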

Docker isn’t all bells and whistles though - you can have security issues, e.g. from uncontrolled upstream Docker images.

My thoughts: It sounds like it would be reasonably easy to convert from an existing CI pipeline to one that uses Docker. The main benefits would be ease of setting up new servers and updating existing runtime environments. Downsides are security and Docker images being big.

Pokemock takes your YAML OpenAPI spec and provides a mock implementation of it. (Drakov is a similar mock server that uses API Blueprint specifications.)
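
A consumer test against such a mock might look like this - assuming the mock is already running locally on port 8080 (how you start it depends on the tool); the endpoint and fields are hypothetical:

```python
import requests

# Assumes a mock server generated from the OpenAPI spec is already
# running locally. The endpoint and fields below are hypothetical.
MOCK_BASE_URL = "http://localhost:8080"

def test_get_user_matches_contract():
    response = requests.get(f"{MOCK_BASE_URL}/users/123")
    assert response.status_code == 200
    body = response.json()
    # The mock returns spec-conformant (often randomised) data, so
    # assert on shape and types rather than exact values.
    assert isinstance(body["id"], (str, int))
    assert "name" in body
```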

My thoughts: I’ve used Swagger before but only for generating documentation for an API. Specifications, and the tests and mocks that you can create from them, sound useful if you’ve got an API consumer building their app at the same time as you build the API.