CI/CD: The driving force behind DevOps

DevOps was not created solely on the idea that developers and operations should play nice together. DevOps is the cultural transformation organizations go through on the road to modern application delivery. The end goal is the ability to release high-quality software more frequently. DevOps enables this by promoting communication and collaboration.

“Today, teams are supposed to make changes a lot faster. Your chances of doing that without developers and ops people working together successfully are pretty much zero,” said Paul Stovell, founder and CEO of Octopus Deploy.

The driving force behind DevOps, the force that relies on that cohesion and brings it into the modern way of developing and delivering software, is the CI/CD pipeline.

“While DevOps speaks more to the organizational transformation that takes place in companies that are undergoing digital transformations, the CI/CD pipeline is the engine that drives DevOps success—and helps them deliver better apps, faster. It’s the core business process by which companies transition from manual, monolithic application delivery to automated, modern application delivery,” said Jeff Scheaffer, general manager of continuous delivery at CA Technologies.

The CI/CD pipeline is a term that often comes up when talking about DevOps because it was designed to create a bridge between teams and help them see the bigger picture, according to Sezgin Küçükkaraaslan, co-founder and VP of engineering at OpsGenie. That bigger picture is getting software out and to customers.

“CI/CD pipelines are the fastest way to production. They enable devs to easily build, package, integrate, test, and release code by automating manual, error-prone steps,” he said.

While DevOps focuses on the culture, CI/CD focuses on the process and tools necessary to help teams adapt to a culture of continuous everything, Küçükkaraaslan added.

The CI/CD pipeline is a key enabler of DevOps because it removes the friction within the DevOps process so that changes can happen more quickly and go to production faster, according to Octopus’ Stovell. The more friction you remove, the faster the cycle happens, he explained.

“It means you are moving the business forward and creating this beautiful feedback cycle and a continuous improvement environment,” said Stovell.

How to keep the pipeline flowing for you

One way to think of the pipeline is as an ecosystem or lifecycle, according to Dan McFall, president and CEO of Mobile Labs. “It is actually a loop back in upon itself to be re-released. That is what we are talking about in the pipeline. It is a continuous run of writing code, testing code, deploying code, retesting it in production and then completing the feedback to the whole process again. It is the ability to release with confidence and keep running everything,” he said.

Thinking about the pipeline more broadly will allow you to see the entire lifecycle of not just getting software into production, but how it gets there and what happens to it afterwards. That way if things fail, you can see what went wrong and recover easily, according to Octopus’ Stovell.

The most important aspect in the CI/CD pipeline is the C, which stands for continuous. In order to be successful at CI and CD, you have to have “the ability to constantly move without having to halt everything,” said Patrick Poulin, founder of API Fortress.

The pipeline consists of a release stage where you understand what you are creating, then testing stages, a pre-production stage, a deployment stage and then ultimately production. Of course, this is an oversimplification of the pipeline, according to Robert Stroud, chief product officer for XebiaLabs, but the idea is to move through these stages or approval points in an automatic fashion. “One of the opportunities in the industry at the moment is there are a couple of hand-off points where we hand off to the development team, testing team, staging team and then the deployment team,” said Stroud. “The real opportunity for velocity is automation across all those steps and stages.”

It is visualized as a pipeline because changes ideally flow from start to finish one after another, according to CA’s Scheaffer.

At a high level, the pipeline “includes compiling, packaging, and running basic tests prior to a code base merge. After your code is in the base, the main branch of your version control software, additional tests are run to ensure your apps work with real configuration and other services. Performance and security tests are also run at this point. From here you deploy code to staging and then to production,” said OpsGenie’s Küçükkaraaslan.
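The staged flow Küçükkaraaslan describes can be sketched as a gate sequence where each stage must pass before the next runs. This is a minimal illustration, not any vendor’s implementation; the stage names mirror the article and the stage bodies are placeholders for real build and test tools:

```python
# Sketch of a staged CI/CD flow: each stage must pass before the next runs,
# and a failure halts the pipeline before anything reaches production.

def run_pipeline(stages):
    """Run (name, stage) pairs in order; stop at the first failure."""
    for name, stage in stages:
        if not stage():
            return f"pipeline halted at: {name}"
    return "deployed to production"

# Placeholder stages -- a real pipeline would shell out to build/test tools.
stages = [
    ("compile", lambda: True),
    ("package", lambda: True),
    ("unit tests", lambda: True),
    ("integration tests", lambda: True),
    ("deploy to staging", lambda: True),
    ("deploy to production", lambda: True),
]

print(run_pipeline(stages))  # -> deployed to production
```

In practice a CI server (Jenkins, GitLab CI, and the like) plays the role of `run_pipeline`, and the value is that the halt-on-failure rule is automated rather than enforced by hand-offs between teams.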

The best way to keep the pipeline working for your business is keeping it simple, visible and measurable, according to CA’s Scheaffer. Key factors here include automation and orchestration of the pipeline, continuous improvement, alignment with all stakeholders, and the ability to assess what good looks like.

“DevOps allows you to make progress in more incremental and manageable chunks. It gives you the ability to have more confidence when software is ready and that you are truly delivering the right thing,” said Mobile Labs’ McFall.

Beware of kinks

Your pipeline should be fluid, with stages occurring simultaneously. “For example, testing is not waiting for development to complete writing code to start the testing process. Instead, testing occurs in tandem with development—continuously testing smaller chunks of code in parallel with development,” said CA’s Scheaffer.

Scheaffer explained the pipeline is like a fiber optic cable containing many stages of glass fibers. “Each glass fiber can represent the workflow of an individual application, but you will likely have many different applications moving through your pipeline, and you are coordinating releases for multiple strands,” he said.

However, having a bunch of moving parts happening at once can easily introduce complications and complexities. Don’t let your pipeline become a bottleneck. OpsGenie’s Küçükkaraaslan suggested DevOps teams constantly be monitoring all of their components to ensure they are uncovering problems and addressing them as soon as possible.

In addition, he explained teams should keep a close eye on test performance. “It can be tempting to rush new code into production, but it’s very dangerous to do this without the right testing. The complexity of systems’ interdependencies means there are no limits to what can go wrong. Monitoring how new code performs in a test environment is essential to releasing stable builds. Try to find the optimum balance between fast tests and tests running against an environment that simulates production,” said Küçükkaraaslan. XebiaLabs’ Stroud explained having good testing suites and good test coverage are key to knowing what has been deployed, where and when.

The CI/CD pipeline isn’t something you can buy in a box, Küçükkaraaslan explained. That means the pipeline is going to evolve over time, and teams and businesses need to be able to evolve with it.

“We continuously work hard to improve our CI/CD performance, and embrace good practices. This means we are always gathering feedback from everyone involved in delivering new features in order to identify additional improvements. As our business evolves, so must our CI/CD processes,” he said.

If your CI/CD pipeline doesn’t create a feedback loop, then you are not really doing DevOps, Mobile Labs’ McFall explained. “The benefit of doing this is it allows you to do things a little more quickly with high confidence and more education,” he said. Just because teams can now push everything to production, doesn’t mean they should always do it. That’s where the feedback loop comes into play because it ensures you are always listening to customers and not just doing this to push out new things. “You need to stay abreast of the continuous notion of best practices out there. Be aware of what your peers are doing, where their success is, and if there are opportunities to not make the same mistakes others have,” said McFall.

Stay away from quick fix approaches, said CA’s Scheaffer. “With anything in life, the quick win is tempting but can come with a price. In CI/CD pipelines, this shows up as technical debt that manifests as plateaued progress, the inability to engage other teams, and an inconsistent understanding of what ‘good’ is within an organization. The result is too much rework spent rebuilding the pipeline process,” he explained. McFall believes organizations constantly try to find a one size fits all approach to tooling, when what they really need to do is find out what makes sense to their business and risk profile.

Leverage automation as much as possible, according to Stroud, but don’t fall into silos of automation. Stroud explained a common problem in the pipeline today is having silos of automation where organizations aren’t talking across all departments yet. “One of the pivotal rules in DevOps is that we need to have consistent collaboration across the toolchain, not having that is one of the biggest traps for young players right now,” Stroud explained. Organizations can address this by standardizing and rationalizing the tools they use in each of these silos.

Lastly, don’t fall into a blame culture. When something fails it is not so much about asking the who, what, where and why but rather how can you do better, how can you drive velocity, and how can you deliver better outcomes, according to XebiaLabs’ Stroud. “You want people to experiment and use trial and error to learn. This has to be a basic tenet of DevOps rather than having deep post mortems of who we can blame after a piece of change is deployed and doesn’t meet requirements. Use that experience or feedback to learn from it and change your processes in the future so you can continually drive value,” he said.

Continuous delivery vs continuous deployment

While CD is most commonly known as continuous delivery, many organizations are beginning to think of it in terms of continuous development, according to Robert Stroud, chief product officer for XebiaLabs.

What is happening today is that software changes are getting smaller and more incremental. This shift is enabling teams to reach a point where changes can be deployed automatically. “The reality is where we are actually going to be ending up is in a situation where we are deploying at mini rates. Change is happening instantaneously. Maybe on a weekly basis or in some organizations they collect and group the change and deploy it on a monthly basis. It depends on the business and the business appetite for transition,” said Stroud.

In order to keep up with the change, teams need to be practicing good deployment methods such as canary releases where changes are rolled out to a small sample size of their audience at once so the change can be validated, or a blue-green deployment where the release is staged in a manner that allows for various parts of the audience or customers to receive the change in a controlled manner. This also enables feedback to the developer so they can make sure what is being delivered was actually desired.
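The canary pattern described above boils down to a routing split plus a promote-or-rollback decision. Here is a minimal sketch; the 5% traffic split and 1% error threshold are illustrative assumptions, not recommendations from the article:

```python
import random

# Sketch of a canary rollout: send a small fraction of requests to the new
# version, watch its error rate, then promote it or roll it back.
# The 5% split and 1% threshold are illustrative, not prescriptive.

CANARY_FRACTION = 0.05   # 5% of traffic goes to the canary
ERROR_THRESHOLD = 0.01   # roll back if more than 1% of canary requests fail

def route(request_id):
    """Pick a version for one request (random split in this sketch)."""
    return "canary" if random.random() < CANARY_FRACTION else "stable"

def evaluate_canary(requests_served, errors):
    """Promote the canary only if its observed error rate is acceptable."""
    error_rate = errors / requests_served if requests_served else 1.0
    return "promote" if error_rate <= ERROR_THRESHOLD else "rollback"

print(evaluate_canary(requests_served=1000, errors=3))   # healthy canary
print(evaluate_canary(requests_served=1000, errors=50))  # failing canary
```

A blue-green deployment replaces the fractional split with two full environments and an atomic traffic switch, but the validate-then-commit decision is the same.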

A common mistake when it comes to deploying software is that teams will compile the code, deploy it to a test environment and when it is time for production they will compile the code again and deploy it to production, said Paul Stovell, founder and CEO of Octopus Deploy. “That is a bad practice because a lot of things can sneak in when you are building it a second time. You have no guarantee that your test is really what is going into production,” he said. The right way to do it is to build once, keep a copy of the build and the files that came out of the build process, and then deploy that same output to test and to production.
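One way to enforce Stovell’s build-once rule is to fingerprint the artifact at build time and refuse to deploy anything that doesn’t match. This is a hand-rolled sketch of the idea, not Octopus Deploy’s implementation:

```python
import hashlib

# Sketch of "build once, deploy the same artifact everywhere": record a
# checksum at build time and verify it before every deployment.

def fingerprint(artifact_bytes):
    """Checksum recorded at build time, alongside the stored artifact."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def deploy(environment, artifact_bytes, expected_checksum):
    """Refuse to deploy anything that differs from the tested build."""
    if fingerprint(artifact_bytes) != expected_checksum:
        raise ValueError(f"{environment}: artifact does not match tested build")
    return f"{environment}: deployed {expected_checksum[:12]}"

build = b"compiled application bytes"   # produced exactly once
checksum = fingerprint(build)

print(deploy("test", build, checksum))
print(deploy("production", build, checksum))  # same bytes, same process
```

A rebuilt artifact, even from the same source, would fail the check, which is exactly the “things can sneak in” failure mode Stovell warns about.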

The other way to successfully achieve continuous deployment is to have a consistent process for each environment. “The best way to guarantee the production deployment is going to work is to make sure the exact process you run into production is as close as possible to every other environment,” said Stovell.

A higher level view of the deployment pipeline is known as a DevOps Assembly Line, according to OpsGenie’s Küçükkaraaslan. “The challenge is that the DevOps toolchain is not as fully developed as what is available for CI/CD, and involves human dependencies that can be inefficient. The DevOps Assembly Line attempts to connect activities into event-driven workflows that can then be associated with specific applications or services,” he said.

Implementing an API-led DevOps approach

If you think of the CI/CD pipeline as a hose that is constantly running to supply DevOps, API testing is a kink that is slowing down the flow, according to Patrick Poulin, founder of API Fortress.

API testing has become a pain point in a DevOps development process simply because it has been ignored up until now. “It is one of those things that people have been procrastinating on because maybe they either haven’t had a tool out there that makes it easy or because it requires a lot of development work,” he said.

If teams aren’t testing the APIs, either they won’t catch when an error happens or it will take them weeks to even discover the error, because unless an API is entirely down, they won’t see it. “It ends up being an expensive error that can last because the teams are just not comprehensively testing it,” said Poulin.

DevOps teams need to put the same level of effort into API testing as they put into automating the testing of websites and apps. “APIs are just as critical, if not more critical, than the front end,” Poulin said. “APIs touch every part of the company, and therefore insight into their testing, reliability and uptime should be available.”

Teams should provide complete coverage of all their APIs, not just testing a single endpoint but also creating integration tests, multi-step tasks that reproduce common user flows. It is not only important to know how one feature works; you need to know how it works when coupled with a bunch of other features or processes, Poulin explained. “When you test everything out in a similar way to how a real world user would experience it, then you start seeing the cracks in between pieces,” he said.

The key to all of this is to find a tool or platform that enables everyone from the CEO all the way down to the developer to access answers to questions like are my APIs up? Was there an API issue yesterday? “If you have the right tool in place, anyone can get those answers in just a few clicks and get full understanding into the health of their API program,” said Poulin.

Bringing infrastructure to the pipeline

There are two value propositions when it comes to DevOps and the CI/CD pipeline, according to Goran Kimovski, chief technology officer for cloud management solution provider TriNimbus. The first principle is constantly integrating code changes and testing them against a set of defined test criteria. This is a very attractive proposition because you are constantly making sure the code works, according to Kimovski. The second principle is often harder to achieve, and that is accelerated adoption. The application code a developer writes needs to be deployed to some kind of infrastructure, but this process is typically associated with manual and heavy-handed change management processes, Kimovski explained.

“The software industry has had a lot of years to develop things like unit testing and concepts like automated integration testing, end user testing, performance testing and what not,” he said. “The whole concept of testing infrastructure is still in an early stage where it is inherently difficult to test code.”

According to Kimovski, that is because developers need to provision and deploy an infrastructure. The traditional scenario of doing this includes actual hardware or a virtual environment that involves traditional networking engineers, IT operations and system admins to understand the technology, put the hardware together, set up the networking, create a virtual machine and hand it over to developers or QA to configure and then to someone else to deploy the app. “It is being handled through multiple people, and relies on humans to do a lot of manual labor. It is not stable or repeatable,” said Kimovski. Not only is this time consuming, but it introduces outside costs, he added.

To successfully implement the CI/CD pipeline for both infrastructure and application code, Kimovski suggests utilizing the cloud. “Provisioning a server in the cloud, running it for 15 minutes and shutting it off means you only pay for those 15 minutes,” he said. “Provisioning a server in some datacenter somewhere means you have to have the physical hardware, so it is an additional cost to your business.” In addition, using physical hardware for testing the infrastructure code provides limitations because not everyone can access it. The cloud democratizes that and moves it into an operating expense, he explained.
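The pay-per-use pattern Kimovski describes, provision a server, test against it, and tear it down minutes later, maps naturally onto a context manager that guarantees teardown. `CloudStub` below is a stand-in for a real cloud SDK, and the provisioning calls are hypothetical:

```python
from contextlib import contextmanager

# Sketch of ephemeral test infrastructure: provision a server, run tests,
# and tear it down even if the tests fail, so you only pay for minutes.
# CloudStub stands in for a real cloud SDK.

class CloudStub:
    def __init__(self):
        self.running = set()

    def create_server(self, name):
        self.running.add(name)
        return name

    def destroy_server(self, name):
        self.running.discard(name)

@contextmanager
def ephemeral_server(cloud, name):
    """Guarantee teardown even if the tests against the server fail."""
    server = cloud.create_server(name)
    try:
        yield server
    finally:
        cloud.destroy_server(server)

cloud = CloudStub()
with ephemeral_server(cloud, "infra-test-01") as server:
    print(f"running infrastructure tests on {server}")
print("servers still running:", len(cloud.running))  # -> 0
```

Because teardown sits in a `finally` block, a failed test run cannot leave billable servers behind, which is the cost discipline that makes testing infrastructure code in the cloud practical.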

About Christina Cardoza

Christina Cardoza is the News Editor of SD Times. She is responsible for the oversight of the daily news published to the website as well as the company's weekly newsletter, News on Monday. She covers agile, DevOps, AI, machine learning, mixed reality and software security. She is an undeniable nerd who loves Marvel comics and Star Wars. Follow her on Twitter at @chriscatdoza!