Over the last 10 years, continuous delivery has become one of the most discussed development practices. The theory is simple: ship software updates rapidly, as soon as they are ready. While this once sounded like magic, breakthroughs in automation engineering have made frequent updates possible.

The idea of continuous delivery was developed and shared by Martin Fowler, Chief Scientist at ThoughtWorks. He described the background of the practice this way: “Every developer integrates with everybody else’s work frequently. […] And, if something has failed, nobody has a more important job than fixing it”.

So, let’s explore continuous delivery and integration in technical, operational, and business terms.

Evolution of development practices that led to frequent software updates

There are three main approaches to software development: waterfall, agile, and continuous delivery. All three are still used in software engineering, but as methods and tools have improved over time, each can be seen as an evolution of the previous one.

Waterfall. All development stages, from planning to production deployment and maintenance, follow one another in sequence. Waterfall has proven inefficient for products that need constant updates: its lengthy development cycles don’t allow for quick reaction to customer feedback. Today waterfall is mostly used for short, fixed-cost projects that won’t have enough time to age before the release date.

Waterfall has a conservative, non-flexible nature

Agile. Agile is a philosophy built around iterative development. There are many agile methods, but most of them entail short engineering cycles that include all the main stages: planning, development itself, testing, and deployment. Each cycle takes one or two weeks. The idea behind Agile is to ship the product as quickly as possible and update it incrementally based on customer feedback. Agile methods remain the mainstream in modern software development because they keep the product adaptive to constantly changing market and customer needs.

In agile development, each iteration takes 1-2 weeks

Continuous delivery. You can find multiple interpretations of continuous delivery. We consider it an evolutionary development of Agile principles. The method doesn’t require short release iterations; it simply allows committing new pieces of code as soon as they are ready. This way, developers can update the product multiple times per day, continuously delivering value to users. This is achieved through a high level of testing and deployment automation.

In CD, builds are created multiple times per day

Business value of continuous delivery

The main idea behind continuous delivery (CD) is to have any update ready for release at any given moment. This is what makes the practice so different from traditional Agile methods like Scrum, where iterations are 1-2 weeks long and each new feature may have to wait before being released into production.

In 2014, Mark Warren, European Marketing Director at Perforce, said that 65 percent of software developers, managers, and executives practice continuous delivery in their companies, while 28 percent report using it across some of their projects. Eighty percent of respondents were working on software-as-a-service (SaaS) products.

Continuous delivery adoption is likely even more widespread now, as increasingly more software providers embrace the SaaS model. Cloud-based services are the main field for adoption of the practice, as these products can receive customer feedback immediately and just as immediately respond with fixes and updates.

So, what are the main reasons to consider CD?

Reduced time to value. The velocity of development is high, and the time gap between proposing a new feature and delivering it is significantly reduced. The so-called “integration hell” is mitigated: the team spends less time on debugging and more time on developing new things. This also means a shorter feedback loop, the time between a user interaction and the update that responds to it.

Maximum automation. Continuous delivery is only possible when testing and deployment stages are automated. Thus the only time-consuming aspect is the programming itself. We’ll talk about technical details in a minute.

High quality and low risk. As every update undergoes several stages of automated verification before deployment, the number of possible mistakes and bugs is significantly reduced.

Data-driven management. The strategy also allows for constant monitoring of development-related data. You get visibility into your processes and, eventually, insights on improving your existing workflow and eliminating engineering bottlenecks.

Reduced cost. One of the main disadvantages of long release cycles is that the cost of a mistake keeps growing the longer a bug stays in production. If it survives multiple updates, the cost of fixing it starts growing exponentially. Continuous integration reduces this cost by revealing bugs as early as possible.

Adoption framework: how to approach continuous delivery

In order to introduce the method into your development workflow, your software engineering team has to adhere to a set of requirements that make the approach possible:

Following the core principles of the continuous delivery method: continuous integration and deployment

A software engineering infrastructure that connects all product delivery aspects into a unified ecosystem

Project repository with a minimum number of code branches

Automated tests outweigh manual ones

Use of a production environment clone that exactly mimics real-life conditions

Continuous delivery is more complicated than these five aspects, but they define whether your organization is able to apply the practice. Let’s discuss these requirements in more detail.

1. Continuous integration and deployment

While CD defines the methodological business principle, continuous integration (CI) describes how this principle is implemented at the software engineering level. In other words, it dictates the practice to the development team:

developers make code commits multiple times per day

each code piece passes through a set of automated tests to detect every error and bug as early as possible

once an issue is detected, fixing it becomes the development team’s top priority.
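A minimal sketch of what “passing through automated tests” looks like at the code level. The `slugify` function and its checks are hypothetical examples, not part of any particular CI system:

```python
# A tiny function that a developer might commit, plus the automated
# checks that run against it on every commit. In a real CI setup the
# test runner (not manual review) decides whether the commit is accepted.

def slugify(title):
    """Turn an article title into a URL slug, e.g. for a blog engine."""
    return "-".join(title.lower().split())

# Checks like these run on every commit, so a regression surfaces
# within minutes instead of at the end of a release cycle.
def test_lowercases_and_joins():
    assert slugify("Continuous Delivery") == "continuous-delivery"

def test_ignores_extra_whitespace():
    assert slugify("  CI  and  CD  ") == "ci-and-cd"

test_lowercases_and_joins()
test_ignores_extra_whitespace()
print("all commit checks passed")  # a failure here would fail the build
```

The key point is frequency: the same checks could run before a quarterly release, but running them on every small commit is what keeps the fix cheap.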

Continuous deployment is another CD principle meaning that every validated feature can be deployed into production at any given moment. But continuous deployment, unlike CI, isn’t always necessary for CD. In the words of Carl Caum from Puppet, “Continuous delivery doesn’t mean every change is deployed to production ASAP. It means every change is proven to be deployable at any time.”

The ability to deploy updates immediately mostly fits SaaS models, where developers have full control over the product that an end user faces. If you have client software, which is installed on users’ devices, it’s more common to bundle these updates and let users know that some changes are going to be applied. For instance, Facebook can practice continuous delivery in their development workflow, but your mobile app will ask permission to update itself.

2. Continuous integration infrastructure

There are many CI systems that provide the software engineering infrastructure to implement the practice. Although you can build custom software for a CI workflow, there are a number of off-the-shelf solutions; the most popular ones are CruiseControl, Atlassian Bamboo, and TeamCity. These systems are basically testing and building environments that unify the entire development effort into an end-to-end process. They track new code commits and allow for writing and integrating automated tests that will run every time new code is committed. How does this work?

New code monitoring. Every time a new piece of code is committed to the repository (code storage), the CI system detects this change.

Packaging. The new code is automatically packaged for tests.

Automated testing. Packages go through a number of automated tests validating that this new code works properly.

Deployment. Once packages are accepted, the CI system deploys the update to a production server. If deployment itself isn’t automated, the system ensures that packages are ready for production deployment.

Due to high automation, this process can occur multiple times per day, revealing bugs earlier than if developers had been committing big chunks of code.
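The four stages above can be sketched as a toy pipeline. Every name here (`package`, `run_tests`, `deploy`, `run_pipeline`) is illustrative; real CI systems configure these stages declaratively rather than in application code:

```python
# Toy model of the CI workflow described above: package a detected commit,
# run automated tests against it, and deploy only if everything passes.

def package(commit):
    # Packaging: bundle the new code into a testable artifact.
    return {"commit": commit, "artifact": "build-{}.tar.gz".format(commit["id"])}

def run_tests(build):
    # Automated testing: every check must pass for the build to be accepted.
    return all(check() for check in build["commit"]["checks"])

def deploy(build):
    # Deployment: in a real setup this pushes the artifact to production.
    return "deployed " + build["artifact"]

def run_pipeline(commit):
    build = package(commit)
    if run_tests(build):
        return deploy(build)
    # A failed build never reaches production.
    return "build failed: fix before anything else"

good_commit = {"id": "a1b2c3", "checks": [lambda: 2 + 2 == 4]}
bad_commit = {"id": "d4e5f6", "checks": [lambda: 1 == 2]}
print(run_pipeline(good_commit))  # deployed build-a1b2c3.tar.gz
print(run_pipeline(bad_commit))   # build failed: fix before anything else
```

The design point is the single gate: there is no path from commit to production that bypasses the test stage.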

3. One mainline of the project in repository

In layman’s terms, a repository is a storage for keeping and managing code. In continuous integration, this repository should be the “home” for all the code written for a particular project. The repository also keeps test scripts, third-party libraries, and other things used in development, i.e., everything needed for a build.

To get all the benefits of this approach to deployment, the number of code branches in the repository should be reduced to a minimum. Traditionally, code branches from multiple developers merge into the mainline every so often, depending on the update scope. In CI, the software environment maintains a minimal number of branches and continuously sends new pieces of code directly to the master branch, providing quality assurance on the go.

The more code branches and versions of the same product you have, the higher the chances of conflicts between them.
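A quick toy simulation makes this intuition concrete: if several long-lived branches each modify a random handful of files, the chance that at least two of them touch the same file (a potential merge conflict) climbs steeply with the branch count. The file counts below are invented for illustration:

```python
import random

def conflict_chance(branches, files_total=50, files_per_branch=5, trials=2000):
    """Estimate the chance that at least two branches edit the same file."""
    rng = random.Random(42)  # fixed seed so the estimate is reproducible
    conflicts = 0
    for _ in range(trials):
        touched = []
        for _ in range(branches):
            # Each branch edits a random subset of the project's files.
            touched.extend(rng.sample(range(files_total), files_per_branch))
        if len(touched) != len(set(touched)):  # some file was edited twice
            conflicts += 1
    return conflicts / trials

for n in (2, 5, 10):
    print(n, "branches:", conflict_chance(n))
```

With these toy numbers, even two branches collide a substantial fraction of the time, and ten branches collide almost always, which is exactly why CI pushes work to a single mainline in small increments.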

4. Testing automation

Continuous integration fully depends on automated tests. They verify whether the new code works properly, and if it does not, the system fails the build. The point of this approach is that developers can’t continue working with the build until the bugs are removed.

Although full test automation is not a must, CI is worth adopting only if the number of automated test cases exceeds the number of manual ones. Fortunately, CI systems allow for integrating a large variety of automated tests, ranging from smoke testing (to check whether your product will simply launch after updates) to security and performance tests. On top of that, your QA engineers can choose from multiple programming languages to write these test scripts.
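As a minimal illustration, a smoke test can be as small as checking that the application starts at all after an update. The `create_app` function here is a hypothetical stand-in for a real application entry point:

```python
# A smoke test answers one question: does the product launch after an update?
# create_app and its config keys are invented for this sketch.

def create_app(config):
    # Imagine this wiring up routes, a database connection, etc.
    if "db_url" not in config:
        raise RuntimeError("misconfigured: no database URL")
    return {"status": "running", "config": config}

def test_smoke_app_starts():
    # The cheapest, fastest check in the suite; it runs before anything else.
    app = create_app({"db_url": "postgres://ci-test-db/app"})
    assert app["status"] == "running"

test_smoke_app_starts()
print("smoke test passed")
```

Deeper functional, security, and performance suites only run once a smoke test like this passes, so a completely broken build is rejected in seconds.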

5. Production environment clone for testing

The testing environment – often called a staging environment – should be a precise copy of the production environment. This means running tests against the same databases, infrastructure, patches, network structure, etc. In other words, you must do everything to fully understand how the production version will behave, and test it on different browsers and devices automatically.
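One lightweight way to keep such a clone honest is an automated parity check that compares the staging configuration against production. The configuration dictionaries and key names below are hypothetical:

```python
# Sketch of an environment-parity check: the testing clone should match
# production on everything except credentials and hostnames.

production = {"os": "ubuntu-22.04", "db": "postgres-15",
              "patch_level": "2024-06", "host": "prod.example.com"}
staging = {"os": "ubuntu-22.04", "db": "postgres-15",
           "patch_level": "2024-06", "host": "staging.example.com"}

IGNORED = {"host"}  # keys allowed to differ between environments

def parity_gaps(prod, stage, ignored=IGNORED):
    """Return the keys where the clone has drifted from production."""
    keys = (set(prod) | set(stage)) - ignored
    return sorted(k for k in keys if prod.get(k) != stage.get(k))

gaps = parity_gaps(production, staging)
print("environment drift:", gaps or "none")  # prints: environment drift: none
```

Running a check like this inside the pipeline itself turns “the clone mimics production” from a policy into a tested invariant.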

The main adoption challenges

According to the Perforce data, most companies see the adoption of continuous integration and continuous delivery as a long process. In 2014, 53 percent of them said that it would take 12 months, while 85 percent agreed that it would take less than two years. Either way, adopting this approach is a long-term strategy that involves a number of challenges to overcome.

The cost of automation

While CI-based development reduces the cost of mistakes and boosts productivity, its adoption requires substantial budget and time investment. The lion’s share of it is the need to hire and retain QA automation engineers who will incrementally cover your evolving product with automated tests. If you’re starting from scratch, the groundwork for QA automation may take anywhere from 6 to 18 months.

Microservices architecture

In a nutshell, the microservices (or components) architecture is a software pattern in which different functional elements are decoupled and can be shipped independently of each other. The opposite is the monolithic type of architecture, where functional components are deeply tied to each other and every change impacts the overall code base.

If your software product is a large monolith, that’s not a death sentence for continuous delivery. But it will make life harder for developers working within the continuous delivery logic. Monolithic code might not be well understood, and it becomes more demanding to reconcile the work of multiple development teams, each working at its own pace.

Embracing DevOps culture

Your developers won’t be able to make relevant, user-oriented updates without frontline operations workers. DevOps culture entails close collaboration between software engineers (the dev part) and operations workers (the ops part). The latter stands for all the people involved in the after-launch period of a product’s life cycle: systems engineers, administrators, operations staff, network engineers, security specialists, etc. Building a DevOps culture means that these two sides work as a single unit to enable a fast feedback loop, from users through operations to developers.

According to the 2017 InformationWeek survey, only 18 percent of tech organizations have fully adopted DevOps, and 32 percent plan to do so within 12 months. These statistics highlight two things: 1) a limited number of companies work on DevOps; 2) merely using DevOps tools doesn’t necessarily mean that the corporate engineering culture works this way.

Integration challenges

This is a minor challenge compared to the others, but development teams sometimes face difficulties adopting new software – such as CI and version control systems – and integrating it into their old workflows. Luckily, there are many easy-to-use solutions on the market, such as TeamCity and other DevOps products, which can be chosen based on a team’s preferences, scale, and expertise level.

Frequent deployment

Even though developers are used to dividing their work into stages, it’s often hard to maintain such modularity. Builds can vary in scope and complexity, and your engineers may not have enough agility to think and code in a modular way. If they have been building monoliths for years, it will take a tangible effort to retrain them to work the new way.

Focus on bugs

Frequent commits and the need for immediate fixes may cause development delays. If a proposed new feature can’t pass testing, everyone concentrates on removing the bug, even if that feature is not a priority.

Production environment capacity

Your production environment clone will have to handle many operating systems, browsers, and devices, and achieving the necessary capacity becomes cost-sensitive. For example, each Firefox browser update takes about 200 hours of a single CPU to run all tests. Even if you aren’t Firefox, this means you don’t want to test your product on a laptop. Good practice is to use cloud providers like Amazon Web Services, Google, or Microsoft, which can provide enough computing capacity to make testing really rapid.

Organizational culture

Forty percent of respondents in a CloudBees report agreed that organizational traditions and the fear of tangible investments are the major barriers to continuous integration adoption. As mentioned, the automation, skilled QA engineers, and changes to a traditional workflow structure require time and financial investment, and some companies doubt it’s worth it.

Continuous integration as a way to deliver value

While challenges are an inevitable part of every innovation, an innovation’s success is defined by the success of the companies that have already adopted it. The largest players in the IT industry – Amazon, Google, and Facebook – adopted continuous delivery in their development long ago. The principles of constant and continuous testing ensure the stability of Google services, quick updates at Facebook, and LinkedIn’s financial success after implementing CI and CD practices.

Continuous integration is a way to perform stable and frequent deployments of high quality. As Martin Fowler said, “Frequent deployment is valuable because it allows your users to get new features more rapidly, to give more rapid feedback on those features, and generally become more collaborative in the development cycle”. Continuous integration is a way to remove the wall between companies and their customers, creating client-oriented, useful software.