Q&A on the Book Continuous Delivery in Java

Key Takeaways

Even though Dave Farley and Jez Humble's classic Continuous Delivery book came out a decade ago, the authors still see many organisations struggling with some of the concepts. The truth is that implementing continuous delivery (CD) is hard.

Referencing the work of Steve Smith, the authors believe that CD is primarily about delivering value to customers, and continuous delivery is achieved when stability and speed can satisfy business demand.

By focusing exclusively on Java — the language and platform that the authors use day in, day out — the book, Continuous Delivery in Java, is able to not only cover the goals, mindsets, and practices of CD, but also actual tools, libraries, and configurations in Java.

When implementing CD, it is important to monitor the key metrics for high performing teams from the book Accelerate (by Nicole Forsgren et al): lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage.

Java developers obviously need to be involved with the set-up and running of a CD pipeline. However, expecting the development team to control all aspects of the delivery pipeline can be overwhelming.

The book Continuous Delivery in Java by Daniel Bryant and Abraham Marín Pérez was released in late 2018, nearly ten years after the original "Continuous Delivery" book by Dave Farley and Jez Humble, and more than 20 years after Java's first public release.

InfoQ reached out to the authors to better understand from their experience why a book on Continuous Delivery specifically for Java and the JVM ecosystem was needed.

InfoQ: Nearly 10 years after the Continuous Delivery book first came out, why did you feel there was a need for a book specifically covering continuous delivery in Java?

Daniel Bryant & Abraham Marín Pérez: Even though Dave Farley and Jez Humble's classic Continuous Delivery book came out a decade ago, we have seen from our consulting experience that people are still struggling with some of the concepts. The truth is continuous delivery is hard. But even if people finally get what this practice is about, we felt many developers still had doubts about how best to put it into practice. In other words, we noticed a gap between the theory and the practice. We were keen to "stand on the shoulders of giants", and share our practical learning in this space.

By focusing on a particular subset of the programming population (Java developers), we were able to talk about continuous delivery in a more practical way: with actual tools, libraries, and setups. Granted, the options that we chose are not the only possible ones, but by carefully curating this particular set we help readers hit the ground running when they embark on their journey towards continuous delivery.

InfoQ: What are the main takeaways you expect readers to get from the book?

Bryant & Marín Pérez: One key message that we've been very careful to deliver is that there are no right or wrong decisions, only more or less fitting choices. Every organisation is different, and with different needs come different solutions. We were also keen to enable Java developers who are potentially not familiar with the configuration or operational side of delivering software to recognise key metrics, identify important decision points, and learn how to make the best tradeoffs when embracing continuous delivery.

In our book we have tried to describe what the general practice of continuous delivery entails, what the main challenges are, and an array of possible measures that can be taken to address them. Referencing the great work of Steve Smith, we believe that continuous delivery is primarily about delivering value to customers, and continuous delivery is achieved when stability and speed can satisfy business demand. We point out a bunch of useful metrics that can be tracked to allow developers to convince management of potential issues and bottlenecks, and also demonstrate improvement as they adopt some of our recommended practices.

As we've mentioned above, we also hope that the reader will walk away from the book with a list of Java-specific technologies or tools that they can experiment with when implementing continuous delivery practices. We often get emails or Twitter DMs asking us for technical recommendations, and so we have captured all of our favourite tools here.

If you are looking for an overall takeaway, then a summary message is that continuous delivery is an introspective and iterative exercise: at the end of the day, you must analyse your own practice, find your pain points, and iterate to improve your process. Continuous delivery implies continuous improvement.

InfoQ: In the book you talk about the evolution of Java. Could you briefly summarize the main changes in the last decade and how that affected (or not) how to think about doing Continuous Delivery in Java?

Bryant & Marín Pérez: One of the main benefits of continuous delivery is a shorter time to market: by building an organisation that can effectively introduce stable changes on a continual basis, we make an organisation that is more responsive to change. One of the traditional complaints against Java has always been its verbosity, and detractors who abandon Java typically do so in favour of more expressive languages where they feel they can be more productive. The evolution of Java in recent years has mostly been aimed at addressing these concerns. Features like lambdas, local variable type inference, or pattern matching have been available in other languages for many years, and they're now slowly making their way into Java.

This focus on language features has meant that some important changes relating to the packaging and runtime of modern software were missing from Java for quite some time. For example, developers' desire to minimise the size of the deployment artifact was only addressed by the modularisation work introduced in JDK 9, and adoption of this is still ongoing. As another example, the JVM only fully acknowledged resource limits when running within a container approximately two years ago in Java 10 (and the fixes required have been backported to JDK 8u131). As a result, Java adoption on some newer cloud and container platforms, and the ability to use some of the more innovative delivery pipeline technology, has been delayed. There are now many tools for packaging, testing, and optimising runtimes, and we have tried to point them out.
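This container-awareness gap is easy to check for yourself. The sketch below (the class name is ours, purely illustrative) prints the resources the JVM believes it has; on JDK 10+ running inside a resource-limited container these values respect the cgroup limits (e.g. `docker run --cpus=2 --memory=512m`), whereas older JVMs report the host machine's resources and may over-allocate threads and heap:

```java
// Illustrative sketch: what resources does the JVM think it has?
public class ContainerResources {
    public static void main(String[] args) {
        // On a container-aware JDK, these reflect cgroup CPU/memory limits;
        // on older JDKs they reflect the underlying host.
        int cpus = Runtime.getRuntime().availableProcessors();
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Available processors: " + cpus);
        System.out.println("Max heap (MB): " + maxHeapMb);
    }
}
```

Running this inside and outside a memory-limited container is a quick way to verify which behaviour your base image's JDK exhibits.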

It’s also worth noting that the recent switch to releasing a new major Java version every six months, as opposed to the previous two to three years cadence, is also aimed at delivering change more frequently. After all, the people behind Java are users themselves who also want to benefit from continuous delivery, and they’re making every possible effort to lead Java in this direction while keeping the stability that brought it to fame in the enterprise. If developers do want to rapidly adopt the performance gains or new features exposed in each of the new versions of Java (even if they are only adopting the LTS releases), then having a delivery pipeline that automatically tests your applications will be very useful.

InfoQ: Jez Humble said once in a talk that without a modular system architecture "you're going nowhere". In your experience as practitioners, how important is a decoupled architecture for effective Continuous Delivery?

Bryant & Marín Pérez: It's absolutely key. Being able to independently deploy new functionality within modules or components of a larger system provides many more options for increased speed and stability. Continuous delivery implies continuous change, and with every change there is always the possibility of mistakes. We are also increasingly building complex (adaptive) distributed systems, which can be challenging to understand, and they don't fail deterministically. When you are operating within a complex system or with a high rate of change, failure is not a matter of "if", it is a matter of "when". Once you accept the certainty of failure, you start planning for mitigating measures. This is the point where decoupling comes in: by creating clear boundaries between parts of your system, you create firewalls that can limit the impact of a mistake.

In our book we decided to focus on the microservices architectural style partly for this reason: you can get many of the benefits of continuous delivery with a well-architected, modular monolith, but that won't give you an isolation boundary at the point of deployment. The design and implementation of microservices are difficult to get right, but they can give you a modular system with isolation both at compile time and at runtime.

InfoQ: Following up on that, would you say microservices architectures are always the best fit for Continuous Delivery?

Bryant & Marín Pérez: We prefer to be cautious with "absolute" statements like this. Microservices architectures bring a lot of benefits, but they also have many drawbacks. A monolith is incredibly simple to run, and that's not something we should disregard so quickly: simplicity has its value too.

When we write code, we generally prefer not to overly future-proof the implementation, and instead refactor as we deem fit. We prefer to apply this philosophy to architecture too. Maybe start simple, with a monolith, and then break that monolith off into microservices as you feel the need. In other words, a microservices architecture might not always be the best architecture to start with, but it tends to be a good choice to evolve into. For readers who want to learn more about this, we definitely recommend Sam Newman's new book Monolith to Microservices.

InfoQ: What are some of the key non-technical aspects that can make or break successful adoption of Continuous Delivery?

Bryant & Marín Pérez: From our experience consulting with clients, the main non-technical differentiator is the involvement of the business in the practice of continuous delivery. The traditional approach sees IT in general as a cost center, and the business typically doesn’t get involved with it. They manage software in "projects" and "programmes", assuming they are time-bound activities with one specific deliverable at the very end of it. Continuous delivery implies a holistic approach with constant engagement, and the business cannot afford to simply "leave it to the techies".

Without the involvement of non-technical staff, continuous delivery cannot work.

InfoQ: Do you recommend any metrics or indicators for teams and organizations to measure the progress of their Continuous Delivery initiatives?

Bryant & Marín Pérez: This is a very interesting area, and one that continues to evolve as we increase our understanding of what continuous delivery entails. The seminal reference in this space is the book Accelerate by Nicole Forsgren, Jez Humble, and Gene Kim, and here the four key metrics that correlate with high performing teams are: lead time, deployment frequency, mean time to restore (MTTR), and change fail percentage. You definitely aren’t wasting your time by tracking any of these metrics.
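Two of the four Accelerate metrics can be computed directly from a team's deployment log. The following Java sketch illustrates one way to do that; the class, record, and field names are ours (not from Accelerate or the book), and a real pipeline would pull this data from its deployment and incident tooling:

```java
import java.time.Duration;
import java.util.List;

// Illustrative sketch: change fail percentage and MTTR from a deployment log.
public class DeliveryMetrics {

    // One record per production deployment: did it fail, and (if so)
    // how long did it take to restore service?
    record Deployment(boolean failed, Duration timeToRestore) {}

    static double changeFailPercentage(List<Deployment> deployments) {
        long failures = deployments.stream().filter(Deployment::failed).count();
        return 100.0 * failures / deployments.size();
    }

    static Duration meanTimeToRestore(List<Deployment> deployments) {
        List<Deployment> failures =
                deployments.stream().filter(Deployment::failed).toList();
        long totalMinutes = failures.stream()
                .mapToLong(d -> d.timeToRestore().toMinutes()).sum();
        return Duration.ofMinutes(totalMinutes / failures.size());
    }

    public static void main(String[] args) {
        List<Deployment> history = List.of(
                new Deployment(false, Duration.ZERO),
                new Deployment(true, Duration.ofMinutes(30)),
                new Deployment(false, Duration.ZERO),
                new Deployment(true, Duration.ofMinutes(90)));
        System.out.println("Change fail %: " + changeFailPercentage(history)); // 50.0
        System.out.println("MTTR: " + meanTimeToRestore(history));             // PT1H
    }
}
```

Lead time and deployment frequency can be derived in the same way from commit and deployment timestamps.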

Ultimately, the best business metrics will depend on the actual organisational needs: Continuous delivery is focused on delivering value faster, which means that organisations need to identify what they (or their customers) value, and then define metrics that measure how much value is being delivered and track this over time.

InfoQ: How much autonomy should a Java development team have over their delivery system? Should they only have to worry about the actual pipeline definition and execution? Or actively participate in setting up and running the pipeline tooling and underlying infrastructure?

Bryant & Marín Pérez: This is potentially a contentious topic. Java developers need to be involved in the set-up and running of the pipeline because they clearly have a stake in this. For instance, they'll want to be able to update the JRE without much fuss, maybe by working with Docker containers and keeping control of the Dockerfile definition. However, expecting Java developers to control all aspects of the delivery pipeline can be overwhelming, both in terms of the necessary skills and the time that it takes to look after the pipeline.

At the other extreme we have organisations that create dedicated platform/DevOps/SRE teams that manage all aspects of the pipeline setup; this gives you a team of dedicated, skillful professionals and allows you to share the burden across teams (and sometimes the same infrastructure can be shared by an entire company), but bears the risk of this turning into a silo. The sweet spot is probably somewhere in the middle: DevOps engineers within the team or very closely aligned to the team, but part of a cross-team "guild" (to use Spotify nomenclature) where they can join efforts and share experiences. If you want to learn more about team design, and the impact this has on the delivery of software, then we recommend the excellent new book Team Topologies by Matthew Skelton and Manuel Pais.

InfoQ: In the book you cover different approaches in terms of the underlying CI/CD platform, from on-prem to cloud IaaS, PaaS, Kubernetes and even Serverless. What are some of the key factors teams should consider when selecting their platform of choice?

Bryant & Marín Pérez: The main factor that we've seen affecting this decision is appetite for vendor lock-in and the related desire to build and manage a team of platform specialists.

On one side of the spectrum, on-prem is the hardest (and probably most expensive) option to implement, but it's the one that gives you the most freedom: you have the ability to choose everything from the hardware and the OS, to the orchestration platform. You will typically need a large team of specialists to run this, though.

On the other side of the spectrum, modern "serverless" and function as a service (FaaS) platforms give you the least choice and make migrating away harder, but they provide a lot of core functionality "out of the box", allowing you to focus on your business without worrying too much about the scaffolding.

It is not uncommon for organisations to change their position on this matter as the organisation matures and their needs evolve. Running a hybrid platform is also increasingly the reality for large organisations.

InfoQ: When the "Continuous Delivery" book came out in 2010, automated deployments were not the norm yet. Do you feel like observability is also at that early adoption stage? And what are the implications of implementing observability from the perspective of the application’s pipeline, before it’s actually deployed live?

Bryant & Marín Pérez: We do see more and more organisations realising the importance of observability, but by being early adopters many of them are not getting as much benefit as they could from it (yet).

Observability is gaining relevance because it’s one of the basic steps to increase resiliency, which in turn is more and more a basic need of the modern economy. Users expect problems to be solved quickly, typically before they become noticeable, and for that you need to have deep, real-time knowledge of your system. Observability gives you that.

The main challenge we see companies facing when trying to implement observability within a continuous delivery pipeline is that oftentimes they don’t know what kind of operational metrics are relevant to their particular case until they experience live issues, so there is a lot of guesswork in place. This is not necessarily a bad place to start, but it needs a continuous review process to make sure the chosen metrics are relevant.
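A low-cost way to start, before the "right" metrics are known, is to instrument a few obvious signals and refine them as live issues teach you more. The sketch below is a deliberately naive, hand-rolled example of recording request latencies and reading off a percentile (all names are ours; in practice a team would use an established metrics library rather than code like this):

```java
import java.util.Arrays;

// Naive in-process latency recorder, for illustration only: record
// per-request durations and read back a percentile (e.g. p99).
public class LatencyTracker {
    private final long[] samplesMs;
    private int count;

    LatencyTracker(int capacity) {
        this.samplesMs = new long[capacity];
    }

    // Record one request's duration, dropping samples once full.
    void record(long millis) {
        if (count < samplesMs.length) {
            samplesMs[count++] = millis;
        }
    }

    // Nearest-rank percentile over the recorded samples (p in 0..100).
    long percentile(double p) {
        long[] sorted = Arrays.copyOf(samplesMs, count);
        Arrays.sort(sorted);
        int index = (int) Math.ceil(p / 100.0 * count) - 1;
        return sorted[Math.max(index, 0)];
    }
}
```

Even a signal this crude, reviewed after each incident, helps a team discover which operational metrics are actually relevant to their case.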

InfoQ: Finally, how do you see the evolution of Continuous Delivery in the Java world and what do you expect organizations future state-of-the-practice to be compared to today's?

Bryant & Marín Pérez: We believe the main changes that we will see will happen around the way that the business sees software development. Continuous delivery will make concepts like a "hotfix", or the dichotomy of "Project vs. BAU" obsolete. Change will become the norm, and businesses will begin to embrace it instead of resisting it.

In regard to Java, the subtle change that we will probably see is that organisations will adopt new versions of Java more quickly (adoption of newer versions has traditionally been Java’s Achilles' heel). The faster adoption of newer versions, together with the shorter release cycles that we are already seeing, will hopefully create a spiral of innovation that will take Java to new heights. It’s challenging to foresee all of the new ideas that will come out of this, but we’re really looking forward to it.

About the Book Authors

Abraham Marín Pérez is a Java and Scala developer with more than ten years of experience in industries ranging from finance to publishing to the public sector. He also helps run the London Java Community, and provides career advice at the Meet a Mentor London group. Abraham likes sharing his experiences with others, which has led him to author "Real-World Maintainable Software" (O'Reilly), and co-author "Continuous Delivery in Java" (O'Reilly). Currently based in London, Abraham likes going out on a hike whenever the English weather permits it, and cooking when it doesn't.

Daniel Bryant works as an Independent Technical Consultant and Product Architect at Datawire. His technical expertise focuses on 'DevOps' tooling, cloud/container platforms, and microservice implementations. Daniel is a Java Champion, and contributes to several open source projects. He also writes for InfoQ, O’Reilly, and TheNewStack, and regularly presents at international conferences such as OSCON, QCon and JavaOne. In his copious amounts of free time he enjoys running, reading and traveling.