Can you release new features to your customers every week? Every day? Every hour? Do new developers deploy code on their first day, or even during job interviews? Can you sleep soundly after a new hire’s deployment, knowing your applications are all running perfectly well? A rapid release cadence, backed by the processes, tools, and culture that support the safe and reliable operation of cloud-native applications, has become a key strategic advantage for software-driven organizations: shipping software faster with reduced risk. When you can release software more rapidly, you get a tighter feedback loop that lets you respond more effectively to the needs of customers.

Continuous delivery is why software is becoming cloud-native: shipping software faster shortens your feedback loop. DevOps is how we approach the cultural and technical changes required to fully implement a cloud-native strategy. Microservices is the software architecture pattern used most successfully to scale development and delivery operations and to avoid slow, risky, monolithic deployment strategies. It’s difficult to succeed, for example, with a microservices strategy when you haven’t established a “fail fast” and “automate first” DevOps culture.

Continuous delivery, DevOps, and microservices describe the why, how, and what of being cloud-native. These competitive advantages are quickly becoming the ante to play the software game. In the most advanced expression of these concepts they are intertwined to the point of being inseparable. This is what it means to be cloud-native.

What does cloud-native do for you?

A cloud-native approach to the software delivery lifecycle enables you to effectively operate and scale, achieving “agility”: the ability to quickly add new functionality to your software while remaining stable and secure in production. A cloud-native approach does this by fully automating the infrastructure, developer middleware, and backing services used to run your applications.

This approach goes beyond the ad-hoc automation built on top of traditional virtualization-oriented orchestration. A fully cloud-native architecture includes automation and orchestration that works for you instead of requiring you to write automations as custom recipes. It’s difficult to maintain ad-hoc, bespoke automation in a cloud-native environment. The built-in automated management serves as a contract that enforces policy and keeps promises. In other words, the automation makes it easy to build applications that can be automatically managed.

With a new infrastructure approach come new requirements for how software is developed. Developers must adopt a new set of architectural practices – such as microservices and containers – to ensure that their applications can be properly managed by the cloud platform. Not only does the speed of software development increase; there are operational benefits as well: portable application instances, consistent logging, and monitoring that keeps the applications up and the data flowing.

One way to explore the benefits of the cloud-native approach is to think in terms of a runtime contract. This is a set of guidelines for running software. Cloud-native frameworks help developers write applications which conform to a cloud platform’s runtime contract.

Cloud-native frameworks

One of the key insights about cloud-native applications is that they conform to a contract designed to maximize resilience through predictable behaviors. The highly automated, container-driven infrastructure used in cloud platforms drives the way software is written. Developers must change how they code, creating a new “contract” between developers and the infrastructure that their applications run on. A good example of such a “contract” is illustrated by the twelve principles first documented as the twelve-factor app.

Many of the twelve factors overlap and support each other. They aspire to be as direct and actionable as possible:

Deploy to multiple environments from one codebase – a single codebase, including production artifacts, ensures a single source of truth, leading to fewer configuration errors and more resilience.

Manage dependencies declaratively – a cloud platform will take these dependency declarations and properly manage them to ensure that the cloud applications always have the libraries and services needed.

Use configuration information stored in the environment – environment variables provide a clean, well-understood, standardized approach to configuration, especially for stateless applications written in a variety of programming languages.

Separate the build, release, and run stages – the build pipeline for cloud-native applications moves much of the release configuration to the “development” phase, where a release includes the actual built code and production configuration needed to run the application.

Run as stateless processes – the speed and cost efficiencies of cloud-native infrastructure are enabled by keeping each layer in the application stack as lightweight as possible.

Expose services with port binding – service interfaces in cloud-native applications strongly prefer HTTP-based APIs as the common integration framework.

Scale horizontally by adding stateless processes – an emphasis on a stateless, shared-nothing design means scaling can be achieved by relying on the underlying platform instead of clever, multi-threaded coding.

Start up fast and shut down gracefully – assume that any given process can come and go at any time; design for disposability so the platform can replace instances freely.

Run the same in development, staging, and production – because of the emphasis on automation and using the same cloud platform in each life-cycle stage, “works on my box” has more power if everyone is using the same “box.”

Log to standard output for aggregation and event response – when logging is handled by the cloud platform instead of libraries in the application, the application can simply write its event stream, unbuffered, to standard output like a simple utility.

Allow for ad-hoc tasks to be run as short-lived processes – in cloud-native approaches, management tasks become simply another process instead of specialized tools; equally important, they should behave as such and avoid using “secret” APIs and internals.
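Several of these factors can be seen working together in a minimal sketch of a cloud-ready process. The variable names (PORT, GREETING) are illustrative conventions, not part of any standard:

```python
# A minimal sketch of a process that follows several of the factors
# above: configuration from the environment, port binding, logging to
# standard output, and fast startup with graceful shutdown.
import os
import signal
import sys
from http.server import BaseHTTPRequestHandler, HTTPServer


def load_config(environ=os.environ):
    """Read all configuration from environment variables."""
    return {
        "port": int(environ.get("PORT", "8080")),
        "greeting": environ.get("GREETING", "hello"),
    }


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Expose the service over plain HTTP via port binding.
        body = load_config()["greeting"].encode()
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        # Write the event stream, unbuffered, to stdout and leave
        # aggregation to the platform.
        print(fmt % args, file=sys.stdout, flush=True)


def main():
    config = load_config()
    server = HTTPServer(("", config["port"]), Handler)
    # Exit promptly on SIGTERM so the platform can reschedule this
    # stateless process at any time (disposability).
    signal.signal(signal.SIGTERM, lambda *_: sys.exit(0))
    server.serve_forever()

# Call main() to serve; the platform injects PORT into the environment.
```

Because all state lives outside the process and all configuration arrives through the environment, any number of identical copies can be started or stopped by the platform without coordination.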

Following these principles results in applications with consistent architectural interfaces, built with a stateless, process-oriented design pattern for the purpose of creating distributed applications that are cloud-ready. Ruby on Rails revolutionized application frameworks with its opinionated, convention-over-configuration approach to web development. In the nine-and-a-half years since Rails’s first release we’ve learned as an industry the power of leveraging frameworks that conform to conventions, and in the world of cloud-native we should continue that trend.

Frameworks such as Spring Boot/Cloud and Dropwizard for Java, Seneca for Node.js, and even Ruby on Rails (with a few additional gems) conform nicely to the cloud-ready contract. They’ll save you time and let you focus on writing the critical business logic at the heart of your app rather than the glue code to make it work.

When your application conforms to a runtime contract it can be orchestrated, managed, and scaled by an elastic, cloud-native runtime.

Cloud-native runtimes

Containers have emerged as key components of a cloud runtime. Their lightweight nature and tight resource management align well with the cloud-ready application approach, adding speed and resource efficiency. Containers package a cloud-ready application into a single, executable artifact which can be made compliant with the cloud platform’s contract.

Just like any other process, many containers can run on each host machine (bare metal or virtual machines, it doesn’t matter). During development, a container-based approach to building your applications lets developers shorten the cycle time from writing code to producing a full build, even running developer-oriented clouds on their laptops that mimic the full-blown production cloud runtime. In production, containers provide key benefits: better security isolation between processes, stability, and predictable resource consumption for each running process. The next-level benefit is a greater ability to forecast infrastructure costs in response to growth.

To use containers effectively they must be orchestrated. Orchestration is a method to start, stop, and distribute containers across a pool of computational resources without manual human interaction or planning; an elastic runtime. This orchestration happens in response to deployment requests, traffic analysis for auto-scaling, and even infrastructure failure. Full container orchestration also serves the need to diagnose and roll back changes, as well as to manage different, experimental versions of your application in production for A/B testing and canary deployments. Packaging containers is just one part of what a cloud-native architecture needs; orchestrating and managing how those containers are deployed and behave in production matters even more.
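The core of such orchestration is a reconciliation loop: compare the desired state with what is actually running, and compute the actions needed to converge. Here is a toy illustration of that idea; the data model (app names mapped to instance counts) is invented for the sketch, not any real orchestrator’s API:

```python
# Toy reconciliation: diff desired instance counts against observed
# ones and emit the start/stop actions needed to converge.
def reconcile(desired, observed):
    """Return the actions needed to move `observed` toward `desired`.

    Both arguments map an app name to a number of running instances.
    """
    actions = []
    for app, want in desired.items():
        have = observed.get(app, 0)
        if have < want:
            actions.append(("start", app, want - have))
        elif have > want:
            actions.append(("stop", app, have - want))
    # Anything running that is no longer desired gets stopped entirely.
    for app, have in observed.items():
        if app not in desired:
            actions.append(("stop", app, have))
    return actions


# After a request scales "web" to 3 while one instance has died and
# "worker" has been removed from the desired state:
print(reconcile({"web": 3}, {"web": 1, "worker": 2}))
# → [('start', 'web', 2), ('stop', 'worker', 2)]
```

Running this loop continuously, against live infrastructure rather than in-memory dictionaries, is what lets the platform respond to failures and deployment requests without human planning.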

With the rise of the cloud-native frameworks approach outlined above, the attributes of container orchestration have received much attention of late. Here is what you need from a container runtime:

Managing the create, run, destroy lifecycle – tight control over the lifecycle of each container running in production helps you automatically scale your application to meet demand.

Predictable resource utilization through constraints – containers give you fine-grained control over the resources used by each instance.

Process isolation – likewise, containers keep processes siloed from one another using kernel-level namespaces and local file systems.

Optimized resource utilization through orchestration – given a pool of resources, often a collection of virtual machines, containers are distributed and managed to distribute the load across the entire pool.

Methods to diagnose and recover from failure – things will go wrong in production, and the orchestration platform should respond to critical failures automatically by removing misbehaving instances and infrastructure and rebalancing the load to avoid downtime.
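The last point can be sketched as a simple probe-and-replace step. The `probe` health check and the instance names are hypothetical stand-ins for what a real platform would supply:

```python
# Toy failure recovery: probe every instance, drop the ones that
# fail, and schedule replacements so capacity is restored.
def recover(instances, probe):
    """Return a new instance list with failed instances replaced."""
    healthy = [i for i in instances if probe(i)]
    failed = len(instances) - len(healthy)
    # A real orchestrator would start fresh instances elsewhere in
    # the resource pool; here we just name the replacements.
    replacements = [f"replacement-{n}" for n in range(failed)]
    return healthy + replacements


print(recover(["web-1", "web-2", "web-3"], probe=lambda i: i != "web-2"))
# → ['web-1', 'web-3', 'replacement-0']
```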

Cloud-native runtimes can be run on a wide variety of infrastructure and, through the use of APIs, are often agnostic about their infrastructure. Well-managed, automated infrastructure makes your cloud-native architecture more resilient.

Cloud-native infrastructure automation

Robust automation handles almost everything done by hand in traditional IT: updating routers and load balancers as application instances go up and down, provisioning and networking services for use by deployed apps, allocating new infrastructure, setting up monitoring and disaster recovery scenarios, log aggregation, and even redistributing workloads when infrastructure fails.

Advanced automation practices like these save you from the pain of zero-day vulnerability remediation: your automation ought to be able to do a rolling deploy to every node in your architecture, applying security patches with zero downtime.
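The rolling-deploy logic itself is simple; the discipline is in automating it end to end. A minimal sketch, where `patch` and `healthy` are hypothetical hooks your automation would supply:

```python
# Zero-downtime rolling update: patch nodes one at a time, verifying
# health before moving on, so capacity never drops by more than one
# node at once.
def rolling_update(nodes, patch, healthy):
    """Apply `patch` to each node in turn, halting on failure."""
    updated = []
    for node in nodes:
        patch(node)            # apply the security fix to this node
        if not healthy(node):  # halt before touching another node
            raise RuntimeError(f"update failed on {node!r}; rollout halted")
        updated.append(node)
    return updated
```

Because the rollout halts at the first unhealthy node, a bad patch takes out at most one node instead of the whole fleet.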

This level of automation is provided by what we can call a structured platform, which includes capabilities such as:

Backing services broker – most applications require external backing services such as databases, caching solutions, and message queues. These should be offered by the platform as high-availability services, configured through the environment to fulfill the twelve-factor contract for configuration.

Log aggregation – high availability, horizontally scaled applications need to have logs aggregated from all instances for analysis and fast response in the case of incidents.
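From the application’s point of view, a brokered backing service typically arrives as a connection URL in the environment, fulfilling the configuration factor mentioned above. The DATABASE_URL name used here is a common convention, assumed for illustration rather than mandated by any platform:

```python
# Sketch: derive database connection settings from a
# platform-injected environment variable.
import os
from urllib.parse import urlparse


def database_settings(environ=os.environ):
    """Parse a connection URL provided by the platform's broker."""
    url = urlparse(environ.get("DATABASE_URL", "postgres://localhost:5432/app"))
    return {
        "host": url.hostname,
        "port": url.port,
        "name": url.path.lstrip("/"),
    }
```

Because the binding lives in the environment, the same application artifact runs unchanged against a local database in development and a brokered, highly available one in production.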

Cloud-native infrastructure orchestration provides a structured platform down to the infrastructure. This is the layer that integrates with the underlying API; the foundational piece of the cloud-native architecture which allows the runtime orchestration to be installed, scaled, managed, and updated.

This is an overview of the high-level considerations for successfully delivering cloud-native applications. This is the path to driving down the fixed cost, in time and stress, of operations while accelerating the delivery of software. If the cost of deployment and operations is high, continuous delivery and microservices are untenable. The focus here was primarily on the capabilities of the tools, but the high-trust culture and process required to get there should not be underestimated. (One could even argue this is the most critical success factor, but that’s a much longer discussion.)

Becoming cloud-native

Those wanting to maximize the pace and benefits of continuous delivery will need an architecture that supports cloud-native applications and serves as the enabling technology for the entire software delivery life-cycle. The enabling constraints of applications built with cloud-native frameworks, contracted to cloud-native container runtimes, and backed by cloud-native infrastructure automation that can keep its promises, transform an organization’s capability to deliver software. That platform also includes the collection of practices and processes leveraged to realize the productivity promises of continuous delivery, agile development, and the DevOps movement, along with the availability and reliability of cloud infrastructure.

Becoming cloud-native enables stability and agility without sacrificing either. With a cloud-native architecture you can have the same power, flexibility, speed, and security that cloud-native companies like Netflix enjoy, which should finally give you time to catch up on all the My Little Pony: Friendship is Magic you’ve been missing.

Working in Internet infrastructure, web app security, and design taught Casey to be a paranoid, UX-oriented, problem-solving Internet plumber; his earliest contributions to Perl live to this day on your Mac. Casey’s speaking and writing ranges from open source communities and culture to technical architecture and automation tips and tricks. Casey West wears the mantle of Principal Technologist focused on Pivotal’s Cloud Foundry Platform and lives in Pittsburgh raising three sarcastic children.