Chaos engineering, pioneered by Netflix with its Chaos Monkey tool, is a technique that looks for different and often drastic ways to break an application. The goal is to ensure that anything that can go wrong in production is tested and evaluated, usually before application deployment but sometimes in production too.

The problem is that testing today's web applications is far more complex than simply developing tests from requirements and running them in the lab, in a controlled and measured environment. There are so many moving parts to an application that traditional testing can't cover all of the possible use cases.

Further, the deployment environment today is far more complex than it has ever been, often involving multiple cloud data centers, different Internet segments, and different traffic routings. Throw in IoT devices, and there is a practically infinite number of possible combinations in the production environment.

Why are we trying to break our applications?

The computing world is a different place than a decade ago. Most applications are web-based, and/or delivered from the cloud. They include many third-party components, such as open source libraries, purchased code, and third-party services such as advertisements. And they frequently change, as agile project updates add new features or address issues.

And network services remain relatively fragile, especially in comparison to the internal data center. Data from dozens of servers and many users travels thousands of miles, across an almost infinite number of possible routes, directed by DNS address tables.

Chaos engineering follows a disciplined engineering process: define the normal behavior of a system, develop an experimental scenario (such as shutting down a server or breaking a network connection), then carry out that scenario and compare the resulting behavior to the baseline. Has performance changed? Have we lost availability? Is the application accessible from different parts of the world? Does the application still work, but lack certain essential features?
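That loop of baseline, inject, observe, restore, and compare can be sketched in a few lines of Python. This is a minimal illustration, not any particular chaos tool: the `probe`, `inject_fault`, and `restore` callables, the simulated service, and the 2x latency tolerance are all assumptions chosen for the example.

```python
import statistics

def run_experiment(probe, inject_fault, restore, samples=5, tolerance=2.0):
    """Minimal chaos-experiment loop: baseline, inject, observe, restore, compare."""
    baseline = statistics.median(probe() for _ in range(samples))  # normal behavior
    inject_fault()                                                 # run the scenario
    try:
        degraded = statistics.median(probe() for _ in range(samples))
    finally:
        restore()                                                  # always roll the fault back
    return baseline, degraded, degraded <= baseline * tolerance

# Simulated service whose latency quadruples when a replica is "down".
state = {"replica_up": True}
probe = lambda: 0.05 if state["replica_up"] else 0.20  # response time in seconds
baseline, degraded, passed = run_experiment(
    probe,
    inject_fault=lambda: state.update(replica_up=False),
    restore=lambda: state.update(replica_up=True),
)
# Here degraded (0.20s) exceeds 2x the baseline (0.05s), so the experiment fails.
```

Real tools replace the probe with production telemetry and the fault injector with actions such as terminating instances, but the structure of the experiment stays the same.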

Building on application testing

Chaos engineering is important because today's applications and features cannot be tested to any reasonable extent in a lab before deployment. It's not possible to accurately replicate worldwide usage, multiple DNS services, and third-party services before deployment. Testers do the best they can, but lab testing is by necessity limited and ultimately unrepresentative of production use.

What does this have to do with testing? Many of the most competent testers I've met get at least some motivation from breaking things, and chaos engineering gives them more options to do so. And the problem is far more complex than it was a decade ago.

So yes, chaos engineering is an emerging skill set for testers. Think of it as extreme exploratory testing that involves not only the application but also the operating systems, servers (local or in the cloud), databases, and network. Testers have a wide range of tools available to examine not only parts of the application but also the delivery environment.

It is a disciplined approach, but it can also be enjoyable and professionally fulfilling. Testers get the opportunity to help ensure that an application achieves very high uptime and is relatively well protected against attacks and other disruptive events.

Any application can be broken, whether through the code, the network, the cloud provider, or the hardware. The question is what type of event, or combination of events, will make it happen. And when it does, the application doesn't necessarily come back to the lab. Instead, it's diagnosed and at least mitigated in real time, before it starts costing the organization money or reputation.

Testers may argue that organizations have no control over performance and availability on the Internet, so this is outside the purview of testing. But organizations do have control: they can use multiple hosting providers around the world, secondary DNS services, and intelligent traffic routing. It is our responsibility as guardians of the organization's reputation to broaden our own and our organizations' point of view.
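The failover idea behind multiple hosting providers can be sketched as a client that tries a list of mirrors in order and takes the first one that answers. The mirror URLs and the `fake_fetch` stand-in below are hypothetical, purely for illustration; real deployments would pair this with DNS-level or load-balancer failover.

```python
# Hypothetical mirror list -- illustrative hostnames, not real endpoints.
MIRRORS = [
    "https://us-east.example.com/health",
    "https://eu-west.example.com/health",
    "https://ap-south.example.com/health",
]

def fetch_with_failover(urls, fetch):
    """Try each mirror in turn; return the first successful response."""
    last_error = None
    for url in urls:
        try:
            return url, fetch(url)
        except OSError as err:
            last_error = err  # this mirror is unreachable; try the next one
    raise ConnectionError(f"all mirrors failed: {last_error}")

# Simulated fetch function: only the EU mirror is reachable.
def fake_fetch(url):
    if "eu-west" in url:
        return "200 OK"
    raise OSError("connection refused")

winner, body = fetch_with_failover(MIRRORS, fake_fetch)
# winner is the eu-west mirror; body is "200 OK"
```

A chaos experiment against this design would deliberately take down the primary mirror and verify that clients still get answers, at acceptable latency, from the survivors.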

Testers are used to focusing on the application and its code in a vacuum. That approach is changing, but not fast enough. Testers need to look at complex interactions between code, services, network, and provider. Future tests have to include the ability to take all of these factors into account, even breaking some of them through the practice of chaos engineering.
