Abstract:

By now you have likely heard about DevOps. It’s quickly gaining adoption. But what is it? And why should you care? DevOps is all about creating a culture of high collaboration between development and operations with a goal of optimizing the entire software delivery pipeline—from code commit to features running in production. This enables organizations to deliver value into production faster and at a lower cost—even enabling multiple production deployments per day. Imagine the competitive advantage gained by delivering new features in hours or days rather than weeks, months or quarters.

This talk will show how DevOps improves agility by optimizing the delivery pipeline. We’ll look at common patterns and anti-patterns. We’ll see the kind of tools needed to automate and manage the ever increasing number of servers and applications modern organizations need. We’ll also discuss the benefits and costs of adopting a DevOps culture.

Here is a taste of some of the things we will discuss:

Get ops involved up front rather than at the end, so deployment and monitoring issues are found early and rework is reduced.

Treat infrastructure as code so it is automated, repeatable, and under version control.

Ensure your development and test environments are identical to production (or as close as possible). This helps catch issues sooner rather than in production.

Deploy more frequently so you are dealing with a smaller batch of changes. This is easier to manage, and less likely to fail.

If you struggle with deployments, or your ops team is constantly fighting fires and drowning in unplanned work, this talk is for you. Come see how DevOps can improve the agility of your organization.

On Friday I gave this presentation on Microservices at the Keep Austin Agile 2015 conference in Austin, TX. Below you will find the video and slides as well.

I presented Microservices as a solution that solves some very difficult problems, but it does so by trading those problems for other problems that are easier to solve. To use Microservices, your organization has to be mature enough to solve the problems Microservices introduce. It’s not a “free lunch,” as Benjamin Wootten would say. I cover several patterns and practices that help in solving these problems, as well as common anti-patterns and pitfalls that I have fallen into myself.

In this series, I want to touch on one of the biggest traps people fall into with test automation: writing too many high-level tests. I have made this painful mistake, struggled with constant test failures, and spent many hours troubleshooting things that weren’t even problems in production. They were just bad (flaky) tests. I finally found my way out of that mess, and hopefully I can help you do the same.

For the beginners here, I’ll start with what levels you can write tests at and why lower level tests are more valuable. I’ll show you why testing through the UI layer is so painful, and how to push higher level tests down.

I’ll try to keep each post short and to the point. Without further ado, here is part 1.

Levels of Testing

There are many different ways to test your code, but they can all be boiled down into three main categories or levels.

FYI: The names of the levels may differ depending on who you are talking to, but the underlying concepts are the same. It seems no one can agree on the best names for these things.

Unit Tests

Unit tests are the lowest level tests. They are very focused, usually covering only a few lines of code. They run completely in memory and never access external resources like the disk or network, so they are very fast. Because of this, you can run them often and get very fast feedback on errors. I run mine every few minutes. When they fail, I can usually just hit CTRL-Z to undo my last few changes and they are passing again. No need to debug!

Even when they fail later (when I can’t just hit CTRL-Z), the problem is usually obvious. Each test covers only a few lines of code, so I don’t have to look far for the problem. For the same reason, there are only a few things that could actually cause the test to fail, so unit tests don’t fail that often.

Unit Tests are very low cost. Easy to write. Easy to maintain.
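To make this concrete, here is a minimal sketch of a unit test. The function under test (`apply_discount`) is hypothetical, not from the post; the point is that everything runs in memory, touches nothing external, and a failure points at just a few lines of code.

```python
def apply_discount(price, percent):
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    # Pure in-memory test: no disk, no network, runs in microseconds.
    # Only a few lines of behavior are exercised, so a failure points
    # straight at apply_discount.
    assert apply_discount(100.0, 10) == 90.0
    assert apply_discount(19.99, 0) == 19.99

test_apply_discount()
```

A whole suite of tests like this can run every few minutes without slowing you down.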

Integration Tests

Integration tests validate how multiple classes interact with each other and with the environment (the disk, network, databases, etc.). These tests are inherently slower than unit tests, so you can’t run them as often. This means it may take longer before you realize you have introduced an error.

Plus, since they cover much more code, there are more reasons these tests can fail, so they tend to fail more often than unit tests. And when they do, sifting through more code means it takes longer to figure out where the problem is.

Integration tests take more effort to build and maintain than unit tests. They are harder to debug, and take longer to identify issues.
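As a sketch of the difference, here is an integration-style test that exercises a real database file on disk. The `save_user` function and schema are made up for illustration; the point is that real I/O makes the test slower and gives it more ways to fail than a unit test.

```python
import os
import sqlite3
import tempfile

def save_user(conn, name):
    """Hypothetical data-access code under test."""
    conn.execute("INSERT INTO users (name) VALUES (?)", (name,))
    conn.commit()

def test_save_user_roundtrip():
    # Integration test: touches a real SQLite database file on disk,
    # so disk speed, permissions, and schema issues can all cause failures.
    path = os.path.join(tempfile.mkdtemp(), "test.db")
    conn = sqlite3.connect(path)
    conn.execute("CREATE TABLE users (name TEXT)")
    save_user(conn, "alice")
    rows = conn.execute("SELECT name FROM users").fetchall()
    conn.close()
    assert rows == [("alice",)]

test_save_user_roundtrip()
```

Even this tiny example crosses a process boundary (the filesystem), which is exactly what makes these tests slower and harder to diagnose when they break.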

UI or End-to-End Tests

These tests are sometimes called “functional” tests as well. They test the fully deployed system as a black box, just like a user would use it. These usually interact directly with the UI.

These tests exercise all the code in the system, from the UI down to the database. They also exercise any third-party resources and external systems. So if anything in this chain breaks, the test fails. And because so many things can cause a failure, it’s often very hard to determine what actually broke. I often find myself sifting through log files and logging in to remote servers to figure out what the heck happened. Not fun.

These tests are also the slowest to run, so they aren’t run very often. When you introduce an error, it may be a long time before you realize it. By then you have moved on to something else, so it takes additional effort to get your head back around the problem before you can debug it.

These tests are brittle, and very difficult to maintain. They have the highest cost of all the types of tests you can write. They do have value, but it comes with a cost.
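To illustrate the black-box idea without a real browser, here is a toy end-to-end test. The tiny HTTP server stands in for a deployed system (it is not from the post); the test only talks to it over the network, the way a user or browser would, and knows nothing about its internals.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    # Stand-in for a fully deployed application.
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"hello")

    def log_message(self, *args):
        pass  # keep test output quiet

def test_end_to_end():
    # Black-box test: we only interact over the network, so a failure
    # could be the app, the server, the network stack, or the test itself.
    server = HTTPServer(("127.0.0.1", 0), Handler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    body = urllib.request.urlopen(
        f"http://127.0.0.1:{server.server_port}/"
    ).read()
    server.shutdown()
    assert body == b"hello"

test_end_to_end()
```

Notice how many moving parts even this trivial version has: a server, a thread, a socket. A real UI test adds a browser, a deployed environment, and every external system in between, which is where the brittleness comes from.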

The Obvious Conclusion

So, given what you just read, where is the most valuable place to focus your testing efforts? Yeah, I don’t even need to give you the answer. It’s pretty self-evident:

Always test at the lowest level possible.

When you have tests at a high level, it’s best to “push them down” as far as you can. That’s what this series is all about.

So, take a look at your tests. Where have you focused all your efforts? Are you fighting to keep the tests running? Is there any correlation between the two? If so, stay tuned, we’ll look at ways to fix this.

Abstract

It’s easy to write tests, but it’s not so easy to maintain them over time. Tests should not be a drain on your productivity, they should enhance it. However, many teams struggle just to keep their tests running—sacrificing time they could be spending developing valuable new features. How can we avoid these pitfalls? What practices and principles are effective? Which ones lead to productivity drain? This talk seeks to answer those questions and more by separating the effective practices and principles from the ineffective ones.

Topics covered:

Where should we focus testing efforts? Unit Tests, Integration Tests or UI Tests?

How can we best use mocks and stubs without creating fragile tests?

How can we instill a culture of testing?

How should we handle test data in our tests?

How can I use Fluent Data Builders and Anonymous Data to simplify testing?

TL;DR – You can tell WebDriver to automatically ignore untrusted SSL certificates on Firefox by setting the “webdriver_assume_untrusted_issuer” preference to false in the Firefox profile.

We recently ran into an issue where our tests were failing because Firefox was showing the “This Connection is Untrusted” window. Firefox was complaining that our SSL certificate was not from a trusted source (this happens when you use self-signed certs for development).

Googling the issue brought up a lot of solutions for Java, but none that worked for C#. We called it a night, and by the next morning my coworkers Jason Bilyeu and Carl Cornett had solved the issue. They found that if you set the “webdriver_assume_untrusted_issuer” preference to false in the Firefox profile, WebDriver will ignore the cert.
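The fix described above was done in C#; the same preference can be set from Selenium’s legacy Python bindings as a rough equivalent. This is a sketch: `webdriver.FirefoxProfile` is the older profile API, and the URL is a hypothetical dev site with a self-signed cert.

```python
from selenium import webdriver  # assumes the legacy Selenium Python bindings

# Build a Firefox profile that tells WebDriver not to treat our
# self-signed certificate's issuer as untrusted.
profile = webdriver.FirefoxProfile()
profile.set_preference("webdriver_assume_untrusted_issuer", False)
profile.accept_untrusted_certs = True  # also accept the untrusted cert itself

driver = webdriver.Firefox(firefox_profile=profile)
driver.get("https://dev.example.local/")  # hypothetical dev URL with a self-signed cert
```

In the C# bindings, the equivalent call is `FirefoxProfile.SetPreference` before constructing the `FirefoxDriver`.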