31 Days of Testing—Day 19: Refactoring a “Monster” Functional Test, Part 1

Folks new to functional test automation often dive straight off the deep end when getting started. I’ve seen plenty of “my first test!” examples where the test scripts/files/fixtures are hundreds of steps long and cover many different cases. They’ll start out with one script that starts the browser, loads the site, logs on as an administrative user, creates some users and test data, logs on in a different role, takes some actions using the setup data, does some validations, then logs back on as the administrative user and cleans up the data created for the test.

A good automation test fixture/case/script is concise and focused on testing one specific thing. Data setup should be handled either in helper scripts or pushed off to a backing framework that calls stored procedures, web services, or internal APIs to create the setup data. Good cases generally shouldn’t switch between roles; that sort of configuration or prerequisite action should likewise be handled by helpers or APIs.
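The "push setup to an API" idea might look something like the sketch below. Everything here is an assumption for illustration: the `/contacts` endpoint, the payload shape, and the `create_contact` helper are hypothetical, not the demo app's real API. The `send` parameter is injectable purely so the helper can be exercised without a live server.

```python
# Hedged sketch: create test data through a backing API instead of driving
# the UI. The endpoint and payload shape are assumptions, not the demo
# app's actual API.
import json
from urllib import request


def create_contact(api_base, first, last, email, send=None):
    """Create a contact through an internal API so UI tests skip the setup clicks.

    `send` is injectable so the helper can be tested without a live server.
    """
    payload = {"first_name": first, "last_name": last, "email": email}
    req = request.Request(
        f"{api_base}/contacts",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    sender = send if send is not None else request.urlopen
    return sender(req)
```

A UI test can then call `create_contact(...)` in its setup and spend its actual steps only on the behavior under test.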

These “monster” automation scripts suffer from exactly the same maintainability and functionality issues that bad code in the system’s codebase does: high complexity, mixed concerns, duplication of functionality, etc. Monster scripts are nothing but a recipe for pain—regardless of the good intentions by the folks who created them in the first place.

Over the next few blog posts I’m going to take an example of one of these monster scripts and break it apart into a better set of more focused test cases and support modules. I’ve had to do this more than once with scripts written by customers, contacts, pals, and me.

Watch out! Before you dive into this kind of refactoring, step back and ask whether it’s a worthwhile use of your time. You may well be better off just rewriting the dang thing. I’m rolling with it here because it’s an interesting exercise and I hope you’ll find it useful.

For this set of posts I’ll be using the demo app my group at Telerik built to help us demonstrate specific testing problem areas like AJAX, multiple windows, etc. You can find the demo app hosted at Heroku. Logon credentials are testuser/abc123 if you feel like playing around a bit.

This example is using Test Studio, but the concepts are absolutely the same regardless of what testing framework/tool you’re using.

Let’s start off by discussing what the intended test is: validating that a newly created user can successfully be retrieved from the system. We might express that test case in this form:

Given I have created a new user with input data of <a list here>

When I click on that user in the main grid

Then I should see the new user displayed with their correct data.

I built a single script to run this test soup to nuts – exactly the sort of “monster” script I talked about above. In Test Studio this iteration of the test has 32 steps (33 lines below because of a continuation). I did a little transform-fu of the test file to come up with a readable listing of the steps:

1: Navigate to : 'http://localhost:3000/'

2: Click 'LoginLinkLink'

3: Set 'UsernameText' text to 'testuser'

4: Set 'PasswordPassword' text to 'abc123'

5: Click 'LoginButtonSubmit'

6: Click 'NewContactLink'

7: Connect to pop-up window : 'http://localhost:3000/contacts/new'

8: Set 'ContactFirstNameText' text to 'New'

9: Set 'ContactLastNameText' text to 'User'

10: Set 'ContactEmailEmail' text to 'new.user@foo.com'

11: Set 'ContactLinkedinProfileText' text to 'http://linkedin.com/newuser'

The test logs on (steps 1-5), creates a new user (6-16), verifies the user was correctly created on the main grid (17-21), figures out which link on the main grid to click to open the new user in the edit contact form (22-23), opens the new user in the edit contact form (24-25), then validates all the new user’s info is as expected (26-31). Finally, we close out any new browser windows that were spawned.

Things would look very similar if I were using Watir, Selenium, or some other testing tool. The syntax would obviously be different, but stringing together huge numbers of steps in a functional test is a common pattern, and no framework or tool automatically fixes this issue for you…

Here’s a video of the test in action:

Before diving in to clean this mess up, let’s take a step back and figure out some goals to get us back in line with how a good script should work:

No duplication, or at least minimal duplication carefully selected for clarity

Minimize navigation or unneeded steps

Careful bullet-proofing of the tests

Before I do anything else, I’ll ask myself: is this test worth working on? Answer: yes. First, because if it weren’t, I’d have to figure out something else to write this blog post on. Second, this test contains plenty of valuable pieces that can be moved out to other reusable steps.

After a bit of planning, here’s how I’m going to run with this:

Move the steps for logging on to a separate test (define the action once, reuse it as needed)

Move the steps for creating a new user out to an external API call

Let’s focus on the first thing: modularizing the login action. It’s easy to separate out that action. In a code-based framework I’d move that off to a new Page Object. In Test Studio I’m moving it off to its own test.
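Here's roughly what that Page Object might look like in a code-based framework. The element names come from the step listing above, but the driver's `find`/`type`/`click` calls are an assumed generic interface, not any particular tool's API.

```python
# A minimal Page Object sketch for the logon action. Element names match
# the monster script's step listing; the driver interface is an assumption.
class LoginPage:
    def __init__(self, driver):
        self.driver = driver

    def log_on(self, username, password):
        # Encapsulate the five logon steps behind one reusable call
        self.driver.find("LoginLinkLink").click()
        self.driver.find("UsernameText").type(username)
        self.driver.find("PasswordPassword").type(password)
        self.driver.find("LoginButtonSubmit").click()
```

Every test that needs an authenticated session now calls `LoginPage(driver).log_on(...)` instead of repeating those five steps, so a change to the login form means updating one place.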

One additional step here. Generally I believe in avoiding conditionals in tests; a conditional usually indicates you’re looking at two different tests. However, long experience making a logon function modular has made me a believer in first checking whether you actually need to log on. You may already be logged on, in which case blindly running the logon steps would fail and break your script.

Do this by first checking whether the login link appears. If that’s there, log on. Otherwise, take no action.
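That guard might be sketched like this. The `is_present` call stands in for whatever element-existence check your tool provides (Test Studio, Selenium, and Watir all offer some form of it); it and the element names are assumptions here, not a specific API.

```python
# Sketch of the "log on only if needed" guard. `is_present` is an assumed
# stand-in for your tool's element-existence check.
def ensure_logged_on(driver, username, password):
    """Log on only when the login link is visible; otherwise do nothing."""
    if not driver.is_present("LoginLinkLink"):
        return False  # already logged on, take no action
    driver.find("LoginLinkLink").click()
    driver.find("UsernameText").type(username)
    driver.find("PasswordPassword").type(password)
    driver.find("LoginButtonSubmit").click()
    return True
```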

