Nightmare of End-to-End Testing

28 September 2016

Modern applications grow complex, and we cannot go without automated testing. The canonical agile testing quadrants are split into technology-facing and business-facing tests. As for technology-facing testing, I believe nowadays everybody has dealt with unit tests: with them we make sure that the smallest parts of the system act as intended in isolation. We also use component tests to verify the behavior of larger parts of the system, and integration tests to check that communication between objects isn't broken. The entire quadrant is about low-level programmer tests proving that the code meets the design requirements. These tests are meant to control internal quality, to minimize technical debt, and to inform dev-team members of a problem at an early stage.

Business-facing (or acceptance) tests describe the system in non-programming terms and ignore its component architecture. Here we find testing techniques such as functional tests, story-based tests, prototypes, and simulations. In web development the most in-demand approach is apparently end-to-end testing. It is performed at the application level and tests whether the business requirements are met regardless of the app's internal architecture, dependencies, data integrity, and such. In effect, we make the test runner follow the end-user flows and assert that the user gets the intended experience.

In order to perform end-to-end testing we need an execution environment (a browser automation library) and a testing framework. Today the most popular way to get an in-browser testing API seems to be Selenium WebDriver, with a whole family of related frameworks: WebdriverIO, WebDriverJs, wd, and Nightwatch.js. Personally, I'm not a big fan of WebDriver. Debugging and tracing what is happening in the browser during a WebDriver test is quite problematic, one cannot run the tests until the local server is started, and the scraping process is considerably slow.

As an alternative, one can take a look at Zombie.js and Casper.js. Both are testing frameworks using headless browsers: Zombie.js has its own browser, while Casper.js supports PhantomJS (WebKit) and SlimerJS (Gecko). While fiddling with Zombie.js I found it interesting in general, but too verbose when it comes to real test suites. Casper.js provides its own execution context; I would prefer to stay with Node.js, as I do, for example, with Mocha.

Another framework, DalekJS, allows you to run tests under PhantomJS or in a real browser. I was quite impressed by it a year ago, but I gave up waiting for a stable release.

In the Angular community the undoubted leader is Protractor, though it's too Angular-centric for my taste.

I've been looking for a solution that is easy to use and to debug, and after all the research I settled on an automation library with the unsavory name Nightmare.js. Its approach struck me with its simplicity: instead of wrapping WebDriver or PhantomJS, it simply runs an instance of Electron. This gives us a headless browser with an extensive browser API. Nightmare.js is framework-agnostic, so I can use it with Mocha and with an assertion library of my choice. What I like most is that you can enable a mode where Nightmare.js shows you whatever is happening in the browser. Besides, you can pause the test and examine the browser window with DevTools.

Yeah, it doesn't allow you to run tests in real browsers, but that's something I can sacrifice in my case.

Here we obtain references to the Nightmare and chai.expect libraries. We set the BASE_URL constant to the endpoint addressed by the tests. I also suggest having a common handler, onError, for possible testing errors. Eventually we create an instance of Nightmare. In the options we ask Nightmare to show the browser during testing, set the in-input typing interval to 20ms, and set the wait polling interval to 50ms.
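The set-up described above might look like the following sketch (the BASE_URL value and onError body are illustrative assumptions; the option names are Nightmare's real ones):

```javascript
// tests/end-to-end/todo.spec.js — shared set-up (a sketch)
const Nightmare = require( "nightmare" ),
      expect = require( "chai" ).expect,
      // endpoint the app under test is served from (assumed value)
      BASE_URL = "http://localhost:8080/",
      // common handler for any error raised during a test
      onError = ( err ) => {
        console.error( "Test failed:", err );
        throw err;
      };

const browser = Nightmare({
  show: true,        // display the browser window while testing
  typeInterval: 20,  // ms between emulated keystrokes
  pollInterval: 50   // ms between .wait() condition checks
});
```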

Here we extend the default timeout, as end-to-end tests may take much longer than unit tests. I prefer to clean up any possible products of previous tests during set-up rather than tear-down. This way we start with a blank list even if the spec broke without ever reaching the after() method. So within the before() method we open our page under test and evaluate JavaScript that cleans up the localStorage where the app stores user input.
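A sketch of that set-up phase, assuming the shared `browser`, `BASE_URL`, and `onError` from the initialization above, and assuming the app keeps its todos in localStorage:

```javascript
describe( "Todo app", function(){
  // end-to-end flows are slow; extend Mocha's default 2s timeout
  this.timeout( 15000 );

  before( ( done ) => {
    browser
      .goto( BASE_URL )
      // wipe any items left behind by previous runs
      .evaluate( () => {
        localStorage.clear();
      })
      .then( () => done() )
      .catch( onError );
  });

  // ...specs go here
});
```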

Here we test whether a new item can be added to the list. First we refresh the page (after localStorage was cleaned) and wait for the .new-todo element to become available in the DOM; it is supposed to be there as soon as the page is loaded. Then we emulate the user typing "watch GoT" into the input and pressing Enter. Now the test has to wait until an item appears in the list, so Nightmare will poll every 50ms (as specified in the pollInterval option) until the condition is met. When the list is rendered we can evaluate JavaScript querying for list items and assert that exactly one item was added.
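This spec could be sketched as below; the selectors follow the common TodoMVC markup, and pressing Enter is emulated by typing the carriage-return character:

```javascript
it( "adds a new item to the list", ( done ) => {
  browser
    .goto( BASE_URL )                  // refresh after localStorage was cleaned
    .wait( ".new-todo" )               // input is there as soon as the page loads
    .type( ".new-todo", "watch GoT" )  // emulate user typing
    .type( ".new-todo", "\u000d" )     // emulate pressing Enter
    .wait( ".todo-list li" )           // poll every 50ms until an item appears
    .evaluate( () => {
      // runs in the page context; count the rendered list items
      return document.querySelectorAll( ".todo-list li" ).length;
    })
    .then( ( itemCount ) => {
      expect( itemCount ).to.equal( 1 );
      done();
    })
    .catch( onError );
});
```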

As we run the test

mocha tests/end-to-end/todo.spec.js

a browser window pops up showing the bot typing into the input as we described. When we finish writing the tests we disable it with the Nightmare initialization option show: false. Besides, Mocha outputs the test results.

Here we test whether the added item can be removed. We emulate a click on the remove button of the first list item, then wait until the .main section gets hidden (that's from the app specification). For that we use a callback function: Nightmare will poll it every 50ms until it returns a truthy value. Then we can assert that the list is empty.
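A sketch of this spec, again with assumed TodoMVC selectors (`.destroy` for the remove button); note that `.wait()` accepts a callback evaluated in the page context:

```javascript
it( "removes the item from the list", ( done ) => {
  browser
    .click( ".todo-list li .destroy" )  // remove button of the first item
    // per the app spec, .main gets hidden when the list is empty;
    // Nightmare polls this callback every 50ms until it returns truthy
    .wait( () => {
      const main = document.querySelector( ".main" );
      return !main || main.style.display === "none";
    })
    .evaluate( () => {
      return document.querySelectorAll( ".todo-list li" ).length;
    })
    .then( ( itemCount ) => {
      expect( itemCount ).to.equal( 0 );
      done();
    })
    .catch( onError );
});
```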

Tricks

As I already mentioned, whenever anything goes wrong with a test we can examine the page at the exact break point with DevTools. We just need to enable it during Nightmare initialization:
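Nightmare passes the openDevTools option through to Electron; a detached DevTools window keeps it out of the page under test (show must stay enabled):

```javascript
const browser = Nightmare({
  show: true,
  openDevTools: {
    mode: "detach"  // open DevTools in a separate window
  }
});
```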

In the body of a test we can stop the flow with .wait( 600000 ) (do not forget to extend the timeout accordingly) and use DevTools during the pause.

In a real application we have flows like submitting a form or updating a component. When testing, we need to know when the page reload or component re-rendering really happens. As for page readiness, we can listen from a wait() callback for the next page-loaded event. In order to know when a component gets updated, within the app I increment the value of the data-rev attribute on the component's bounding element with every rendering. Therefore I can watch from a wait() callback for the next number on it.
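The data-rev trick could be sketched like this (the `.todo-list` selector and `.toggle` action are illustrative; `.wait()` forwards extra arguments into the page-context callback):

```javascript
// read the current revision the app stamped on the component's bounding element
browser
  .evaluate( () => {
    return Number( document.querySelector( ".todo-list" ).dataset.rev );
  })
  .then( ( rev ) => {
    return browser
      .click( ".toggle" )  // an action that triggers re-rendering
      // poll until the component re-rendered, i.e. data-rev was incremented
      .wait( ( oldRev ) => {
        return Number(
          document.querySelector( ".todo-list" ).dataset.rev ) > oldRev;
      }, rev );
  })
  .then( () => {
    // safe to assert against the freshly rendered DOM here
  })
  .catch( onError );
```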


Who's the dude?

Dmitry Sheiko is a web-developer living and working in Frankfurt am Main, DE