Automated CSS Regression Testing in Practice

17 August 2017

We write unit tests for server-side code and for JavaScript. Even putting aside the benefits of the TDD approach, testing gives us a priceless thing: early problem reporting during refactoring. You make a change, you run the tests, and you know at once if anything broke. What about CSS? You encapsulate a declaration set into a separate rule, look through the site pages where the dependent classes are used and, satisfied, proceed with refactoring. After finishing your work you test the site thoroughly, opening every page, every modal, drop-down and expandable. Only now do you find out that your very first change broke the styles of a component that shows only on user action, so you missed it back then. It turns out the refactoring decision wasn't that good after all. But it's too late to change it.

If only we could go with automated testing... In fact, we can. We can use a tool that visits all the specified project URLs in a headless browser with a predefined list of viewports, takes screenshots and reports if any of them don't match their earlier saved versions. This is called regression testing, as we use it to make sure that the existing, green-lighted state of the product hasn't changed. There are plenty of tools for that; the one we'll take is PhantomCSS. It requires the CasperJS testing framework, so first we need to install the framework:

npm install -g casperjs
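Note that CasperJS itself drives PhantomJS, so the headless browser has to be present on the machine as well. If it isn't already installed, it can be pulled in via npm too (the `phantomjs-prebuilt` package wraps the binary):

```shell
# CasperJS runs on top of PhantomJS; install the headless browser itself
npm install -g phantomjs-prebuilt
```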

Now we can install PhantomCSS:

npm i phantomcss --save-dev

Note: In my case, for some reason, it didn't install the ResembleJS dependency. So I simply entered ./node_modules/phantomcss and ran there explicitly:

npm i resemblejs

We are good to go, but before writing any tests, consider that your site likely has dynamic content. Any change of that content will make PhantomCSS fail the tests: the tool simply compares screenshots and reports if the difference goes beyond the configured mismatch tolerance. What to do? My solution is to keep a set of style-guide pages accessible only on the dev environment. These static HTML pages contain all the styled components of the project, filled in with placeholder images and “lorem ipsum” text. Honestly, this isn't only good for automated CSS testing; it's also a fine way to improve the reusability of your CSS. If all of your components lay out gracefully outside their usual contexts, they are well portable and you can simply take any of them and copy it to a new place.

Let's say we have a style guide for modals (http://localhost/stylguide/pmodal):

We assume that the page in its current state looks precisely as intended. So we want to test that, with every refactoring iteration, the page still looks the same.
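A test suite for that could look like the minimal sketch below, following the API shown in the PhantomCSS README. The file name test/styleguide.js, the viewport size and the .modal selector are assumptions made for this example; the URL is the style-guide page from above:

```javascript
// test/styleguide.js — a sketch of a CasperJS/PhantomCSS suite
// (run in a CasperJS environment with: casperjs test test/styleguide.js)
var phantomcss = require('phantomcss');

casper.test.begin('Modal style guide looks as intended', function (test) {
  phantomcss.init({
    screenshotRoot: './screenshots',     // baseline ("pattern") screenshots
    failedComparisonsRoot: './failures'  // visual diffs on mismatch
  });

  casper.start('http://localhost/stylguide/pmodal');
  casper.viewport(1280, 800); // one of the predefined viewports

  casper.then(function () {
    // on the first run this creates the pattern screenshot;
    // on subsequent runs it is compared against that pattern
    phantomcss.screenshot('.modal', 'modal');
  });

  casper.then(function () {
    phantomcss.compareAll();
  });

  casper.run(function () {
    casper.test.done();
  });
});
```

Each viewport and each component selector you care about gets its own `phantomcss.screenshot()` call; `compareAll()` then reports every mismatch in one pass.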

This was the first run, so PhantomCSS just created the pattern (baseline) screenshots.

We make a change in the CSS code that shifts the modal content text a bit to the right:

.content {
  padding-left: 32px;
}

Now we run the tests again:

Nice! PhantomCSS recognized that the initial pattern is broken. Besides, it placed a visualization of the mismatch into ./failures.

Thus, we were notified about the problem early. So we fix the style (just remove the padding) and run the tests again to verify that the initial state is back:

Obviously, PhantomCSS doesn't fully guarantee that the look & feel didn't change during refactoring. For example, it cannot check animation styles. But it's still an indispensable tool that makes a developer's life much easier.

Who's the dude?

Dmitry Sheiko is a web developer living and working in Frankfurt am Main, DE