Mostly on hacking things

TL;DR I agree that setting up your first test with CSS Critic can be a hurdle that needs overcoming. That’s why I created csscritic-examples.

One benefit of working at ThoughtWorks is that you get to see many different projects. In my last two years I’ve seen three different implementations of a user-facing application and, more importantly, three different ways of delivering an application’s UI.

In the early phase of each project I’ve sat down and mulled over how to set up the first user interface tests.

While CSS Critic tries to be as simple as possible to set up, your choice of application framework heavily influences the integration of the tool. Unlike other testing solutions (e.g. Selenium-based) that fire up your application through a web server, CSS Critic wants to be a lightweight solution by picking up small components of your application for testing. This forces the developer to carefully wire it into the development setup.

Put more clearly, it needs you, the developer, to think about the components of your UI architecture. Once parts of the UI are split into different files and dependencies are made clear (say, for example, your HTML template needs an adjacent JavaScript view implementation) you can define small and independent tests. This is the basic formulation of how CSS Critic would like to work.

The latest application I have worked on was an AngularJS-based single-page application. Working with an opinionated framework like AngularJS, I realized how difficult it can be to tap into the inner bits of a framework. While such a framework is designed to be easy to set up and use for day-to-day work, accessing its inner components (which for the purpose of UI testing means the templating system and the view binding) can be difficult.

Gathering an extensive set of examples can only be a community-based effort and so I welcome contributions. If you, dear user of CSS Critic, have a new set-up to share, please start a pull-request and have others benefit from your good work.

This fall I gave a talk at JSConf EU about my journey developing rasterizeHTML.js, and now the video is finally out, yay! The guys have done a really good job of recording both the actual talk and the slides.

In the talk I cover my motivation behind developing the “prollyfill” and describe the techniques used to work around the security and stability issues imposed by the browsers. I hope some of the fun I had developing the project is expressed in the talk.

A few weeks later some of the content is already outdated. One big change is that I managed to fix the worst HTML to XHTML conversion issues. I just released xmlserializer, a pure JavaScript implementation to replace the browser’s built-in window.XMLSerializer, which has serious limitations and wasn’t able to convert some major parts of the documents. I can now render pages like twitter.com or flickr.com, and up to 4 MB of HTML, JS & CSS is converted on the fly by the library. Even the old Monster Madness website, recently re-discovered on Hacker News and now more than 15 years old, renders fine (sadly the site was taken down last week, but I’ll keep the local copy I made for future generations to see 🙂).

From my experience you want to put almost everything in a software project under version control: not only your production code, but also build and deployment scripts. This doesn't always prove to be straightforward. The SCM Sync Configuration Plugin is a beautiful plugin for versioning all Jenkins configuration under Git (or SVN). Here are my quick notes on how I got it running on my newest project.

Install the plugin through the normal Jenkins process

As described here, create an SSH key for the Jenkins box and register it as a "deploy key" with Github:

$ sudo -u jenkins bash
$ ssh-keygen

Follow the steps and create a key without a passphrase. Now copy the contents of ~/.ssh/id_rsa.pub to https://github.com/YOUR_USER/YOUR_REPO/settings/keys

The path under 3. should point to your Jenkins home folder. If your setup is similar to mine, you might not have a regular Jenkins user that you can log in with. Faking the HOME directory was the quickest way to get Git to accept the parameters.

This should be enough to have the given user now push to Github whenever somebody changes something in the configuration.

In case this post already shows up on the TW blog aggregator let me introduce myself very quickly: I’m a developer based in the Hamburg office and currently interested in solving the testing gap for the front-end.

For some time now I have been wondering why we test our source code so thoroughly but when it comes to CSS we just plainly stop caring about it.

Maybe I'm wrong, I'm still relatively new to the TDD business, but looking at my colleagues, everybody is quite eager to have their Java or JavaScript code covered. But when it comes to CSS, there isn't much help around for testing.

Looking at the test pyramid, it is mentioned that tests through the UI are brittle: you are testing the whole stack from top to bottom, and anything anywhere can go wrong. However, that doesn't mean that testing the UI itself needs to be similarly brittle. In fact you can mock out the underlying functionality that the process rendering the UI depends on.

A broken UI can break the user experience just like faulty functionality (i.e. source code) does. Especially in a bigger project where several people are involved, possibly across teams, it is hard to keep the UI consistent and error-free.

In my current project a glitch in the UI can keep the product owner from pushing the next release candidate to production. And there are several teams that together deliver a single page to the user, meaning that bits of the page including the layout come from different sources. In the end we need to make sure that everything comes together just right.

On top of that there is this browser issue. Each browser renders a page quite differently. Consistently checking that changes don't break the layout in browser X can be a very tedious manual task.

I've heard from some people that Selenium is used to do a screenshot comparison, i.e. regression testing on reference images. One example is Needle. There have been undertakings to test actual values of properties on DOM elements, e.g. at Nokia Maps.

Why am I saying all that? Because I'm currently looking into developing yet another CSS testing tool that I want to share with you.

My take on this problem builds on an image comparison technique similar to the Selenium stuff. However my approach is to keep the stack simple and to make it dead simple to use: everything should be done from inside a browser window.

With the feedback from my colleagues at ThoughtWorks I've set up a small project on Github to implement an experimental solution with the goal of driving out a feasible solution.

The steps to verify a layout should be pretty straightforward: A new page (either from production or a mock page) that includes the CSS under test is registered with a "regression runner". That is a simple HTML page running the test suite (if you know Jasmine and its SpecRunner.html you get the point). On the first run the page under test is rendered and can be saved as a future reference. In subsequent runs this image is what the page is compared against. Running the tests is as simple as opening the runner in a browser. If the layout changes, the regression test will fail. If the change was intentional, a new reference needs to be created; if not, you have found your bug.

Technically this works by rendering the page under test to a HTML5 canvas element inside the browser and using a handy JS library for doing a diff between the canvas and the reference image.
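The comparison itself boils down to a per-pixel diff of two RGBA buffers, which is what image diff libraries do under the hood. Here is a minimal sketch of that idea; the function name and the tolerance parameter are my own, not part of any particular library:

```javascript
// Compare two RGBA pixel buffers (shaped like canvas getImageData().data).
// Returns the number of pixels whose channel difference exceeds the tolerance.
function countDifferingPixels(pixelsA, pixelsB, tolerance) {
    if (pixelsA.length !== pixelsB.length) {
        throw new Error("Buffers must describe images of the same size");
    }
    var differing = 0;
    // Step through the buffer one pixel (4 channels) at a time
    for (var i = 0; i < pixelsA.length; i += 4) {
        var pixelDiffers = false;
        for (var channel = 0; channel < 4; channel++) {
            if (Math.abs(pixelsA[i + channel] - pixelsB[i + channel]) > tolerance) {
                pixelDiffers = true;
            }
        }
        if (pixelDiffers) {
            differing += 1;
        }
    }
    return differing;
}

// Two 1x2 pixel images: first pixel identical, second pixel differs
var reference = [255, 0, 0, 255, 0, 255, 0, 255];
var current   = [255, 0, 0, 255, 0, 0, 255, 255];
console.log(countDifferingPixels(reference, current, 0)); // 1
```

A test would then simply fail when the count of differing pixels is above zero (or above some small threshold to allow for anti-aliasing noise).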

Open points: So far this works in Firefox only, and as browsers do not render pages consistently across systems, the solution is local only.

I needed an element on a page to be resizable by the user. jQuery UI comes with a “resizable” implementation that, triggered on an element, will add a handle to it so that the user can resize by dragging and dropping. Other tools also exist.

However, the code I am writing does not depend on any major JS library. Just pulling in an additional 120 KB, for example (minified jQuery plus a minified & minimal jQuery UI), for this one feature, not to speak of adding even more dependencies, was out of the question.

So here is a simple example to build on top of the CSS3 resize property.

First we enable the native resize handling by setting the appropriate CSS. Then we set up a listener for the onmouseup event that signals a possible drag & drop:
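A minimal sketch of those two steps could look as follows. The function and callback names are my own, and the element is only assumed to expose the standard DOM interface; the approach follows the post's description:

```javascript
// Enable native resizing on the element and call onResized(width, height)
// once the user releases the mouse after actually changing the element's size.
function makeResizable(element, onResized) {
    // CSS3 native resize handle; requires an overflow value other than 'visible'
    element.style.resize = "both";
    element.style.overflow = "auto";

    var lastWidth = element.clientWidth,
        lastHeight = element.clientHeight;

    element.addEventListener("mouseup", function () {
        // Only report when the size changed during the drag
        if (element.clientWidth !== lastWidth || element.clientHeight !== lastHeight) {
            lastWidth = element.clientWidth;
            lastHeight = element.clientHeight;
            onResized(lastWidth, lastHeight);
        }
    });
}
```

In the browser you would call something like `makeResizable(document.getElementById("panel"), function (w, h) { ... })`. Listening on mouseup, rather than on every mouse move, matches the post's point of only reacting once resizing is finished.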

Based on http://jsfiddle.net/zFVyv/10/ I tried to work with the newer interfaces available, however Chrome does not trigger them correctly in connection with the resize property. The current solution has the benefit of only firing once resizing is finished. Happy resizing!

I wanted to have my HTML documents in a rasterized form and had a look at the browser’s canvas. It turns out that you can draw a lot of things with a canvas inside an HTML page, however you cannot easily draw an HTML page inside a canvas.

The idea is pretty simple. SVG has a <foreignObject> element which allows almost any HTML to be embedded. Such an SVG can then be easily drawn to the canvas using context.drawImage().
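The wrapping step can be sketched like this. The function name is my own; the namespaces are the standard SVG and XHTML ones, and the browser-only drawing part is shown as a comment since it cannot run outside a browser:

```javascript
// Wrap an XHTML fragment in an SVG <foreignObject> so the browser's SVG
// renderer can rasterize it. The markup must be well-formed XML (XHTML).
function wrapInSvg(xhtml, width, height) {
    return '<svg xmlns="http://www.w3.org/2000/svg" width="' + width + '" height="' + height + '">' +
        '<foreignObject width="100%" height="100%">' +
        '<div xmlns="http://www.w3.org/1999/xhtml">' + xhtml + '</div>' +
        '</foreignObject>' +
        '</svg>';
}

// In the browser the resulting SVG can then be drawn to a canvas roughly so:
//   var image = new Image();
//   image.onload = function () {
//       canvas.getContext("2d").drawImage(image, 0, 0);
//   };
//   image.src = "data:image/svg+xml;charset=utf-8," +
//       encodeURIComponent(wrapInSvg(markup, canvas.width, canvas.height));
```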

There is only one issue: rendering SVGs is very restrictive. Loading of external resources is not allowed. The only way out is embedding CSS and images into the document, the latter by using data: URIs. If embedding of resources is done dynamically via JavaScript, there are further restrictions: unless techniques such as CORS are used, you may only load content from the same origin.
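Building such a data: URI for a fetched resource is just base64 encoding plus a MIME type prefix. A small sketch, assuming the resource's bytes are already at hand; Node's Buffer is used here for brevity, in the browser you would typically go through btoa() or a FileReader:

```javascript
// Build a data: URI for a resource so it can be embedded inline in the
// document instead of being fetched (which SVG rendering forbids).
function toDataUri(mimeType, content) {
    var base64 = Buffer.from(content).toString("base64");
    return "data:" + mimeType + ";base64," + base64;
}

// e.g. an image's bytes become a self-contained src attribute value:
//   img.src = toDataUri("image/png", pngBytes);
```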

Long story short, I sat down and started a small library that takes care of all the stuff that is needed to draw HTML to the canvas. Most of the code deals with finding elements in the DOM that need to be replaced, loading these resources and embedding them in the document. There are three convenience methods for drawing a DOM, an HTML string and/or a URL to the canvas easily.

After playing around with this for some days now I should mention that browser support seems a bit fragile. Firefox and Chrome are not consistent in rendering background images, and sometimes need a gentle reload for doing so. Both Chrome and Safari have an issue with the origin-clean flag which made testing a bit more difficult. Stuff that turns up will be noted down in the wiki on Github. You can find the code here. I should probably file a few bug reports as a follow-up.

For me it was the first time dealing with a lot of asynchronous calls and it was fun to see how easily it could be done with JavaScript. Using JSHint and PhantomJS to run the Jasmine tests was easy and it just works. rasterizeHTML.js also uses imagediff.js for testing that the results look just like the reference images, and Travis CI makes sure I don’t break the build 🙂 What proved difficult during testing and also implementation was that all three browsers, Firefox, Chrome and Safari, behaved differently (and basically PhantomJS as a fourth). This is especially interesting for the two WebKit-based browsers: Chrome supports the recently deprecated BlobBuilder interface, while Safari is waiting for the official Blob specs to come. In some respects Chrome was more similar to Firefox than to WebKit. One way of assuring full test coverage was to fall back to a simple manual test on Chrome and Safari for some code parts, due to said origin-clean flag.

Whether you do Test Driven Development or just write your tests last, hopefully you have a good unit testing suite covering your code. It is very likely that you end up with unit test results in the JUnit XML format. Here is a short snippet on how to convert your XML reports into readable HTML.

In my current project we have Gradle as a build tool, and since it is easy to use ant from there, we will use the nice JUnitReport task. The main issue was getting the classpath right, and the solution to that was to redefine the ant task so as to pass the right path along.

TL;DR If you have tests for Javascript code written in QUnit & Jasmine that depend on the Document Object Model (DOM), here is a way to set up Travis CI using PhantomJS.

My colleagues recently made me aware of a relatively new continuous integration software called Travis CI which, originally built to serve the Ruby community, is a distributed build service able to run code in various languages, including Python & Javascript. As far as I know, it currently only works together with Github, so your code needs to be hosted there.

As Travis’ workers (the ones running the actual build) come with node.js included, I played around a bit getting my QUnit tests to run with jsdom and the QUnit node adaptation. While there are some guides out there on how to test your Javascript with node.js, it gets complicated when depending on the DOM, which most likely is the case when you are developing a plugin for jQuery. However, after reading criticism on testing against something that the original system didn’t target (or are you running jQuery on the server side?) I gave up on that pretty quickly.

Now, in a different context I came upon PhantomJS, a headless browser stack based on Qt’s Webkit. It provides an executable that can work without a graphical system (hence the name headless) and is perfectly suited for testing environments. Ariya, the guy behind PhantomJS, is clearly aware of that and already provides the proper integration for running tests based on QUnit and Jasmine. The test runner is a neat piece of code that just scrapes the QUnit output from the generated HTML. Installing that locally was easy and running the test suite provides a short output on how many tests were run and how many failed, if any.
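The scraping idea itself is simple enough to sketch: once QUnit is done, its result element contains the pass/fail counts as marked-up spans, and the runner pulls them out of the page's HTML. A toy version of that extraction (the exact markup shape is an assumption based on QUnit's #qunit-testresult element, so treat the selector details as illustrative):

```javascript
// Extract passed/failed/total counts from QUnit's result markup, the way
// a PhantomJS-based runner scrapes the page after the suite has finished.
function parseQUnitResult(html) {
    var match = html.match(
        /<span class="passed">(\d+)<\/span>[\s\S]*?<span class="total">(\d+)<\/span>[\s\S]*?<span class="failed">(\d+)<\/span>/
    );
    if (!match) {
        return null; // suite not finished, or markup not found
    }
    return {
        passed: parseInt(match[1], 10),
        total: parseInt(match[2], 10),
        failed: parseInt(match[3], 10)
    };
}
```

The runner would then exit with a non-zero status when `failed` is greater than zero, which is what makes the whole thing usable in CI.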

The problem was getting PhantomJS running on Travis CI. Travis CI comes with a good set of software (and already includes some of PhantomJS’ dependencies); so far no one has written a cookbook for PhantomJS though. However, this guy came up with an easy solution; after all, the worker is just a (virtual) Ubuntu machine and you can install anything on it.

So here is the quick run through: In the .travis.yml which describes the build, we

run a small install script setting up the remaining dependency of PhantomJS and PhantomJS itself,

start up a virtual framebuffer (xvfb, “headless” is not completely true when on Linux) running on display 99

We are ready to test this on Travis. If you haven’t registered there yet, get an account, set up the hook by visiting your profile page, and commit your own .travis.yml together with the PhantomJS install script and the relevant test runner described above. You should pretty quickly find your project in the build queue on travis-ci.org.

I have been running a wiki with structured data for some months now. It’s called CharacterDB and runs on MediaWiki with the SemanticMediaWiki framework. While I was happy to employ one of the best wikis out there (after all, Wikipedia runs on MediaWiki) and I have good contacts with some of the guys developing SemanticMediaWiki, I do see some limitations for the task at hand. It makes me want to get rid of the current stack. The two major issues with the solution so far are scalability (> 60,000 entries; the actual count of database entries is far higher due to the RDF triple approach) and the difficulties I have with adjusting the configuration to make the input forms work the way I want.

The natural step for me was to look into a solution based on Django, and more importantly one using existing components. Looking into wiki apps for Django I found quite a few candidates – there is a fine comparison available under http://djangopackages.com/grids/g/wikis/. However I didn’t like what I saw. What I need is a simple component that would make my models editable by anybody, providing full wiki features. Some of the existing apps implement a full standalone wiki, including authentication, own markup parsers, …

It should get straight to the point: if it’s not a wiki feature, it shouldn’t be in.

django-wikify wouldn’t exist without django-reversion, a neat app that adds versioning to your django models. It has seen many improvements lately and is just the right component to build a wiki on. All I needed to build on top was basically view logic.

Wiki markup can easily be integrated using django’s native markdown integration. No need to develop any additional code. What is missing on my list is support for subscriptions. That, however, I consider an orthogonal feature.

The current code already provides the minimal working set. To set up a wiki yourself you first need to define a model (after all, django-wikify is about giving you the flexibility of your own page model). Here is a simple page model, with a title and content:

What you see here is a simple way to use wikify. Just decorate the view and pass the model with it. The only thing you need to take care of is to pass the object’s primary key as ‘object_id’, similar to Django’s default views. The project code includes an example django project as a short demo using the code shown here.

The way the @wikify decorator works, you do not need to change your urls.py definition. The action triggered by the user is passed through a GET variable called ‘action’. In case you want to provide your own template, just link to the url ‘?action=edit’ and you are done.

Next features on my list are support for a diff view based on my side-by-side diff implementation, and then improving performance through ETags, caching, … After that, better late than never, unit tests.