Web Development

Mouse Tracking with JavaScript and Perl

The Perl Journal April 2003

By Peter Sergeant

Peter is employed as a web developer by Virus Bulletin (http://www.virusbtn.com/). He can be contacted at tpj@clueball.com.

You've spent months juggling requirements from many departments around the company, liaising with graphic designers, battling various browser quirks, getting copy approved, and churning out code. You've bravely battled Apache, assimilated a templating ideology, and written a flotilla of scripts to turn access logs into pretty graphs to show management. The resulting web site is beautiful and is backed by enough cool code to ensure your bragging rights for the next five years at your local Perl Mongers meetings.

Trouble is, none of your users can find anything. It turns out that nobody else quite "gets" your idea of basing the site navigation around a fishing analogy, and judging by your pretty graphs, not a single visitor has found the online store. Seems that in the rush to production, nobody thought about site usability.

Something needs to be done, and fast. But usability testing can be expensive, and good usability advice can be hard to find. What you need is a way to watch your users navigate the site, find out where they hesitate, figure out what elements draw their attention and which they ignore, all without requiring your intervention or extra funds. Even in this situation, Perl can help.

The Plan

In this article, we're going to build a system to track users' mouse-trails across pages on a web site, and then graph them in Perl. We'll look briefly at how we record the raw data (using JavaScript), how we collect it, and how we can use Perl to generate useful and management-friendly graphs to track down usability problems. We'll end the article with a quick look at privacy issues raised by doing this.

Gathering the Data

Please note: The JavaScript below is written to work with Internet Explorer. It's trivial to port to Mozilla or Opera, but it won't work as is in either.

Like any other user-interface toolkit, JavaScript in the browser allows you to set handlers for many types of user interaction. We're interested in knowing when the user has moved the mouse, and when the user clicks the mouse. Setting these handlers up is easy:
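A minimal sketch of what that setup might look like (the function names getMouseXY and setMouseClick come from the article; their bodies are filled in later, and the typeof guard is only there so the snippet also loads outside a browser):

```javascript
// Hypothetical sketch of the handler setup. In Internet Explorer's
// event model of the era, handlers are assigned directly to properties
// of the document object.
function getMouseXY()    { /* record the mouse position -- defined later */ }
function setMouseClick() { /* record a mouse click -- defined later */ }

if (typeof document !== 'undefined') {
  document.onmousemove = getMouseXY;    // called on every mouse movement
  document.onclick     = setMouseClick; // called on every click
}
```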

getMouseXY and setMouseClick are the names of the functions we want to call. If we can have a function invoked each time the mouse is moved, and we can retrieve the coordinates of the mouse, then it stands to reason that we could just create a large data structure with an entry for each time getMouseXY and setMouseClick are invoked, and pass it back to the server. Sadly, it's a little more complicated.

First, we need timing information about each movement we record so that we can accurately recreate the user's browsing experience: if a user's mouse hovered stationary over a part of the page for 30 seconds, we want to know, rather than just knowing their mouse was there at some point. Second, if we collect coordinates for every movement the mouse makes, we will end up with far too much data, so we need to decide just what level of granularity we want, or passing the data around becomes impractical. Finally, passing the data back to the server doesn't happen automatically. We need a nonintrusive way to do it.

Storing our data in a data structure that can understand time (well, one that can understand numeric representations of time, anyway) easily solves the first two problems. Simply put, if we index our mouse coordinates against time and represent time as a number, we can store our data in an array. The code below clarifies a little:
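A sketch of the recording functions, under a few stated assumptions: the names movementStore, getMouseXY, setMouseClick, and currentTime follow the article, but the "WxH"-style token format, the use of clientX/clientY with IE's window.event, and the negative index for clicks are guesses at the details:

```javascript
// movementStore maps a coarse timestamp (10 ms slots since page load)
// to the last mouse position seen in that slot.
var movementStore = [];
var pageLoadTime = new Date().getTime();

function currentTime() {
  // number of 10 ms slots elapsed since the page loaded
  return Math.floor((new Date().getTime() - pageLoadTime) / 10);
}

function getMouseXY(e) {
  e = e || window.event; // IE exposes the current event globally
  // Later movements within the same 10 ms slot overwrite earlier ones.
  movementStore[currentTime()] = e.clientX + 'x' + e.clientY;
}

function setMouseClick(e) {
  e = e || window.event;
  // Assumption: clicks go under a negative index, so a later movement
  // in the same slot can't overwrite them.
  movementStore[currentTime() * -1] = 'c' + e.clientX + 'x' + e.clientY;
}
```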

Here, we take the number of milliseconds elapsed between the page loading and the current time, divide it by 10, and use the result as the index under which we store the coordinates. If the user moves the mouse quickly, the last coordinates gathered in that 10-millisecond window are the ones we keep. To record clicks, we use an almost identical method (as we're storing them in the same data structure), only we store them at currentTime * -1, so they won't be overwritten.

When it's time to send the data back to the server, we go through movementStore, adding the entries to another string. If there's no data at a specific entry, it means the user hadn't moved the mouse, so we add a single character denoting this. For reasons explained later, when the string gets to 3500 characters, we stop. We also add the dimensions of the user's screen to the string, as that information will come in handy.
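The serialization step might look something like the following sketch. The per-slot token format, the '-' placeholder, and the exact layout of the screen-size prefix are assumptions; the 3500-character cutoff and the screen dimensions are from the article:

```javascript
// Walk the time-indexed store, emit one token per 10 ms slot ('-' when
// the mouse didn't move), stop near 3500 characters to stay under the
// 4 KB cookie limit, and prefix the user's screen size.
function buildDataString(movementStore, screenW, screenH) {
  var out = '';
  for (var t = 0; t < movementStore.length; t++) {
    var entry = movementStore[t];
    out += (entry === undefined) ? '-' : entry + '|';
    if (out.length >= 3500) break; // cookies max out around 4 KB
  }
  return screenW + 'x' + screenH + ':' + out;
}
```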

Now we need to pass this string back to the server. GET and POST are out, as they'd involve dynamically changing the links on our page, and that doesn't qualify as nonintrusive. Therefore, when the user leaves the page, we set a cookie containing the data string we created. A cookie can hold a maximum of 4 KB, hence the aforementioned restriction on the number of characters. If the next page the user lands on is one on our web site, the cookie is retrieved by a tiny CGI script pretending to be an image, and added to a flat file for later use.
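The browser side of that hand-off could be sketched as below. The cookie name mousetrail is invented (the article doesn't give one), and window.mouseTrailData stands in for the serialized string built earlier; escape() is dated, but it matches the IE-era JavaScript the article targets:

```javascript
// Stash the data string in a cookie as the user leaves the page; a tiny
// CGI script posing as an image reads it back on the next page view.
function storeTrailCookie(dataString) {
  // Escaping keeps ';' and '=' in the data from breaking the cookie.
  document.cookie = 'mousetrail=' + escape(dataString) + '; path=/';
}

if (typeof window !== 'undefined') {
  window.onunload = function () {
    // mouseTrailData is a hypothetical global holding the serialized trail
    storeTrailCookie(window.mouseTrailData || '');
  };
}
```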

Working with the Data

At this point, we have lots of options. On a site with fairly high traffic, you'll have built up quite a large selection of data in very little time. So what can we do with it that's useful? When I first built this system, I did two things with it. First, I wrote some more JavaScript and some Perl so that we could look at individual (and effectively anonymous) users navigating the page: I'd recreate their experience by using a small image to represent the mouse and exactly emulate their mouse as they accessed the page. That approach requires a little Perl and a lot of JavaScript, so we're not going to pursue that avenue in this article.

The second thing I did was to create an image depicting hot spots on the page, by taking a large amount of user data, splitting the page into little chunks, and seeing how many times users' mice had been logged in each section. When navigating a web site, users' mouse movements tend to follow their eyes. We can exploit this tendency to see which parts of the page the users are concentrating on and which they are ignoring. I found out, to my dismay, that users were barely considering my (in my opinion) ultra-useful quick-links section (see Figure 1). I was able to experiment with putting it in different places on the page to see where it got the most user attention.

To plot the data, we're going to use Imager, an open-source Perl image-manipulation library. We'll take an image of the web site at a certain resolution to start with, and cover it in blue squares, the opacity of which will show the "popularity" of that square.

Creating Our Chart

There are several things we need to consider before jumping straight into chart generation. First, the world isn't perfect. We're going to get people who, despite having a high screen resolution, don't have their browser set to fill the whole screen, so their results are going to be slightly funky, especially if the target page is centered rather than left-aligned. Also, people may leave their browser open while they go and get themselves a coffee; that will result in one user generating a huge number of hits on a very small part of the screen. Still, we want to try and get meaningful results from the data.

For this reason, we're going to do two things to normalize our data a little. We're only going to allow a given user to affect the graph a certain amount: each user has an equal number of "points" they can use on the graph in total. For example, if we have 10 users, and each user has a maximum of 10 points they can assign to a square, the maximum rating for a square is going to be 100. We'll also ignore all hits to a square over a certain number. This number is fairly arbitrary, and one you'll want to experiment with a little. We're going to use 20 as our starting point.
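The article's program does this in Perl; purely as an illustration of the arithmetic, here is a JavaScript sketch of one way to read the two rules (the input shape, the names, and the 100-point budget are all invented for the example):

```javascript
// 'users' maps a user id to per-square hit counts keyed by "x,y".
var MAX_HITS_PER_SQUARE = 20; // the arbitrary cap from the text
var POINTS_PER_USER = 100;    // every user gets the same total weight

function mergeUsers(users) {
  var totals = {};
  for (var id in users) {
    var squares = users[id];
    // Rule 2: ignore hits beyond the cap, so an idle mouse parked on
    // one square can't dominate it.
    var capped = {}, sum = 0;
    for (var key in squares) {
      capped[key] = Math.min(squares[key], MAX_HITS_PER_SQUARE);
      sum += capped[key];
    }
    // Rule 1: scale so each user contributes the same total number of
    // points to the graph, however long they browsed.
    for (key in capped) {
      totals[key] = (totals[key] || 0) + (capped[key] / sum) * POINTS_PER_USER;
    }
  }
  return totals;
}
```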

At this point, we can begin writing our program, which is shown in Listing 1. The program needs to read in our data file created by the aforementioned JavaScript/CGI combination. To simplify matters, we're also only going to deal with people of a certain screen resolution to begin with, so we weed out other entries. This is dealt with in line 34 of the Listing.

As we go through each user, we need to make a note of the last coordinates we read for them, so that if we come across a blank entry that denotes that the cursor didn't move, we know where it was. We create a hash for each user, with the coordinates for the top-left pixel of each square as keys, and increment the value of each hash entry when we find a match. We define the holders for the last coordinates on line 37 and create the user hash on line 38. We then merge the user's hash into the main hash in lines 60-69.

The next step is to build the image itself. We start off by reading the image of the page into an Imager object (line 80) and make sure the image can handle transparency by adding an alpha channel (line 81). We then go through the hash, key by key, and add a blue box for each entry in the hash. To get the opacity for a given number of hits, we keep a running total of the highest number of hits we have (lines 65-67), divide the maximum amount of opacity we want by that (we'll use 200 in this case, but again, that's fairly arbitrary), and then multiply the hit count by it when creating the box fill (lines 97-100). We then draw the box (lines 103-111), and output the image (line 116).
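The opacity arithmetic described above can be shown in isolation. The real program does this in Perl with Imager; this JavaScript fragment only demonstrates the scaling, with 200 as the maximum opacity the article uses:

```javascript
// The busiest square gets MAX_OPACITY; everything else scales linearly
// against the highest hit count seen.
var MAX_OPACITY = 200; // out of 255 -- the article calls this fairly arbitrary

function opacityFor(hits, maxHits) {
  return Math.round((MAX_OPACITY / maxHits) * hits);
}
```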

Interpreting the Data

I've included an image created from data gathered on the Virus Bulletin web site (http://www.virusbtn.com/) over a three-day period (see Figure 1). Some time has already been spent analyzing data from this site and improving it, but let's see what we can learn from the image.

The navigation bar at the top is clearly the darkest area, and for fairly obvious reasons. If we look down the page a little, we see that our quick links and search box, considering they're "below the fold," are fairly popular. Noticeably, however, barely anyone seems interested in the links on the Slammer story: a fair number started reading it (there are some dark patches near the top), but not so many followed through. Perhaps for the next "Virus profile," it's not worth including links and there should be more copy instead.

(Note how the areas under the main navigation bar are darker than those on the top. People will slow down their mouse as they approach a link, so this suggests people are approaching the links from the bottom. Why? Think about the shape of the default Windows cursor...)

Other Projects

The only other project of this ilk that I've found is called "Cheese" (http://cac.media.mit.edu/cheese.htm), which came from MIT. Cheese aims to look for patterns in the way that users browse the Web, in order to give people a more personalized browsing experience. However, the project seems to have somewhat faded from attention.

Privacy

So is it wrong to collect this data from users? If you're collecting this sort of data, make sure your privacy policy spells out why you're collecting it and what you intend to do with it. Go as far as you can to make sure the data isn't personally identifiable; there's little reason, for example, to capture the IP address as well. At the end of the day, your browser gives away an awful lot of information about you anyway: the links you take through the site, the browser software you're using, your IP address (thus, your ISP and quite possibly your physical location). If you can convince people you're using the data responsibly, you'll run less risk of people taking issue with it.
